The OWLv2 class implements a wrapper around the OWLv2 model, which detects objects
in RGB images based on a text prompt.
Parameters
use_local (bool): If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to False.
Confidence threshold for bounding box detection.
rgbimage (np.ndarray): The input RGB image of shape (M, N, 3).
prompt (str): Text prompt for object detection. Multiple prompts can be separated by a ",".
Returns
List[float], List[float], List[str]
Returns three lists: bounding box coordinates, confidence scores, and label strings.
from grid.model.perception.detection.owlv2 import OWLv2
from grid.robot.wheeled.airgen_car import AirGenCar  # import path assumed; adjust to your GRID install

car = AirGenCar()
# We will be capturing an image from the AirGen simulator
# and run model inference on it.
img = car.getImage("front_center", "rgb").data
model = OWLv2(use_local=False)
box, scores, labels = model.run(rgbimage=img, prompt="car, person")  # example prompt; separate multiple prompts with a ","
print(box, scores, labels)
# To run the model locally instead, set use_local=True
model = OWLv2(use_local=True)
box, scores, labels = model.run(rgbimage=img, prompt="car, person")
print(box, scores, labels)
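The three returned lists are parallel: the i-th box, score, and label describe the same detection. As a minimal sketch of consuming these outputs (not part of the GRID API), the helper below filters detections by confidence and overlays them on the image with OpenCV; it assumes each box is given as [x_min, y_min, x_max, y_max] in pixel coordinates.

import cv2

def draw_detections(image, boxes, scores, labels, min_score=0.3):
    # Overlay detections on a copy of the RGB image.
    # Assumes each box is [x_min, y_min, x_max, y_max] in pixels.
    out = image.copy()
    for b, score, label in zip(boxes, scores, labels):
        if score < min_score:
            continue  # skip low-confidence detections
        x0, y0, x1, y1 = (int(v) for v in b)
        cv2.rectangle(out, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2.putText(out, f"{label}: {score:.2f}", (x0, max(y0 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return out

annotated = draw_detections(img, box, scores, labels)
cv2.imwrite("detections.png", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))  # OpenCV writes BGR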
This code is licensed under the Apache 2.0 License.