from grid.model.perception.detection.gdino import GroundingDINO

car = AirGenCar()

# We capture an image from the AirGen simulator
# and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = GroundingDINO(use_local=False)
box, scores, labels = model.run(rgbimage=img, prompt=<prompt>)
print(box, scores, labels)

## if you want to use the model locally, set use_local=True
model = GroundingDINO(use_local=True)
box, scores, labels = model.run(rgbimage=img, prompt=<prompt>)
print(box, scores, labels)
The GroundingDINO class implements a wrapper for the GroundingDINO model, which detects objects in RGB images based on text prompts.
It returns three lists: bounding box coordinates, confidence scores, and label strings.
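Because run() returns parallel lists, a common next step is to drop low-confidence detections. The sketch below is a hypothetical post-processing helper, not part of the GRID SDK; it only assumes the three lists are index-aligned, as described above.

```python
def filter_detections(boxes, scores, labels, threshold=0.5):
    """Keep only detections whose confidence meets the threshold.

    boxes, scores, and labels are assumed to be index-aligned lists,
    matching the output shape of GroundingDINO.run().
    """
    kept = [(b, s, l) for b, s, l in zip(boxes, scores, labels) if s >= threshold]
    if not kept:
        return [], [], []
    boxes_f, scores_f, labels_f = zip(*kept)
    return list(boxes_f), list(scores_f), list(labels_f)

# Example with dummy values shaped like the model output
boxes = [[10, 20, 110, 220], [30, 40, 60, 90]]
scores = [0.92, 0.31]
labels = ["car", "person"]
print(filter_detections(boxes, scores, labels))
# -> ([[10, 20, 110, 220]], [0.92], ['car'])
```

A threshold around 0.3-0.5 is a typical starting point for open-vocabulary detectors, but the right value depends on the prompt and scene.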
This code is licensed under the Apache 2.0 License.