The SAM2 class implements a wrapper for the Segment Anything 2.1 (SAM2) model, which segments objects in RGB images
and videos based on input prompts.
```python
import numpy as np

from grid.model.perception.segmentation.sam2 import SAM2
# Note: the exact import path for AirGenCar may differ in your GRID version.
from grid.robot.wheeled.airgen_car import AirGenCar

car = AirGenCar()

# Capture an image from the AirGen simulator and run model inference on it.
img = car.getImage("front_center", "rgb").data

prompts = np.array([[616, 208]])  # point prompt as (x, y) pixel coordinates
labels = np.array([1])            # 1 marks the point as foreground

model = SAM2(use_local=False)
result = model.run(rgbimage=img, prompts=prompts, labels=labels)
print(result.shape)
```
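The returned segmentation mask can be post-processed with plain NumPy. A minimal sketch, assuming the model yields a boolean mask of shape `(H, W)` aligned with the input image; `apply_mask` is a hypothetical helper, not part of the SAM2 API:

```python
import numpy as np

def apply_mask(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out pixels outside the mask (hypothetical post-processing helper)."""
    # Broadcast the (H, W) mask across the 3 color channels.
    return rgb * mask[..., None].astype(rgb.dtype)

# Synthetic stand-ins for the simulator image and the SAM2 output mask.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

segmented = apply_mask(img, mask)
print(segmented[1, 1])  # → [255 255 255]  (inside the mask, pixels kept)
print(segmented[0, 0])  # → [0 0 0]        (outside the mask, pixels zeroed)
```

This keeps only the segmented object's pixels; swapping the multiply for `np.where(mask[..., None], rgb, background)` would composite onto a background instead.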
This code is licensed under the Apache 2.0 and BSD-3-Clause licenses.