The GroundedSAM class provides a wrapper for the GroundedSAM model, which segments objects in RGB images based on text prompts.
use_local (bool): If True, the inference call is run on the local VM; otherwise it is offloaded onto GRID-Cortex. Defaults to False.
Note that this model is currently not available via Cortex, so inference must be run locally for now.
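As a minimal sketch of what this means for instantiation (the import and constructor call mirror the example below; because Cortex does not currently serve this model, only the local mode will work):

```python
from grid.model.perception.segmentation.gsam import GroundedSAM

# Run inference on the local VM; required while the model is unavailable via Cortex.
model = GroundedSAM(use_local=True)

# Offload inference to GRID-Cortex (the documented default) once the model is served there:
# model = GroundedSAM(use_local=False)
```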
rgbimage (np.ndarray): The input RGB image of shape (M, N, 3).
prompt (str): The text prompt to use for segmentation.
Returns: The predicted segmentation mask of shape (M, N).
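Because the mask shares the spatial dimensions (M, N) of the input image, it can be applied with plain NumPy broadcasting. The sketch below assumes nonzero mask values mark segmented pixels; the exact mask semantics (binary vs. per-pixel labels) are not specified here, so treat it as illustrative:

```python
import numpy as np

def apply_mask(rgbimage: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out pixels that fall outside the predicted segment.

    Assumes `mask` is the (M, N) array returned by GroundedSAM.run and
    that nonzero values denote foreground (an assumption, not documented).
    """
    foreground = mask > 0                    # (M, N) boolean mask
    return rgbimage * foreground[..., None]  # broadcast across the 3 channels
```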
from grid.model.perception.segmentation.gsam import GroundedSAM
from grid.robot.wheeled.airgen_car import AirGenCar  # assumed import path for AirGenCar

car = AirGenCar()

# Capture an image from the AirGen simulator and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = GroundedSAM(use_local=True)  # Cortex does not currently serve this model
result = model.run(rgbimage=img, prompt="car")  # illustrative prompt; replace as needed
print(result.shape)
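To sanity-check the prediction qualitatively, the mask can be displayed next to the captured frame. A short sketch using matplotlib (not part of the GRID example above):

```python
import matplotlib.pyplot as plt

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 4))
ax0.imshow(img)                  # the front_center RGB frame from AirGen
ax0.set_title("input image")
ax1.imshow(result, cmap="gray")  # the predicted (M, N) segmentation mask
ax1.set_title("GroundedSAM mask")
for ax in (ax0, ax1):
    ax.axis("off")
plt.show()
```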
This code is licensed under the Apache 2.0 License.