from grid.model.perception.segmentation.oneformer import OneFormer
# AirGenCar import path assumed from the GRID SDK's AirGen robot module
from grid.robot.wheeled.airgen_car import AirGenCar

car = AirGenCar()

# Capture an image from the AirGen simulator
# and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = OneFormer(use_local=False)
result = model.run(rgbimage=img, mode="semantic")
print(result.shape)
The OneFormer class provides the core functionality for this module: semantic and panoptic segmentation of RGB images.

class OneFormer()

Parameters:
- use_local (boolean, default: False): If True, the inference call is run on the local VM; otherwise it is offloaded onto GRID-Cortex. Defaults to False.
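As a minimal sketch of the use_local flag described above (assuming local model weights are available on the VM when use_local=True), the two configurations differ only in the constructor argument:

from grid.model.perception.segmentation.oneformer import OneFormer

# Default: offload inference onto GRID-Cortex
remote_model = OneFormer(use_local=False)

# Run inference on the local VM instead
local_model = OneFormer(use_local=True)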
def run()

Parameters:
- rgbimage (np.ndarray, required): The input RGB image of shape (M, N, 3).
- mode (str, default: "semantic"): The mode of segmentation. Can be either "semantic" or "panoptic". Defaults to "semantic".

Returns:
- np.ndarray: The predicted segmentation mask of shape (M, N).
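The snippet below is a hedged sketch of the panoptic path: it reuses the documented run() signature with mode="panoptic", but substitutes a random placeholder image for an AirGen capture so it can run standalone.

import numpy as np

from grid.model.perception.segmentation.oneformer import OneFormer

# Placeholder RGB image of shape (M, N, 3); in practice this would come
# from car.getImage(...) as in the example above.
img = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)

model = OneFormer(use_local=False)

# mode="panoptic" is accepted per the reference above; the result is
# documented as an (M, N) segmentation mask.
mask = model.run(rgbimage=img, mode="panoptic")
print(mask.shape)  # expected: (480, 640)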
This code is licensed under the MIT License.