from grid.model.perception.depth.sapiens_depth import SapiensDepth
from grid.robot.wheeled.airgen_car import AirGenCar

car = AirGenCar()

# Capture an image from the AirGen simulator
# and run model inference on it.
img = car.getImage("front_center", "rgb").data

# use_local=True: this model is currently not available via GRID-Cortex
model = SapiensDepth(use_local=True)
result = model.run(rgbimage=img)
print(result.shape)
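For an input image of shape (M, N, 3), result is the predicted depth map of shape (M, N), i.e. one depth value per pixel.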

The SapiensDepth class provides a wrapper around the Sapiens depth estimation model, which predicts a depth map from a single RGB image.

This model is trained specifically for images with humans as the primary subject.
class SapiensDepth()

use_local : boolean, default: False
If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to False.

Note: This model is currently not available via Cortex, so use_local must be set to True.
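Because inference must run locally, the model can be applied to any RGB array, not only AirGen captures. Below is a minimal sketch that loads an image from disk with OpenCV; the file name person.jpg and the use of cv2 are illustrative assumptions, not part of the GRID API.

import cv2
from grid.model.perception.depth.sapiens_depth import SapiensDepth

# OpenCV loads images as BGR; convert to RGB before inference
# ("person.jpg" is a hypothetical example file).
bgr = cv2.imread("person.jpg")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

model = SapiensDepth(use_local=True)  # Cortex offload is unavailable for this model
depth = model.run(rgbimage=rgb)       # (M, N) depth map
print(depth.shape)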

def run()

rgbimage : np.ndarray, required
The input RGB image of shape (M, N, 3).

Returns
np.ndarray
The predicted depth map of shape (M, N).
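The returned depth map can be saved as an image for quick inspection by scaling it to 8-bit range. A minimal sketch using NumPy and OpenCV, continuing from the example above; the min-max normalization is an illustrative choice, not prescribed by the API.

import cv2
import numpy as np

# depth is the (M, N) array returned by model.run(...).
# Min-max scale to [0, 255] for visualization (illustrative, not part of the API).
d = depth.astype(np.float32)
d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
cv2.imwrite("depth_vis.png", (d * 255).astype(np.uint8))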

This code is licensed under the CC BY-NC 4.0 license.