from grid.model.perception.depth.depth_anything import DepthAnything

# AirGenCar is provided by the GRID AirGen API
car = AirGenCar()

# Capture an image from the AirGen simulator
# and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = DepthAnything(use_local=True)
result = model.run(rgbimage=img)
print(result.shape)

The DepthAnything class is a wrapper around the DepthAnything model, which estimates a depth map from a single RGB image. Model configurations are defined per encoder type.

class DepthAnything()
use_local
boolean
default:"True"

If True, the inference call runs on the local VM; otherwise it is offloaded onto GRID-Cortex. Defaults to True.

This model is currently not available via Cortex.

def run()
rgbimage
np.ndarray
required

The input RGB image of shape (M, N, 3).

Returns
np.ndarray

The predicted depth map of shape (M, N).
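The shape contract above can be illustrated with a minimal stand-in. The `run_stub` function below is a placeholder (a per-pixel channel mean), not the real DepthAnything model; it only demonstrates the (M, N, 3) → (M, N) mapping:

```python
import numpy as np

def run_stub(rgbimage: np.ndarray) -> np.ndarray:
    """Stand-in for DepthAnything.run(): maps an (M, N, 3) RGB image
    to an (M, N) single-channel map (here, just the channel mean)."""
    assert rgbimage.ndim == 3 and rgbimage.shape[2] == 3
    return rgbimage.mean(axis=2)

img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
depth = run_stub(img)
print(depth.shape)  # (480, 640)
```

As with the real model, the output drops the channel axis, so `result.shape` matches the input's spatial dimensions.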

This code is licensed under the Apache 2.0 License.