The UniMatch class implements a wrapper for the UniMatch model, which estimates optical flow in videos using a multi-scale transformer-based approach.
use_local (bool): If True, the inference call is run on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to False. Note that this model is currently not available via Cortex.
run() takes the link to the video or the path to the video/image directory, along with mode, the mode of input: either 'video' or 'image'. It returns the optical flow maps for the input video or images.
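For image input, run() can also be pointed at a directory of frames rather than a video. A minimal sketch, assuming a local directory of sequential frames; the directory path below is hypothetical:

from grid.model.perception.optical_flow.gmflow import UniMatch

# Hypothetical local directory containing sequentially numbered frames.
frames_dir = "/data/driving_sequence/frames"

model = UniMatch(use_local=True)
flow_maps = model.run(frames_dir, mode="image")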
Example:

from grid.model.perception.optical_flow.gmflow import UniMatch

# Run inference on a sample video hosted on Hugging Face.
video_input = "https://huggingface.co/datasets/pranay-ar/test/resolve/main/all_ego.mp4"

# use_local=True runs inference on the local VM; this model is
# currently not available via GRID-Cortex.
model = UniMatch(use_local=True)
result = model.run(video_input, mode="video")
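The layout of the returned flow maps is not documented here; a common convention is one 2D displacement field per frame. Below is a minimal sketch of rendering the first flow field with the standard HSV color coding via OpenCV, assuming result behaves like a NumPy array of shape (num_frames, H, W, 2):

import numpy as np
import cv2

# Assumption: result is array-like of shape (num_frames, H, W, 2),
# where the last axis holds per-pixel (dx, dy) displacements.
flow = np.asarray(result)[0].astype(np.float32)

# Hue encodes flow direction, value encodes magnitude (the usual
# optical-flow color wheel).
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
hsv[..., 0] = (angle * 180 / np.pi / 2).astype(np.uint8)  # hue in [0, 180)
hsv[..., 1] = 255                                         # full saturation
hsv[..., 2] = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite("flow_frame0.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))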
This code is licensed under the Apache 2.0 License.