Part 1: Using AI Models with Cars
The notebook for this part can be found here.
Initialize the Car and Import Dependencies
The AirGenCar class provides an interface for controlling the vehicle, while the data_collector decorator will help us capture sensor data during movement. WeatherParameter and Vector3r let us customize the environment and represent 3D positions, respectively.
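A minimal setup sketch follows. The import paths and the client attribute are assumptions based on typical AirGen/GRID notebook layouts; adjust them to match your SDK version.

```python
# Assumed import paths; check your AirGen/GRID installation.
from grid.robot.wheeled.airgen_car import AirGenCar   # assumed path
from airgen.utils.collect import data_collector       # assumed path
from airgen import WeatherParameter, Vector3r

airgen_car_0 = AirGenCar()       # connect to the simulated car
client = airgen_car_0.client     # underlying AirGen client (assumed attribute)
client.enableApiControl(True)    # hand vehicle control to this script
```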
Import AI Models and Configure the Environment
Here we prepare our AI models and customize the environment:
GroundingDINO is a text-guided object detection model that combines the transformer-based DINO detector with grounded pre-training. It can identify objects from free-form text prompts, making it versatile for autonomous driving perception.
GSAM2 (Grounded Segment Anything Model 2) creates pixel-level masks for objects described by text prompts. We’ll use it to identify road surfaces, helping our vehicle understand drivable areas.
We also configure challenging environmental conditions (fog and sunset lighting) to test how our perception systems perform under difficult visual circumstances.
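As a rough sketch, the model setup and weather configuration might look like the following. The model wrapper import paths and constructors are assumptions; the weather and time-of-day calls follow the AirSim-style API that AirGen inherits.

```python
# Assumed GRID model wrappers; adjust paths to your SDK version.
from grid.model.perception.detection.gdino import GroundingDINO  # assumed path
from grid.model.perception.segmentation.gsam2 import GSAM2       # assumed path

gdino = GroundingDINO()
gsam2 = GSAM2()

# Degrade visibility: 30% fog plus a low sun angle for sunset lighting.
client.simEnableWeather(True)
client.simSetWeatherParameter(WeatherParameter.Fog, 0.3)
client.simSetTimeOfDay(True, start_datetime="2024-06-01 18:30:00",
                       celestial_clock_speed=0)  # freeze the sun at sunset
```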
Generate a Path for the Car to Follow
The simPlanPathToRandomFreePoint function generates a collision-free path to a random destination within 50 meters, creating waypoints that form a smooth trajectory and visualizing it in the simulation. We then convert these waypoints to the Vector3r format required by AirGen's movement functions, ready for the car to follow.
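In code, the request and conversion might look like this sketch; the exact signature and the shape of the returned waypoints are assumptions based on the description above.

```python
# Plan a collision-free path to a random free point within a 50 m radius.
trajectory = client.simPlanPathToRandomFreePoint(50.0, smooth_path=True,
                                                 draw_path=True)

# Convert waypoints (assumed x_val/y_val/z_val fields) into Vector3r objects.
points = [Vector3r(p["x_val"], p["y_val"], p["z_val"]) for p in trajectory]
```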
Define a Function to Run the AI Models
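The notebook's implementation isn't reproduced here, so the sketch below shows one plausible shape for runAIModels: capture a front-camera frame, run text-guided detection for traffic participants, and segment the road. The image-request pattern is AirSim-style; the model method names and prompts are assumptions.

```python
import numpy as np
from airgen import ImageRequest, ImageType

def runAIModels(client) -> dict:
    """Run detection and road segmentation on one front-camera frame (sketch)."""
    # Request an uncompressed RGB frame (AirSim-style image API).
    resp = client.simGetImages(
        [ImageRequest("front_center", ImageType.Scene, False, False)]
    )[0]
    rgb = np.frombuffer(resp.image_data_uint8, dtype=np.uint8)
    rgb = rgb.reshape(resp.height, resp.width, 3)

    # Method names below are assumptions; consult your SDK docs.
    detections = gdino.detect(rgb, text_prompt="car. person.")  # assumed API
    road_mask = gsam2.segment(rgb, text_prompt="road")          # assumed API
    return {"detections": detections, "road_mask": road_mask}
```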
Collect Data While the Car Follows the Path
The @data_collector decorator transforms our movement function into a data-collection pipeline that calls runAIModels every 0.1 seconds. The car follows the generated path at 5 m/s while continuously collecting perception data, demonstrating how movement control and perception can be integrated in an autonomous system.
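Wiring it together might look like the sketch below, assuming data_collector accepts the perception callback and a sampling interval. Note that path following on a car client is an AirGen addition; in stock AirSim the equivalent call is multirotor-only.

```python
# Assumed decorator signature: data_collector(callback, interval) samples the
# callback on a timer while the wrapped movement function runs.
@data_collector(runAIModels, interval=0.1)
def move_task(client, path):
    # Follow the planned waypoints at 5 m/s.
    client.moveOnPathAsync(path, velocity=5.0).join()

move_task(client, points)  # perception samples accumulate every 0.1 s
```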
Part 2: Using AI Models with Drones
The notebook for this part can be found here.
Initialize the Drone
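A setup sketch mirroring Part 1, with assumed import paths and client attribute:

```python
from grid.robot.aerial.airgen_drone import AirGenDrone  # assumed path

airgen_drone_0 = AirGenDrone()
client = airgen_drone_0.client   # underlying AirGen client (assumed attribute)
client.enableApiControl(True)    # allow scripted control
client.armDisarm(True)           # spin up the motors
```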
Take Off and Position the Drone
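Using the AirSim-style multirotor API that AirGen inherits, takeoff and positioning might look like this sketch (the 30 m survey altitude is an illustrative choice):

```python
# Take off, then climb to a vantage point (NED frame: negative z is up).
client.takeoffAsync().join()
client.moveToPositionAsync(0, 0, -30, velocity=5).join()
```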
Import AI Models for Object Detection
GroundingDINO suits an aerial fire search thanks to several strengths (a loading sketch follows this list):
- Text-guided detection capabilities that adapt to different target objects
- Zero-shot capabilities that work even for objects not seen during training
- Transformer architecture that maintains accuracy with small objects in wide views
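A loading sketch, reusing the assumed wrapper path from Part 1:

```python
from grid.model.perception.detection.gdino import GroundingDINO  # assumed path

gdino = GroundingDINO()
```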
Define a Function for Fire Detection
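One plausible shape for the detector, shown as a sketch: capture a frame, query GroundingDINO with a "fire" prompt, and report whether anything was found. The capture pattern is AirSim-style; the detect method name is an assumption.

```python
import numpy as np
from airgen import ImageRequest, ImageType

def detect_fire(client) -> bool:
    """Return True if GroundingDINO finds fire in the current view (sketch)."""
    resp = client.simGetImages(
        [ImageRequest("front_center", ImageType.Scene, False, False)]
    )[0]
    rgb = np.frombuffer(resp.image_data_uint8, dtype=np.uint8)
    rgb = rgb.reshape(resp.height, resp.width, 3)

    # Zero-shot, text-guided query; method name is an assumption.
    detections = gdino.detect(rgb, text_prompt="fire")
    return len(detections) > 0
```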
Search for Fire by Rotating the Drone
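A simple search pattern, as a sketch: sweep a full circle in fixed yaw increments and stop when the detector fires. rotateToYawAsync is the AirSim-style yaw control; the 45-degree step is an illustrative choice.

```python
# Scan the surroundings in 45-degree increments.
for yaw in range(0, 360, 45):
    client.rotateToYawAsync(yaw).join()
    if detect_fire(client):
        print(f"Fire detected at yaw {yaw} degrees")
        break
```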
Add Smoke Segmentation
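A segmentation sketch in the same pattern, assuming a GSAM2 wrapper and segment method analogous to Part 1:

```python
from grid.model.perception.segmentation.gsam2 import GSAM2  # assumed path

gsam2 = GSAM2()

def segment_smoke(client):
    """Return a pixel-level smoke mask for the current view (sketch)."""
    resp = client.simGetImages(
        [ImageRequest("front_center", ImageType.Scene, False, False)]
    )[0]
    rgb = np.frombuffer(resp.image_data_uint8, dtype=np.uint8)
    rgb = rgb.reshape(resp.height, resp.width, 3)
    return gsam2.segment(rgb, text_prompt="smoke")  # assumed API
```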
Evaluate Vision Models Under Various Weather Conditions
This test evaluates how our perception models perform as visibility deteriorates. The code incrementally increases rain and fog intensity from 0% to 90%, running both detection and segmentation at each step. Sweeping conditions systematically shows where perception degrades, helping developers establish confidence thresholds and design adaptive algorithms for real-world deployment.
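A sweep sketch, assuming the detect_fire and segment_smoke helpers above and a boolean smoke mask:

```python
from airgen import WeatherParameter  # weather controls, as in Part 1

client.simEnableWeather(True)

# Increase rain and fog together from 0% to 90% in 10% steps,
# re-running perception at each level.
for level in [i / 10 for i in range(10)]:
    client.simSetWeatherParameter(WeatherParameter.Rain, level)
    client.simSetWeatherParameter(WeatherParameter.Fog, level)

    fire_found = detect_fire(client)
    smoke_mask = segment_smoke(client)
    print(f"weather={level:.0%}  fire={fire_found}  "
          f"smoke_pixels={int(smoke_mask.sum())}")
```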