The PointPillars class implements a wrapper around the PointPillars model, which detects 3D objects in point cloud data and can optionally project the detections onto images using a calibration matrix.
The PointPillars model requires a session with LiDAR data collection capabilities. You can find the sensor configuration options when launching a GRID session.
use_local (bool): If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to True.
Expects a list of point clouds. Each point cloud is a flat list of 3D coordinates ([x0, y0, z0, x1, y1, z1, ...]), so len(point_cloud) = 3 * NumberOfPoints. Projection onto images additionally requires a calibration matrix of shape (3, 4).
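A flat point list in this layout can be built from an N×3 array of points; the sketch below is illustrative, with the `points` array being example data rather than output of any GRID API:

```python
import numpy as np

# Example (N, 3) array of x, y, z coordinates (hypothetical data).
points = np.array([[1.0, 2.0, 0.5],
                   [3.0, 4.0, 0.7]])

# Flatten row-major into [x0, y0, z0, x1, y1, z1, ...].
point_cloud = points.flatten().tolist()

# The flat list has 3 * NumberOfPoints entries.
assert len(point_cloud) == 3 * len(points)
print(point_cloud)  # [1.0, 2.0, 0.5, 3.0, 4.0, 0.7]
```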
from grid.model.perception.detection.pointpillars import PointPillars

car = AirGenCar()

# Capture a point cloud from the AirGen simulator's LiDAR
# and run model inference on it.
point_cloud = car.getLidarData().point_cloud

model = PointPillars(use_local=False)
result = model.run(point_cloud=point_cloud)
print(result.shape)
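The optional projection step uses the (3, 4) calibration matrix to map 3D points into image coordinates. A minimal sketch of that projection, using a made-up calibration matrix `P` (the values below are illustrative, not from any real sensor):

```python
import numpy as np

# Hypothetical (3, 4) calibration (projection) matrix.
P = np.array([[700.0,   0.0, 600.0, 0.0],
              [  0.0, 700.0, 180.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

# A 3D point in homogeneous coordinates: [x, y, z, 1].
point = np.array([2.0, 1.0, 10.0, 1.0])

# Project into the image plane and divide by the homogeneous scale.
u, v, w = P @ point
pixel = (u / w, v / w)
print(pixel)  # (740.0, 250.0)
```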
This code is licensed under the MIT License.