The PointPillars class implements a wrapper for the PointPillars model, which detects objects in point cloud data and can optionally project the detections onto images using a calibration matrix.
The PointPillars model requires a session with LiDAR data collection capabilities. You can find the sensor configuration options when launching a GRID session.
class PointPillars()
use_local
boolean
default:true
If True, inference runs on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to True.
def run()
pc
np.ndarray
required
Expects point cloud data as a flat array of 3D coordinates ([x0, y0, z0, x1, y1, z1, ...]), so len(pc) = 3 * NumberOfPoints.
calibration
np.ndarray
required
The calibration matrix of shape (3, 4).
Returns
np.ndarray
The predicted output.
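The expected `pc` layout and the role of the (3, 4) calibration matrix can be sketched with NumPy. The sample points and the identity-like projection matrix below are purely illustrative assumptions, not values from the GRID API:

```python
import numpy as np

# Two illustrative 3D points as an (N, 3) array.
points = np.array([[0.0, 0.0, 1.0],
                   [1.0, 2.0, 3.0]])

# Flatten (N, 3) -> [x0, y0, z0, x1, y1, z1, ...]; len(pc) == 3 * N.
pc = points.reshape(-1)

# A (3, 4) calibration matrix maps homogeneous 3D points to the image
# plane. Here we use an identity-like matrix just to show the shapes.
calibration = np.hstack([np.eye(3), np.zeros((3, 1))])  # shape (3, 4)

# Append 1 to each point to get homogeneous coordinates, then project.
homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
projected = homogeneous @ calibration.T                           # (N, 3)

# Pixel coordinates follow from dividing by the last (depth) component.
u = projected[:, 0] / projected[:, 2]
v = projected[:, 1] / projected[:, 2]
```

This is only a shape-level sketch; in practice the calibration matrix comes from your session's camera/LiDAR calibration.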
from grid.model.perception.detection.pointpillars import PointPillars
import numpy as np

car = AirGenCar()  # AirGenCar is provided by the GRID environment

# We will be capturing LiDAR point cloud data from the AirGen simulator
# and run model inference on it.

# The LiDAR capture call below is illustrative; use the LiDAR API
# exposed by your session's sensor configuration.
pc = np.array(car.getLidarData().point_cloud)  # flat [x0, y0, z0, x1, ...] array

model = PointPillars(use_local = False)
result = model.run(pc=pc, calibration=calibration)  # calibration: (3, 4) matrix
print(result.shape)
This code is licensed under the MIT License.