The grid-cortex-client Python package provides a unified interface to all AI models hosted on GRID Cortex.

Installation

Install the GRID Cortex client package using pip:
pip install grid-cortex-client
Python 3.10 or newer is recommended.

Authentication & Endpoint

Set up your API key (and endpoint, if needed):
  1. During onboarding, General Robotics issues you a personal Cortex API key.
  2. Export it so the client can pick it up automatically:
    export GRID_CORTEX_API_KEY="<YOUR_KEY>"
    
  3. If you run Cortex on-prem or on a managed cloud deployment, point the client at your instance:
    export GRID_CORTEX_BASE_URL="https://<custom_IP>/cortex"
    
  4. You can also pass the key directly when constructing the client:
    from grid_cortex_client import CortexClient
    client = CortexClient(api_key="<YOUR_KEY>")
    
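Conceptually, the precedence above means an explicitly passed key wins over the environment variable. A minimal sketch of that resolution logic (the helper name is illustrative, not part of the package):

```python
import os

def resolve_api_key(explicit_key=None):
    """Return the API key, preferring an explicit argument over the env var."""
    key = explicit_key or os.environ.get("GRID_CORTEX_API_KEY")
    if key is None:
        raise RuntimeError(
            "No API key found: pass api_key=... or export GRID_CORTEX_API_KEY"
        )
    return key
```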

Quick Start (2 lines)

Get started with just two lines of code:
from grid_cortex_client import CortexClient
result = CortexClient().run(model_id="zoedepth", image_input="demo.jpg")
The result type depends on the model — see the individual model pages for details.

Unified API

Every model is called through the same function:
CortexClient.run(model_id: str, **kwargs) -> Any
  • model_id — the model identifier (e.g. "zoedepth", "gsam2", "pi05")
  • **kwargs — model-specific inputs (image_input, prompt, left_image, etc.)
from grid_cortex_client import CortexClient

client = CortexClient()
depth = client.run(model_id="zoedepth", image_input="scene.jpg")
mask  = client.run(model_id="gsam2", image_input="scene.jpg", prompt="a robot arm")
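One way to picture a single run(model_id, **kwargs) entry point is a registry mapping model IDs to handlers, each accepting its own model-specific keyword arguments. This is a simplified sketch of the pattern, not the package's actual internals; the handler bodies are placeholders:

```python
# Illustrative registry-based dispatch; handlers stand in for real model calls.
_HANDLERS = {
    "zoedepth": lambda image_input: f"depth({image_input})",
    "gsam2": lambda image_input, prompt: f"mask({image_input}, {prompt})",
}

def run(model_id, **kwargs):
    """Look up the model's handler and forward the model-specific kwargs."""
    try:
        handler = _HANDLERS[model_id]
    except KeyError:
        raise ValueError(f"Unknown model_id: {model_id!r}") from None
    return handler(**kwargs)
```

Because each handler declares its own parameters, passing the wrong kwargs for a model fails immediately with a normal TypeError rather than being silently ignored.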

Discovery

List all available models and get per-model documentation at runtime:
from grid_cortex_client import CortexClient

client = CortexClient()

# List all deployed model IDs
print(client.available_models())
# ['da3metric', 'zoedepth', 'owlv2', 'gsam2', 'sam3', ...]

# Get detailed documentation for a specific model
print(client.help("sam3"))
You can also use the ModelType enum for IDE autocompletion:
from grid_cortex_client import CortexClient, ModelType

client = CortexClient()
depth = client.run(ModelType.ZOEDEPTH, image_input="demo.jpg")
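The enum pattern behind this is a string-valued Enum, so a member like ModelType.ZOEDEPTH compares equal to the raw ID "zoedepth" and either form can be passed to run(). A sketch of how such an enum can be defined (the member list here is illustrative):

```python
from enum import Enum

class ModelType(str, Enum):
    """String-valued enum: members compare equal to their raw model IDs."""
    ZOEDEPTH = "zoedepth"
    GSAM2 = "gsam2"
    SAM3 = "sam3"
```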

Async & WebSocket Clients

AsyncCortexClient

For asyncio-based applications, use AsyncCortexClient — same API as CortexClient but with await:
from grid_cortex_client import AsyncCortexClient

async def main():
    client = AsyncCortexClient()
    depth = await client.run(model_id="zoedepth", image_input="demo.jpg")
    print(depth.shape)
    await client.close()
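An async facade over a blocking client is commonly built by offloading each call to a worker thread with asyncio.to_thread. This is a generic sketch of that pattern under stated assumptions, not the actual AsyncCortexClient implementation; both class names here are stand-ins:

```python
import asyncio

class BlockingClient:
    """Stand-in for a synchronous client whose run() blocks on network I/O."""
    def run(self, model_id, **kwargs):
        return f"result:{model_id}"

class AsyncWrapper:
    """Illustrative async facade: runs blocking calls in a thread pool."""
    def __init__(self):
        self._client = BlockingClient()

    async def run(self, model_id, **kwargs):
        # Offload to a thread so the event loop stays responsive.
        return await asyncio.to_thread(self._client.run, model_id, **kwargs)
```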

CortexHubClient (WebSocket)

For high-throughput pipelines, CortexHubClient provides a persistent WebSocket connection. It supports two patterns:

Request-response (simple)
from grid_cortex_client import CortexHubClient, ModelType

async with CortexHubClient() as hub:
    depth = await hub.run(ModelType.ZOEDEPTH, image_input=img)
Pub/Sub (concurrent, non-blocking)
from grid_cortex_client import CortexHubClient, ModelType

async with CortexHubClient() as hub:
    await hub.publish(ModelType.ZOEDEPTH, request_id="depth1", image_input=img)
    await hub.publish(ModelType.GSAM2, request_id="mask1", image_input=img, prompt="cat")

    async for result in hub.subscribe():
        print(f"{result.request_id}: ok={result.ok}")
        process(result.data)
Each HubResult contains request_id, model, data, ok, and optional error.
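The pub/sub pattern above boils down to tagging each request with a request_id and matching results as they arrive, in completion order rather than publish order. A self-contained asyncio sketch of that correlation (the names mirror the doc but the internals are illustrative):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class HubResult:
    """Minimal stand-in for a hub result: correlation ID plus payload."""
    request_id: str
    data: object
    ok: bool = True

async def fake_model(queue, request_id, delay, data):
    """Simulate a model that finishes after `delay` and pushes its result."""
    await asyncio.sleep(delay)
    await queue.put(HubResult(request_id, data))

async def main():
    queue = asyncio.Queue()
    # "Publish" two requests; the one published first is slower to finish.
    asyncio.ensure_future(fake_model(queue, "depth1", 0.02, "depth-map"))
    asyncio.ensure_future(fake_model(queue, "mask1", 0.01, "mask"))
    results = {}
    for _ in range(2):
        r = await queue.get()  # results arrive in completion order
        results[r.request_id] = r.data
    return results

# asyncio.run(main()) collects both results keyed by request_id.
```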

Troubleshooting

401 Unauthorized – Check that your shell actually has GRID_CORTEX_API_KEY exported and that the key is correct.

Timeout / connection errors – If you are on-prem or on a managed cloud deployment, confirm that GRID_CORTEX_BASE_URL points to your instance. You can also raise the default 30 s timeout:
client = CortexClient(timeout=60)