GRID supports the training and evaluation of reinforcement learning agents in Isaac Sim for the supported quadrupeds, bipeds, arms, and humanoid robots.

Training

GRID supports training reinforcement learning agents with the RSL-RL framework. To configure a training run, modify the agent_cfg.yaml file as follows:

- rsl_rl: 
    train: true 
    video: false
    video_interval: 2000
    video_length: 200
    resume: false
    seed: 0
    max_iterations: 1000
    run_name: go2_rough_train_rlagent
    experiment_name: go2_rough
    load_run: .*
    load_checkpoint: model_.*.pth
    max_episode_length: 100
    logger: tensorboard
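Before launching a run, it can be useful to sanity-check the agent configuration. The sketch below parses the config shown above with PyYAML and validates a few fields; the `load_rsl_rl_cfg` helper is hypothetical, not part of GRID, and the embedded YAML is an abridged copy of the example.

```python
import yaml  # PyYAML

# Abridged copy of the agent_cfg.yaml example above.
AGENT_CFG = """\
- rsl_rl:
    train: true
    max_iterations: 1000
    run_name: go2_rough_train_rlagent
    logger: tensorboard
"""

def load_rsl_rl_cfg(text):
    """Parse the agent config text and return the rsl_rl settings."""
    cfg = yaml.safe_load(text)
    # The file is a one-element list whose item maps "rsl_rl" to its settings.
    rsl_rl = cfg[0]["rsl_rl"]
    if rsl_rl["max_iterations"] <= 0:
        raise ValueError("max_iterations must be positive")
    return rsl_rl
```

For example, `load_rsl_rl_cfg(AGENT_CFG)["run_name"]` returns `"go2_rough_train_rlagent"`.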

The training environment name specifying the task, along with the number of parallel environments, also needs to be specified in custom_cfg.yaml:

num_envs: 100
task: Isaac-Velocity-Rough-Unitree-Go2-v0

To run the RL training headless, add the following to custom_cfg.yaml:

headless: true
livestream: 0
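The two custom_cfg.yaml snippets above can be checked the same way. This is a minimal sketch, assuming the flat key layout shown above; the `parse_custom_cfg` helper is hypothetical and only illustrates reading the file's values.

```python
import yaml  # PyYAML

# Combined copy of the custom_cfg.yaml snippets above.
CUSTOM_CFG = """\
num_envs: 100
task: Isaac-Velocity-Rough-Unitree-Go2-v0
headless: true
livestream: 0
"""

def parse_custom_cfg(text):
    """Parse the custom config text and validate its basic fields."""
    cfg = yaml.safe_load(text)
    if cfg["num_envs"] <= 0:
        raise ValueError("num_envs must be positive")
    if not cfg["task"].startswith("Isaac-"):
        raise ValueError(f"unexpected task name: {cfg['task']}")
    return cfg
```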

The video parameter in agent_cfg.yaml enables saving rollout videos of the policy during training; video_interval controls how often (in steps) a recording is triggered, and video_length sets how many steps each recording captures.

Checkpoints are saved in the same directory as the cfg files, tagged with the corresponding date and timestamp.