🤖 Legged Lab


📖 Overview

This repository is an extension for legged-robot reinforcement learning based on Isaac Lab. It allows you to develop in an isolated environment, outside of the core Isaac Lab repository. The RL algorithm is based on a forked RSL-RL library.

Key Features:

  • DeepMimic for humanoid robots, including Unitree G1.
  • Adversarial Motion Priors (AMP) for humanoid robots, including Unitree G1. We suggest retargeting human motion data with GMR.

Demo

  • Adversarial Motion Priors for Unitree G1:
    (demo video: rl-video-step-0.mp4)

🔥 News & Updates

  • 2026/02/09: Add Docker Compose usage guide (official Isaac Lab image workflow), including host path requirement for local rsl_rl.
  • 2025/12/16: Tested with Isaac Lab 2.3.1 and RSL-RL 3.2.0.
  • 2025/12/05: Use git lfs to store large files, including motion data and robot models.
  • 2025/11/23: Add Symmetry data augmentation in AMP training.
  • 2025/11/22: New implementation of AMP.
  • 2025/11/19: Add DeepMimic for G1.
  • 2025/10/14: Update to support rsl_rl v3.1.1. Only walking on flat terrain is supported for now.
  • 2025/08/24: Support using multi-step observations and motion data in AMP training.
  • 2025/08/22: Compatible with Isaac Lab 2.2.0.
  • 2025/08/21: Add support for retargeting human motion data by GMR.

⚙️ Installation

Prerequisites

  • Isaac Lab: Ensure you have installed Isaac Lab v2.3.1. Follow the official guide.
  • Git LFS: Required for downloading large model files.

Setup Steps

  1. Clone the Repository: Clone this repository outside your existing IsaacLab directory to maintain isolation.

    # Option 1: HTTPS
    git clone https://github.com/zitongbai/legged_lab
    
    # Option 2: SSH
    git clone [email protected]:zitongbai/legged_lab.git
    
    cd legged_lab
  2. Pull Git LFS Assets: Install and initialize git-lfs on your machine (one time), then pull the large assets (USD models and motion data) for this repository.

    git lfs install
    git lfs pull
  3. Install the Package: Use the Python interpreter associated with your Isaac Lab installation.

    python -m pip install -e source/legged_lab
  4. Install RSL-RL (Forked Version): We use a customized version of rsl_rl to support advanced features like AMP.

    # Clone outside of IsaacLab and legged_lab directories
    git clone -b feature/amp https://github.com/zitongbai/rsl_rl.git
    
    cd rsl_rl
    python -m pip install -e .
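
To sanity-check the setup, you can verify that the Git LFS assets were actually downloaded and that both packages import from your Isaac Lab Python environment. This is a minimal check, assuming the packages are importable as legged_lab and rsl_rl:

# from the legged_lab repository root; files listed with a size near zero are un-pulled LFS pointer files
git lfs ls-files --size | head
# both imports should succeed without errors
python -c "import legged_lab, rsl_rl; print('ok')"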

Docker Usage (Isaac Lab Image)

If you use the provided Docker Compose workflow, the container will mount local source code and install packages automatically at startup.

Host directory requirement for rsl_rl

By default, docker/docker-compose.yaml expects rsl_rl to be placed next to legged_lab:

.../lab_dev/
├── legged_lab/
└── rsl_rl/

If your rsl_rl is somewhere else, update RSL_RL_PATH in docker/.env.base.
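
For example, a single line in docker/.env.base points the compose file at your checkout (the path below is a placeholder):

RSL_RL_PATH=/home/user/workspace/rsl_rl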

Start container

docker compose -f docker/docker-compose.yaml up -d

At startup, the container will:

  • install mounted rsl_rl in editable mode (/workspace/rsl_rl)
  • install mounted legged_lab in editable mode (/workspace/legged_lab/source/legged_lab)

Enter container

docker compose -f docker/docker-compose.yaml exec legged-lab bash

Default working directory is /workspace/legged_lab.
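
Once inside, you can confirm that the editable installs performed at startup succeeded (assuming the packages are registered as legged_lab and rsl_rl):

python -m pip show legged_lab rsl_rl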

Stop / remove container

docker compose -f docker/docker-compose.yaml stop
docker compose -f docker/docker-compose.yaml down

Recreate container after compose changes

docker compose -f docker/docker-compose.yaml down
docker compose -f docker/docker-compose.yaml up -d

🚀 Usage

1. Prepare Motion Data

We have already provided some off-the-shelf motion data in the source/legged_lab/legged_lab/data/MotionData folder for testing.
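
To see what is included (after git lfs pull has fetched the actual files), list the folder:

ls source/legged_lab/legged_lab/data/MotionData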

If you want to add more motion data, you can do so by following the steps below.

  1. Retarget human motion data to the robot model. We recommend using GMR for this.

  2. Put the retargeted motion data in the temp/gmr_data folder.

  3. Use a helper script to convert the motion data to the required format:

    python scripts/tools/retarget/dataset_retarget.py \
        --robot g1 \
        --input_dir temp/gmr_data/ \
        --output_dir temp/lab_data/ \
        --config_file scripts/tools/retarget/config/g1_29dof.yaml \
        --loop clamp
  4. Move the converted data from temp/lab_data to source/legged_lab/legged_lab/data/MotionData (a minimal sketch of this move follows below), and set the MotionDataCfg in the config file, e.g., source/legged_lab/legged_lab/tasks/locomotion/amp/config/g1/g1_amp_env_cfg.py.

Please refer to the comments in the script for more details about the arguments, and refer to scripts/tools/retarget/gmr_to_lab.py for the data format used in this repository.
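
A minimal sketch of the move in step 4, assuming the converter wrote its output files directly into temp/lab_data:

mv temp/lab_data/* source/legged_lab/legged_lab/data/MotionData/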

2. Training & Play

🎭 DeepMimic

Train

To train the DeepMimic algorithm, you can run the following command:

python scripts/rsl_rl/train.py --task LeggedLab-Isaac-Deepmimic-G1-v0 --headless --max_iterations 50000

The --max_iterations value can be adjusted based on your needs. For more details about the arguments, run python scripts/rsl_rl/train.py -h.
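
Training progress can be monitored with TensorBoard, assuming the default rsl_rl logger, which writes event files under logs/rsl_rl:

tensorboard --logdir logs/rsl_rl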

Play

You can play the trained model in headless mode and record a video:

# replace the checkpoint path with the path to your trained model
python scripts/rsl_rl/play.py --task LeggedLab-Isaac-Deepmimic-G1-v0 --headless --num_envs 64 --video --checkpoint logs/rsl_rl/experiment_name/run_name/model_xxx.pt

πŸƒ Adversarial Motion Priors (AMP)

Train

To train the AMP algorithm, you can run the following command:

python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-G1-v0 --headless --max_iterations 50000
# Agibot X2 (this example also resumes a previous run via --resume and --load_run)
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-X2-v0 --headless --max_iterations 50000 --resume --load_run "2025-10-22_14-46-38"

If you want to train on a non-default GPU, you can pass additional arguments:

# replace `x` with the gpu id you want to use
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-G1-v0 --headless --max_iterations 50000 --device cuda:x agent.device=cuda:x
# Agibot X2
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-X2-v0 --headless --max_iterations 50000 --device cuda:x agent.device=cuda:x

For more details about the arguments, run python scripts/rsl_rl/train.py -h.

Play

You can play the trained model in headless mode and record a video:

# replace the checkpoint path with the path to your trained model
python scripts/rsl_rl/play.py --task LeggedLab-Isaac-AMP-G1-v0 --headless --num_envs 64 --video --checkpoint logs/rsl_rl/experiment_name/run_name/model_xxx.pt
# Agibot X2
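# on hybrid-graphics (PRIME) systems, these exports force rendering onto the NVIDIA GPU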
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
python scripts/rsl_rl/play.py --task LeggedLab-Isaac-AMP-X2-Play-v0 --headless --num_envs 64 --video --checkpoint logs/rsl_rl/experiment_name/run_name/model_xxx.pt

The video will be saved in the logs/rsl_rl/experiment_name/run_name/videos/play directory.

🗺️ Roadmap

  • Add more legged robots, such as Unitree H1
  • Self-contact penalty in AMP
  • Asymmetric Actor-Critic in AMP
  • Symmetric Reward
  • Sim2sim in MuJoCo
  • Add support for image observations
  • Walk in rough terrain with AMP

🙏 Acknowledgement

We would like to express our gratitude to the following open-source projects:

  • Isaac Lab - The foundation of this project.
  • RSL-RL - Reinforcement learning algorithms for legged robots.
  • AMP_for_hardware - Inspiration for AMP implementation.
  • GMR - Excellent motion retargeting library.
  • MimicKit - Reference for imitation learning.
