LeIsaac 🚀

(Demo video: leisaac.mp4)

This repository provides teleoperation functionality in IsaacLab using the SO101Leader (LeRobot), including data collection, data conversion, and subsequent policy training.

  • 🤖 We use the SO101Follower as the robot in IsaacLab and provide the corresponding teleoperation method.
  • 🔄 We offer scripts to convert data from HDF5 format to the LeRobot Dataset.
  • 🧠 We utilize simulation-collected data to fine-tune GR00T N1.5 and deploy it on real hardware.

Tip

Welcome to the Lightwheel open-source community!

Join us, contribute, and help shape the future of AI and robotics. For questions or collaboration, contact Zeyu or Yinghao.

Prerequisites & Installation 🛠️

1. Environment Setup

First, follow the IsaacLab official installation guide to install IsaacLab. We recommend using Conda for easier environment management. In summary, you only need to run the following commands.

# Create and activate environment
conda create -n leisaac python=3.10
conda activate leisaac

# Install cuda-toolkit
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit

# Install PyTorch
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu118

# Install IsaacSim
pip install --upgrade pip
pip install 'isaacsim[all,extscache]==4.5.0' --extra-index-url https://pypi.nvidia.com

# Install IsaacLab
git clone git@github.com:isaac-sim/IsaacLab.git
sudo apt install cmake build-essential

cd IsaacLab
# pin the IsaacLab version compatible with Isaac Sim 4.5
git checkout v2.1.1
./isaaclab.sh --install

Tip

If you are using an RTX 50-series GPU, we recommend using Isaac Sim 5.0 and IsaacLab v2.2.0. We have tested this setup on Isaac Sim 5.0 and it works properly. Below are the important dependency versions:

Dependency   Version
Python       3.11
CUDA         12.8
PyTorch      2.7.0
IsaacSim     5.0
IsaacLab     2.2.0
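
For reference, here is a sketch of the equivalent setup commands for these versions. The exact package labels and index URLs are assumptions based on the pattern above; please double-check them against the official Isaac Sim and PyTorch installation guides.

# Create and activate environment (Python 3.11 for Isaac Sim 5.0)
conda create -n leisaac python=3.11
conda activate leisaac

# Install cuda-toolkit 12.8 and a matching PyTorch build
conda install -c "nvidia/label/cuda-12.8.0" cuda-toolkit
pip install torch==2.7.0 torchvision --index-url https://download.pytorch.org/whl/cu128

# Install IsaacSim 5.0 and IsaacLab v2.2.0
pip install --upgrade pip
pip install 'isaacsim[all,extscache]==5.0.0' --extra-index-url https://pypi.nvidia.com
cd IsaacLab
git checkout v2.2.0
./isaaclab.sh --install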

2. Clone This Repository and Install

Clone this repository and install it as a dependency.

git clone https://github.com/LightwheelAI/leisaac.git
cd leisaac
pip install -e source/leisaac

Asset Preparation 🏠

We provide an example USD asset: a kitchen scene. Please download the related scene here and extract it into the assets directory. The directory structure should look like this:

<assets>
├── robots/
│   └── so101_follower.usd
└── scenes/
    └── kitchen_with_orange/
        ├── scene.usd
        ├── assets
        └── objects/
            ├── Orange001
            ├── Orange002
            ├── Orange003
            └── Plate

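For example, assuming the downloaded archive is named kitchen_with_orange.zip (the actual file name may differ), extraction could look like:

cd leisaac
mkdir -p assets/scenes
unzip kitchen_with_orange.zip -d assets/scenes/
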
Scene Assets Download Table

Scene Name            Description                           Download Link
Kitchen with Orange   Example kitchen scene with oranges    Download
Lightwheel Toyroom    Modern room with many toys            Download
Table with Cube       Simple table with one cube            Download
Lightwheel Bedroom    Realistic bedroom scene with cloth    Download

Tip

For more high-quality scene assets, please visit our official website or the Releases page.

Device Setup 🎮

We use the SO101Leader as the teleoperation device. Please follow the official documentation for connection and configuration.

Teleoperation Usage 🕹️

You can run teleoperation tasks with the following script:

python scripts/environments/teleoperation/teleop_se3_agent.py \
    --task=LeIsaac-SO101-PickOrange-v0 \
    --teleop_device=so101leader \
    --port=/dev/ttyACM0 \
    --num_envs=1 \
    --device=cuda \
    --enable_cameras \
    --record \
    --dataset_file=./datasets/dataset.hdf5

Parameter descriptions for teleop_se3_agent.py

  • --task: Specify the task environment name to run, e.g., LeIsaac-SO101-PickOrange-v0.

  • --seed: Specify the seed for the environment, e.g., 42.

  • --teleop_device: Specify the teleoperation device type, e.g., so101leader, bi-so101leader, keyboard.

  • --port: Specify the port of the teleoperation device, e.g., /dev/ttyACM0. Only used when teleop_device is so101leader.

  • --left_arm_port: Specify the port of the left arm, e.g., /dev/ttyACM0. Only used when teleop_device is bi-so101leader.

  • --right_arm_port: Specify the port of the right arm, e.g., /dev/ttyACM1. Only used when teleop_device is bi-so101leader.

  • --num_envs: Set the number of parallel simulation environments, usually 1 for teleoperation.

  • --device: Specify the computation device, such as cpu or cuda (GPU).

  • --enable_cameras: Enable camera sensors to collect visual data during teleoperation.

  • --record: Enable data recording; saves teleoperation data to an HDF5 file.

  • --dataset_file: Path to save the recorded dataset, e.g., ./datasets/record_data.hdf5.

  • --resume: Enable resume data recording from the existing dataset file.

  • --task_type: Specify the task type. If your dataset was recorded with the keyboard, set this to keyboard; otherwise leave it unset to keep the default value None.

  • --quality: Enable the high-quality render mode.

If the calibration file does not exist at the specified cache path, or if you launch with --recalibrate, you will be prompted to calibrate the SO101Leader. Please refer to the documentation for calibration steps.
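
For example, to force recalibration at launch, you can reuse the flags from the command above and add --recalibrate:

python scripts/environments/teleoperation/teleop_se3_agent.py \
    --task=LeIsaac-SO101-PickOrange-v0 \
    --teleop_device=so101leader \
    --port=/dev/ttyACM0 \
    --recalibrate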

After entering the IsaacLab window, press the b key on your keyboard to start teleoperation. You can then use the specified teleop_device to control the robot in the simulation. If you need to reset the environment after completing your operation, simply press the r or n key. r means resetting the environment and marking the task as failed, while n means resetting the environment and marking the task as successful.

Troubleshooting:

If you encounter permission errors such as a ConnectionError when opening the serial port, you may need to run:

sudo chmod 666 /dev/ttyACM0

# or add your user to the dialout group (takes effect after re-login)
sudo usermod -aG dialout $USER

Dataset Replay 📺

After teleoperation, you can replay the collected dataset in the simulation environment using the following script:

python scripts/environments/teleoperation/replay.py \
    --task=LeIsaac-SO101-PickOrange-v0 \
    --num_envs=1 \
    --device=cuda \
    --enable_cameras \
    --replay_mode=action \
    --dataset_file=./datasets/dataset.hdf5 \
    --select_episodes 1 2

Parameter descriptions for replay.py

  • --task: Specify the task environment name to run, e.g., LeIsaac-SO101-PickOrange-v0.

  • --num_envs: Set the number of parallel simulation environments, usually 1 for replay.

  • --device: Specify the computation device, such as cpu or cuda (GPU).

  • --enable_cameras: Enable camera sensors for visualization during replay.

  • --replay_mode: Replay mode; both action and state replay are supported.

  • --task_type: Specify the task type. If your dataset was recorded with the keyboard, set this to keyboard; otherwise leave it unset to keep the default value None.

  • --dataset_file: Path to the recorded dataset, e.g., ./datasets/record_data.hdf5.

  • --select_episodes: A list of episode indices to replay. Leave empty to replay all episodes.

Data Convention & Conversion 📊

Collected teleoperation data is stored in HDF5 format in the specified directory. We provide a script to convert HDF5 data to the LeRobot Dataset format. Only successful episodes will be converted.

Note

This script depends on the LeRobot runtime environment. We recommend using a separate Conda environment for LeRobot; see the official LeRobot repo for installation instructions.

You can modify the parameters in the script and run the following command:

python scripts/convert/isaaclab2lerobot.py
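
If you want to sanity-check a recorded HDF5 file before conversion, the short Python sketch below lists its groups and datasets. It is not part of this repository and only assumes the h5py package (pip install h5py); the exact group layout depends on the IsaacLab recorder.

# inspect_hdf5.py: print every group/dataset in a recorded file
import h5py

def summarize(path: str) -> None:
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
            else:
                # a group, e.g. one entry per recorded episode
                print(f"{name}/ attrs={dict(obj.attrs)}")
        f.visititems(visit)

if __name__ == "__main__":
    summarize("./datasets/dataset.hdf5")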

Policy Training 🏋️‍♂️

GR00T N1.5 provides a fine-tuning workflow based on the LeRobot Dataset. You can refer to nvidia/gr00t-n1.5-so101-tuning to fine-tune it with your collected LeRobot data. We take the pick-orange task as an example.

  • First, collect a pick-orange dataset in simulation.
  • Then, fine-tune GR00T N1.5 using this data.
  • Finally, deploy the trained policy on real hardware.

Policy Inference 🧩

We also provide interfaces for running policy inference in simulation. You can start inference with the following script (taking gr00tn1.5 as an example):

python scripts/evaluation/policy_inference.py \
    --task=LeIsaac-SO101-PickOrange-v0 \
    --eval_rounds=10 \
    --policy_type=gr00tn1.5 \
    --policy_host=localhost \
    --policy_port=5555 \
    --policy_timeout_ms=5000 \
    --policy_action_horizon=16 \
    --policy_language_instruction="Pick up the orange and place it on the plate" \
    --device=cuda \
    --enable_cameras

Parameter descriptions for policy_inference.py

  • --task: Name of the task environment to run for inference (e.g., LeIsaac-SO101-PickOrange-v0).

  • --seed: Seed of the environment (default: current time).

  • --episode_length_s: Episode length in seconds (default: 60).

  • --eval_rounds: Number of evaluation rounds. 0 means no timeout termination is added; the policy will run until success or manual reset (default: 0).

  • --policy_type: Type of policy to use (default: gr00tn1.5).

    • Currently supported: gr00tn1.5, lerobot-<model_type>.
  • --policy_host: Host address of the policy server (default: localhost).

  • --policy_port: Port of the policy server (default: 5555).

  • --policy_timeout_ms: Timeout for the policy server in milliseconds (default: 5000).

  • --policy_action_horizon: Number of actions to predict per inference (default: 16).

  • --policy_language_instruction: Language instruction for the policy (e.g., task description in natural language).

  • --policy_checkpoint_path: Path to the policy checkpoint (if required).

  • --device: Computation device, such as cpu or cuda.

You may also use additional arguments supported by IsaacLab's AppLauncher (see their documentation for details).

Depending on your use case, you may need to install additional dependencies to enable inference:

pip install -e "source/leisaac[gr00t]"
pip install -e "source/leisaac[lerobot-async]"
pip install -e "source/leisaac[openpi]"

Important

For service-based policies, you must start the corresponding service before running inference. For example, with GR00T, you need to launch the GR00T N1.5 inference server first. You can refer to the GR00T evaluation documentation for detailed instructions.

Below are the currently supported policy inference methods:

finetuned gr00t n1.5

For usage instructions, please refer to the examples provided above.

lerobot official policy

We utilize lerobot's async inference capabilities for policy execution. For detailed information, please refer to the official documentation. Prior to execution, ensure that the policy server is running.

python scripts/evaluation/policy_inference.py \
    --task=LeIsaac-SO101-PickOrange-v0 \
    --policy_type=lerobot-smolvla \
    --policy_host=localhost \
    --policy_port=8080 \
    --policy_timeout_ms=5000 \
    --policy_language_instruction='Pick the orange to the plate' \
    --policy_checkpoint_path=outputs/smolvla/leisaac-pick-orange/checkpoints/last/pretrained_model \
    --policy_action_horizon=50 \
    --device=cuda \
    --enable_cameras

finetuned openpi

We utilize openpi's remote inference capabilities for policy execution. For detailed information, please refer to the official documentation. Prior to execution, ensure that the policy server is running.

python scripts/evaluation/policy_inference.py \
    --task=LeIsaac-SO101-PickOrange-v0 \
    --policy_type=openpi \
    --policy_host=localhost \
    --policy_port=8000 \
    --policy_timeout_ms=5000 \
    --policy_language_instruction='Pick the orange to the plate' \
    --device=cuda \
    --enable_cameras

Extra Features ✨

We also provide some additional features. You can refer to the following instructions to try them out.

DigitalTwin Env: Make Sim2Real Simple

Inspired by the green-screening functionality from SIMPLER and ManiSkill, we have implemented DigitalTwin Env. This feature allows you to replace the background in simulation environments with real background images while preserving foreground elements such as robotic arms and interactive objects. This approach significantly reduces the gap between simulation and reality, enabling better sim2real transfer.

To use this feature, simply create a task configuration class that inherits from ManagerBasedRLDigitalTwinEnvCfg and launch it through the corresponding environment. In the configuration class, you can specify relevant parameters including overlay_mode, background images path, and foreground environment components to preserve.

For usage examples, please refer to the sample task: LiftCubeDigitalTwinEnvCfg.
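
For orientation, below is a minimal sketch of such a configuration class. The import path and every field name except overlay_mode are assumptions made for illustration; check LiftCubeDigitalTwinEnvCfg in this repository for the actual fields.

# Illustrative sketch only: import path and field names (other than overlay_mode) are hypothetical.
from isaaclab.utils import configclass

from leisaac.envs import ManagerBasedRLDigitalTwinEnvCfg  # assumed import path


@configclass
class MyPickTaskDigitalTwinEnvCfg(ManagerBasedRLDigitalTwinEnvCfg):
    """Overlay real background images behind the simulated foreground."""

    # compositing mode for the real background image (parameter named in this README)
    overlay_mode = "background"
    # directory of real camera images to use as the background (assumed field name)
    background_image_path = "./assets/backgrounds/kitchen"
    # simulation prims to keep rendered in the foreground, e.g. robot arm and objects (assumed field name)
    foreground_prims = ["/World/Robot", "/World/Objects"]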

MimicGen Env: Generate Data From Demonstrations

We have integrated IsaacLab MimicGen, a powerful feature that automatically generates additional demonstrations from expert demonstrations.

To use this functionality, you first need to record some demonstrations. Recording scripts can be referenced from the instructions above. (Below, we use MimicGen for the LeIsaac-SO101-LiftCube-v0 task as an example.)

[NOTE] Pay attention to the input_file and output_file parameters in the following scripts. Typically, the output_file from the previous script becomes the input_file for the next script.

Since MimicGen requires trajectory generalization based on end-effector pose and object pose, we first convert joint-position-based action data to IK-based action data. The conversion process is as follows, where input_file specifies the collected demonstration data:

python scripts/mimic/eef_action_process.py \
    --input_file ./datasets/mimic-lift-cube-example.hdf5 \
    --output_file ./datasets/processed_mimic-lift-cube-example.hdf5 \
    --to_ik --headless

Next, we perform sub-task annotation based on the converted action data. Annotation can be done in two ways: automatic and manual. If you want to use automatic annotation, please add the --auto startup option.

python scripts/mimic/annotate_demos.py \
    --device cuda \
    --task LeIsaac-SO101-LiftCube-Mimic-v0 \
    --input_file ./datasets/processed_mimic-lift-cube-example.hdf5 \
    --output_file ./datasets/annotated_mimic-lift-cube-example.hdf5 \
    --enable_cameras

After annotation is complete, we can proceed with data generation. The generation process is as follows:

python scripts/mimic/generate_dataset.py \
    --device cuda \
    --num_envs 1 \
    --generation_num_trials 10 \
    --input_file ./datasets/annotated_mimic-lift-cube-example.hdf5 \
    --output_file ./datasets/generated_mimic-lift-cube-example.hdf5 \
    --enable_cameras

After obtaining the generated data, we also provide a conversion script to transform it from IK-based action data back to joint-position-based action data, as follows:

python scripts/mimic/eef_action_process.py \
    --input_file ./datasets/generated_mimic-lift-cube-example.hdf5 \
    --output_file ./datasets/final_generated_mimic-lift-cube-example.hdf5 \
    --to_joint --headless

Finally, you can use replay to view the effects of the generated data (see the example invocation below). It's worth noting that due to the inherent randomness in IsaacLab simulation, the replay performance may vary.

[NOTE] Depending on the device used to collect the data, you need to specify the corresponding task type with --task_type. For example, if your demonstrations were collected using the keyboard, add --task_type=keyboard when running annotate_demos and generate_dataset.

The --task_type option does not need to be provided when replaying the generated results (e.g., final_generated_mimic-lift-cube-example.hdf5).
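
For reference, a replay invocation for the generated file might look like the following; the task name here is an assumption and should match the (non-Mimic) task your demonstrations were collected with.

python scripts/environments/teleoperation/replay.py \
    --task=LeIsaac-SO101-LiftCube-v0 \
    --num_envs=1 \
    --device=cuda \
    --enable_cameras \
    --replay_mode=action \
    --dataset_file=./datasets/final_generated_mimic-lift-cube-example.hdf5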

Acknowledgements 🙏

We gratefully acknowledge IsaacLab and LeRobot for their excellent work, from which we have borrowed some code.

Join Our Team! 💼

We're always looking for talented individuals passionate about AI and robotics! If you're interested in:

  • 🤖 Robotics Engineering: Working with cutting-edge robotic systems and teleoperation
  • 🧠 AI/ML Research: Developing next-generation AI models for robotics
  • 💻 Software Engineering: Building robust, scalable robotics software
  • 🔬 Research & Development: Pushing the boundaries of what's possible in robotics

Join us at Lightwheel AI! We offer:

  • Competitive compensation and benefits
  • Work with state-of-the-art robotics technology
  • Collaborative, innovative environment
  • Opportunity to shape the future of AI-powered robotics

Apply Now → | Contact Now → | Learn More About Us →


Let's build the future of robotics together! 🤝

