We welcome contributions from all RoboTUM Humanoid Project members!
Before you start working on a new feature, please read our Contribution Guidelines.
The following section provides a detailed guide on how to quickly get started with the project.
Before installing Isaac Sim and Isaac Lab, please make sure your system meets the following requirements.
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 8-core (Intel i7 / AMD Ryzen 7) | 16-core or higher (Intel i9 / Ryzen 9) |
| RAM | 16 GB | 64 GB or more |
| GPU | NVIDIA RTX 3060 (12 GB VRAM) | NVIDIA RTX 4090 / 5080 (16–24 GB+ VRAM) |
| Storage | 50 GB free space | 100 GB+ SSD/NVMe |
| OS | Ubuntu 24.04 LTS | Ubuntu 24.04 LTS |
⚠️ Notes:
- Isaac Sim officially supports Linux + NVIDIA GPU + the proprietary NVIDIA driver.
- Running large-scale RL (`--num_envs > 2048`) requires a high-end GPU and sufficient RAM.
- If you encounter Exit Code 137 (SIGKILL), it usually indicates insufficient memory.
- Windows and AMD GPUs are not recommended for full Isaac Lab compatibility.
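If a run dies with Exit Code 137, you can confirm memory pressure with standard tools. A minimal sketch (`free` is standard on Ubuntu; reading the kernel log may require elevated privileges, and the exact OOM message wording varies by kernel):

```shell
# Show total/used/available RAM before launching a large --num_envs run
free -h

# After an Exit Code 137, look for OOM-killer entries in the kernel log
# (may require sudo on some systems)
dmesg 2>/dev/null | grep -iE "out of memory|oom" | tail -n 5 || true
```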
| Software | Version | Check Command |
|---|---|---|
| Python | 3.10–3.11 | python --version |
| CUDA Toolkit | ≥ 12.0 | nvcc --version |
| NVIDIA Driver | ≥ 535.xx | nvidia-smi |
| Conda / Mamba | Latest | conda --version |
| Git | ≥ 2.30 | git --version |
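To run all the checks from the table in one go, a small shell loop works. Tool names are taken from the table above; `nvcc` is only present once the CUDA Toolkit is installed, so missing tools are flagged rather than aborting:

```shell
# Print the first line of each tool's version output, or flag it as missing
for tool in python3 nvcc nvidia-smi conda git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>&1 | head -n 1)"
  else
    echo "$tool: NOT FOUND"
  fi
done
```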
| Component | Specification |
|---|---|
| CPU | AMD Ryzen (16 cores) |
| RAM | 64 GB |
| GPU | NVIDIA GeForce RTX 5080 (16 GB VRAM) |
| CUDA Toolkit | 12.0 |
| NVIDIA Driver | 570.195.03 |
| Python | 3.11.13 |
| OS | Ubuntu 24.04 LTS |
| Status | ✅ Fully compatible and tested for both Isaac Sim and Isaac Lab RL training |
You can install Isaac Sim and Isaac Lab by following the official installation guide:
https://isaac-sim.github.io/IsaacLab/main/source/setup/installation/pip_installation.html
The guide also includes instructions on how to create a conda virtual environment.
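As a sketch of the conda step (the environment name `env_isaaclab` and Python 3.11 are our choices here, not requirements of the guide; follow the official instructions for the exact packages):

```shell
# Guarded so the snippet is a no-op on machines without conda
if command -v conda >/dev/null 2>&1; then
  conda create -y -n env_isaaclab python=3.11
  # activate it in your interactive shell afterwards:
  #   conda activate env_isaaclab
  # then upgrade pip inside the environment before installing Isaac Lab
else
  echo "conda not found; install Miniconda or Mambaforge first"
fi
```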
Step 1: Clone the IsaacNext workspace
```shell
# You can clone it anywhere on your machine
git clone https://github.com/The-RoboTUM/IsaacNext.git
```

This repository is your main working environment.
Step 2: Clone the URDF source repository
```shell
# You can clone it anywhere on your machine
git clone https://github.com/The-RoboTUM/urdfheim.git
```

Inside that repo, you will find Forrest's URDF file at:
`urdfheim/complex/Forrest_URDF_description/urdf/`

Isaac Sim works with USD files, so we need to convert the URDF file to a USD file.
Next, let's go through the conversion steps in detail:
- Launch Isaac Sim:

  ```shell
  cd path/to/IsaacNext
  isaacsim
  ```

- Use the URDF Importer to import the Forrest URDF file. Before importing, change the following settings:
  - Choose `Moveable Base`
  - Choose `Create Collisions From Visuals`
  - Choose `Convex Decomposition`
- After you finish the steps above, Isaac Sim will generate a folder named `Forrest_URDF` next to the URDF file.
Go back to your IsaacNext repo:

```shell
cd path/to/IsaacNext
mkdir -p symlinks
```

The symlinks folder will store shortcuts (symbolic links) to external robot asset directories, so you don't need to move or copy large generated files.
Next, create a symbolic link to Forrest's generated USD folder, linking the generated `Forrest_URDF` folder into `IsaacNext/symlinks/`:

```shell
cd IsaacNext/symlinks
ln -s /absolute/path/to/urdfheim/complex/Forrest_URDF_description/urdf/Forrest_URDF Forrest_URDF
```
Your structure should now look like:

```
IsaacNext/
├── symlinks/
│   └── Forrest_URDF -> /absolute/path/to/.../Forrest_URDF
```
- In Isaac Sim, open `/IsaacNext/symlinks/Forrest_URDF/Forrest_URDF.usd`. Pay attention to the file suffix: this is a `.usd` file.
- After you open the `Forrest_URDF.usd` file, find the joints folder on the right-hand Stage panel in Isaac Sim:
- For anchor joints, set max force to 1000, stiffness to 1000, and damping to 10.
- For passive pantograph joints (r3b, r4f, r4b, l3b, l4f, l4b), set the stiffness to 0.
- Set the offsets of all the joints according to the joint limits list below.
- Use the Stage search for "collisions" and disable the Instanceable flag for all results.
- Use the Stage search for "mesh"; among the active items, look at all the bodies of the pantograph and the inner gears of the hip and disable collisions for them.
- For testing, add a ground plane and simulate, then check the collisions.
- Go to the joints and verify that all joints can be actuated.
Joint limits:

- `l0_acetabulofemoral_roll` = [-180, 180]
- `l1_acetabulofemoral_lateral` = [-10, 10]
- `l2_pseudo_acetabulofemoral_flexion` = [-120, 120]
- `l3b_femorotibial_back` = [-180, 180]
- `l4b_intertarsal_back` = [-180, 35]
- `l3f_femorotibial_front` = [-76, 18]
- `l4f_intertarsal_front` = [-180, 180]
- `Virtual_S23_Assyv18_mirror_1_anchor` = [-0.0573, 0.0573]
- `l5_metatarsophalangeal` = [-20, 40]
- `l6_interphalangeal` = [-100, 90]
- `l4p_intertarsal_pulley` = [-180, 180]
- `l2p_acetabulofemoral_pulley` = [-180, 180]
- `l2b_acetabulofemoral_flexion` = [-180, 180]
- `l2f_acetabulofemoral_flexion` = [-180, 180]
- `r0_acetabulofemoral_roll` = [-180, 180]
- `r1_acetabulofemoral_lateral` = [-10, 10]
- `r2_pseudo_acetabulofemoral_flexion` = [-120, 120]
- `r3b_femorotibial_back` = [-180, 180]
- `r4b_intertarsal_back` = [-180, 180]
- `r3f_femorotibial_front` = [-76, 18]
- `r4f_intertarsal_front` = [-180, 180]
- `r4p_intertarsal_pulley` = [-180, 180]
- `Virtual_S23_Assyv18_1_anchor` = [-0.0573, 0.0573]
- `r5_metatarsophalangeal` = [-20, 40]
- `r6_interphalangeal` = [-100, 90]
- `r2p_acetabulofemoral_pulley` = [-180, 180]
- `r2b_acetabulofemoral_flexion` = [-180, 180]
- `r2f_acetabulofemoral_flexion` = [-180, 180]
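When setting these limits programmatically or sanity-checking commanded joint targets, the same table can be kept in Python. The values are copied verbatim from the list above (units as shown there); the helper `in_limits` is our own addition, not part of any Isaac Lab API:

```python
# Joint limits for Forrest, copied from the list above.
JOINT_LIMITS = {
    "l0_acetabulofemoral_roll": (-180, 180),
    "l1_acetabulofemoral_lateral": (-10, 10),
    "l2_pseudo_acetabulofemoral_flexion": (-120, 120),
    "l3b_femorotibial_back": (-180, 180),
    "l4b_intertarsal_back": (-180, 35),
    "l3f_femorotibial_front": (-76, 18),
    "l4f_intertarsal_front": (-180, 180),
    "Virtual_S23_Assyv18_mirror_1_anchor": (-0.0573, 0.0573),
    "l5_metatarsophalangeal": (-20, 40),
    "l6_interphalangeal": (-100, 90),
    "l4p_intertarsal_pulley": (-180, 180),
    "l2p_acetabulofemoral_pulley": (-180, 180),
    "l2b_acetabulofemoral_flexion": (-180, 180),
    "l2f_acetabulofemoral_flexion": (-180, 180),
    "r0_acetabulofemoral_roll": (-180, 180),
    "r1_acetabulofemoral_lateral": (-10, 10),
    "r2_pseudo_acetabulofemoral_flexion": (-120, 120),
    "r3b_femorotibial_back": (-180, 180),
    "r4b_intertarsal_back": (-180, 180),
    "r3f_femorotibial_front": (-76, 18),
    "r4f_intertarsal_front": (-180, 180),
    "r4p_intertarsal_pulley": (-180, 180),
    "Virtual_S23_Assyv18_1_anchor": (-0.0573, 0.0573),
    "r5_metatarsophalangeal": (-20, 40),
    "r6_interphalangeal": (-100, 90),
    "r2p_acetabulofemoral_pulley": (-180, 180),
    "r2b_acetabulofemoral_flexion": (-180, 180),
    "r2f_acetabulofemoral_flexion": (-180, 180),
}


def in_limits(joint: str, value: float) -> bool:
    """Return True if `value` lies within the joint's configured range."""
    lo, hi = JOINT_LIMITS[joint]
    return lo <= value <= hi
```

Note that the two legs are not identical here: `l4b_intertarsal_back` is [-180, 35] while `r4b_intertarsal_back` is [-180, 180], so keep the sides separate rather than mirroring one list.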
After you have set the stiffness and motion range for each joint of the Forrest's USD file, the next step is to configure whether collisions are enabled for each joint. This ensures that the USD model more closely resembles the real physical world.
First, go to the Stage panel on the right side, locate the corresponding link, and click the ➕. Then find Collision, open it, and locate the corresponding item. Click the ➕ and select the mesh. In the Property panel below, go to Extra Properties, find Collision Enabled, and, following the list below, choose whether to activate or deactivate it.
The following list walks through these settings for each link:
- base_link > Collision:
  - Differential_Cage_Assyv7_mirror_1 mesh: Collision Enabled ✅️
  - Pulley_Linkage_10mm_Bearingv1_mirror_10 mesh: Collision Disabled ❌️
  - Pulley_Linkage_10mm_Bearingv1_mirror_3 mesh: Collision Disabled ❌️
- Differential_Cube_Assy_V2v4_mirror_1 > Collision:
  - Differential_Cube_Assy_V2v4_mirror_1 mesh: Collision Disabled ❌️
  - Outside_Hip_V2_Assyv28_mirror_1 mesh: Collision Disabled ❌️
- Knee_Assyv9_mirror_1 > Collision:
  - Knee_Assyv9_mirror_1 mesh: Collision Enabled ✅️
- S12p_Pantograph_Spring_Assy_Topv2_mirror_1 > Collision:
  - S12p_Pantograph_Spring_Assy_Topv2_mirror_1 mesh: Collision Disabled ❌️
  - S12p_Pantograph_Spring_Assy_Botv1_mirror_1 mesh: Collision Disabled ❌️
- S23_Assyv18_mirror_1_virtual > Collision:
  - S23_Assyv18_mirror_1 mesh: Collision Disabled ❌️
- S12_Front_Assyv6_mirror_1 > Collision:
  - S12_Front_Assyv6_mirror_1 mesh: Collision Disabled ❌️
- S23_Assyv18_mirror_1 > Collision:
  - S23_Assyv18_mirror_1 mesh: Collision Enabled ✅️
- S34_Foot_Connector_Assy_mirror_1 > Collision:
  - S34_Foot_Connector_Assy_mirror_1 mesh: Collision Enabled ✅️
- S45_Digit_Assyv2_mirror_1 > Collision:
  - S45_Digit_Assyv2_mirror_1 mesh: Collision Enabled ✅️
- Main_GST_Pully_Assyv4_mirror_1 > Collision:
  - Main_GST_Pully_Assyv4_mirror_1 mesh: Collision Disabled ❌️
- Inner_Gear_Assy_V2v13_mirror_1 > Collision:
  - Inner_Gear_Assy_V2v13_mirror_1 mesh: Collision Disabled ❌️
- Cable_Gear_Motor_V2v8_mirror_1 > Collision:
  - Cable_Gear_Motor_V2v8_mirror_1 mesh: Collision Disabled ❌️
- Cable_Gear_Motor_V2v8_mirror_2 > Collision:
  - Cable_Gear_Motor_V2v8_mirror_2 mesh: Collision Disabled ❌️
Now the collision parameters for one leg are set. The other leg, being fully symmetrical, uses identical settings.
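To keep these per-mesh settings reviewable (and usable from scripts when re-applying them after a fresh URDF import), they can be recorded as a plain mapping. This table only mirrors the list above; the `link/mesh` key format is our own convention, not an Isaac Sim path syntax:

```python
# Collision Enabled settings per link/mesh, copied from the list above.
# Note: S23_Assyv18_mirror_1 appears under two links with different values,
# which is why the key includes the parent link.
COLLISION_ENABLED = {
    "base_link/Differential_Cage_Assyv7_mirror_1": True,
    "base_link/Pulley_Linkage_10mm_Bearingv1_mirror_10": False,
    "base_link/Pulley_Linkage_10mm_Bearingv1_mirror_3": False,
    "Differential_Cube_Assy_V2v4_mirror_1/Differential_Cube_Assy_V2v4_mirror_1": False,
    "Differential_Cube_Assy_V2v4_mirror_1/Outside_Hip_V2_Assyv28_mirror_1": False,
    "Knee_Assyv9_mirror_1/Knee_Assyv9_mirror_1": True,
    "S12p_Pantograph_Spring_Assy_Topv2_mirror_1/S12p_Pantograph_Spring_Assy_Topv2_mirror_1": False,
    "S12p_Pantograph_Spring_Assy_Topv2_mirror_1/S12p_Pantograph_Spring_Assy_Botv1_mirror_1": False,
    "S23_Assyv18_mirror_1_virtual/S23_Assyv18_mirror_1": False,
    "S12_Front_Assyv6_mirror_1/S12_Front_Assyv6_mirror_1": False,
    "S23_Assyv18_mirror_1/S23_Assyv18_mirror_1": True,
    "S34_Foot_Connector_Assy_mirror_1/S34_Foot_Connector_Assy_mirror_1": True,
    "S45_Digit_Assyv2_mirror_1/S45_Digit_Assyv2_mirror_1": True,
    "Main_GST_Pully_Assyv4_mirror_1/Main_GST_Pully_Assyv4_mirror_1": False,
    "Inner_Gear_Assy_V2v13_mirror_1/Inner_Gear_Assy_V2v13_mirror_1": False,
    "Cable_Gear_Motor_V2v8_mirror_1/Cable_Gear_Motor_V2v8_mirror_1": False,
    "Cable_Gear_Motor_V2v8_mirror_2/Cable_Gear_Motor_V2v8_mirror_2": False,
}

# The meshes that keep collisions: knee, foot connector, digit, S23 link,
# and the differential cage -- i.e. the parts that actually touch the ground
# or each other.
enabled_meshes = sorted(k for k, v in COLLISION_ENABLED.items() if v)
```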
In Isaac Lab, you can use the following command to start reinforcement learning training for the robot in a selected environment:
```shell
./isaaclab.sh \
  -p scripts/reinforcement_learning/rsl_rl/train.py \
  --task=Isaac-Velocity-Rough-Forrest-v0 \
  --headless \
  --max_iterations=10000 \
  --num_envs=4096 \
  --resume
```
- `./isaaclab.sh`: launches the Isaac Lab wrapper script, which runs the specified Python program and automatically loads the Isaac Lab environment configuration.
- `-p scripts/reinforcement_learning/rsl_rl/train.py`: specifies the Python script to execute. Here we use the training entry script for RSL-RL, a reinforcement learning framework based on PyTorch.
- `--task=Isaac-Velocity-Rough-Forrest-v0`: selects the training task environment.
  - `Rough`: a scenario with complex, uneven terrain.
  - `Forrest`: the name of our robot.
  - `-v0`: the version number of this environment.
  - Alternatively, you can choose the `Flat` scenario (flat terrain), which is useful for training in obstacle-free environments.
- `--headless`: runs in headless mode (no rendering). This is recommended when training on servers or with large-scale parallel environments to save GPU/CPU memory and computation resources.
- `--max_iterations=10000`: sets the maximum number of training iterations, here 10,000. In each iteration, data is collected from multiple environments and used to update the policy.
- `--num_envs=4096`: specifies the number of parallel environments, here 4096 environments (robots) running simultaneously.
  - The more environments, the faster the data sampling, but the higher the demand on GPU/CPU resources.
  - With our current setup (RTX 5080 GPU and 64 GB of RAM), 4096 environments is the upper limit, so choose an appropriate number of environments when running training.
  - These 4096 robots are independent. If you enable rendering, you may sometimes see them visually overlapping, but their trajectories and training batches remain unaffected.
- `--resume`: continues training from a previously saved checkpoint instead of starting from scratch.
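Before passing `--resume`, you can check that a checkpoint actually exists. The `logs/rsl_rl/` location below is the usual default for RSL-RL runs in Isaac Lab, but verify the path on your install before relying on it:

```shell
# List recent RSL-RL runs, newest first (log path is an assumption; adjust if needed)
if [ -d logs/rsl_rl ]; then
  ls -lt logs/rsl_rl | head -n 5
else
  echo "no logs/rsl_rl directory yet -- run a training first"
fi
```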
```shell
./isaaclab.sh \
  -p scripts/reinforcement_learning/rsl_rl/play.py \
  --task=Isaac-Velocity-Flat-Forrest-Play-v0 \
  --num_envs=10
```
- `./isaaclab.sh`: launches the Isaac Lab wrapper script, which runs the specified Python program and automatically loads the Isaac Lab environment configuration.
- `-p scripts/reinforcement_learning/rsl_rl/play.py`: specifies the Python script to execute. Here we use play.py, which loads a trained policy and runs it for testing (letting the robot "play" with what it has learned).
- `--task=Isaac-Velocity-Flat-Forrest-Play-v0`: selects the task environment to run.
  - `Flat`: a flat terrain scenario without obstacles, making it easier to observe the learned behavior.
  - `Forrest`: the name of our robot.
  - `Play`: indicates this is a test/demo environment for running an already trained policy rather than training from scratch.
  - `-v0`: the version number of this environment.
- `--num_envs=10`: specifies the number of parallel environments, here 10 robots running simultaneously. This smaller number is convenient for testing and visualization, allowing you to observe the performance of multiple robots at once.
More to come ...
📫 Maintained by RoboTUM Humanoid Project Team
If you have any questions, feel free to open an issue or contact the maintainers on our Slack channel.


