This repository contains the official implementation for the paper "Versatile Loco-Manipulation through Flexible Interlimb Coordination", accepted to CoRL 2025.
Project Website | arXiv Paper | Blog Post | X | Threads
Our ReLIC policy enables a quadruped robot to walk with three legs and manipulate with the arm and one leg.
NEWS: Discover how ReLIC enables autonomous, dynamic whole-body manipulation through a sampling-based optimizer: Blog Post | X | YouTube
Disclaimer: This code is released as a research prototype and is not intended for production use. It may contain incomplete features or bugs. The RAI Institute does not provide maintenance or support for this software. Contributions via pull requests are welcome.
Reinforcement Learning for Interlimb Coordination (ReLIC) is an approach that enables versatile loco-manipulation through flexible interlimb coordination. The core of our method is a single, adaptive controller that learns to seamlessly coordinate all of a robot's limbs. It intelligently bridges the execution of precise manipulation motions with the generation of stable, dynamic gaits, allowing the robot to interact with its environment in a versatile and robust manner.
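As a purely illustrative sketch of the interlimb-coordination idea (not the actual ReLIC policy, which is a learned controller), one can think of each limb as being in either a locomotion or a manipulation mode, with the controller selecting the appropriate per-limb target. All names below (`blend_limb_targets`, the limb keys, the mode mask) are hypothetical:

```python
# Illustrative only: blend per-limb joint targets between a gait generator
# and a manipulation controller, selected by per-limb mode flags.
# All names here are hypothetical, not from the ReLIC codebase.

def blend_limb_targets(gait_targets, manip_targets, manip_mask):
    """For each limb, pick the manipulation target when the limb is in
    manipulation mode, otherwise the locomotion (gait) target.

    gait_targets:  {limb_name: joint_target}
    manip_targets: {limb_name: joint_target}
    manip_mask:    {limb_name: bool}  # True = limb is manipulating
    """
    return {
        limb: (manip_targets[limb] if manip_mask[limb] else gait_targets[limb])
        for limb in gait_targets
    }

# Example: the front-left leg manipulates while the other three legs walk.
gait = {"fl": 0.1, "fr": 0.2, "hl": 0.3, "hr": 0.4}
manip = {"fl": 0.9, "fr": 0.0, "hl": 0.0, "hr": 0.0}
mask = {"fl": True, "fr": False, "hl": False, "hr": False}
targets = blend_limb_targets(gait, manip, mask)
# targets["fl"] == 0.9; the remaining limbs follow their gait targets
```

In the actual method this hard switch is replaced by a single learned policy that coordinates all limbs jointly; the sketch only conveys the mode-per-limb intuition.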
This project uses Pixi to manage dependencies and ensure a consistent development environment. You won't need to use conda or virtualenv separately or manually install IsaacSim or IsaacLab. Just follow the steps below to get started.
1. Install Pixi

   ```bash
   curl -fsSL https://pixi.sh/install.sh | sh -
   ```

2. Clone the repository

   ```bash
   git clone https://github.com/bdaiinstitute/relic.git && cd relic
   ```

3. Install dependencies

   ```bash
   pixi install
   ```

4. Activate the environment

   ```bash
   pixi shell
   ```
Alternatively, you can install the project without Pixi by following the standard installation guides for IsaacLab and its extensions.
Train a policy:

```bash
python scripts/rsl_rl/train.py --task Isaac-Spot-Interlimb-Phase-1-v0 --headless
```

Play a trained policy:

```bash
python scripts/rsl_rl/play.py --task Isaac-Spot-Interlimb-Play-v0 --center
```

To achieve optimal deployment results, we implemented a weight curriculum with multiple training phases. Users can fine-tune the models from Phase-2 to Phase-4 to reproduce the results presented in our paper. The pre-trained weights can be found in `relic/source/relic/relic/assets/spot/pretrained`.
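Before fine-tuning, it can be convenient to enumerate which checkpoints are available per phase. The helper below is a hypothetical sketch: the per-phase subdirectory layout and `.pt` extension are assumptions, not the repository's documented structure.

```python
from pathlib import Path

def list_phase_checkpoints(root):
    """Return {phase_name: [checkpoint paths]} for .pt files found in
    per-phase subdirectories of `root`. The directory layout and file
    extension are assumptions, not taken from the ReLIC repository."""
    root = Path(root)
    return {
        phase.name: sorted(phase.glob("*.pt"))
        for phase in sorted(root.iterdir())
        if phase.is_dir()
    }

# Hypothetical usage against the pretrained-weights directory:
# ckpts = list_phase_checkpoints("relic/source/relic/relic/assets/spot/pretrained")
```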
```bibtex
@inproceedings{zhu2025versatile,
  title={Versatile Loco-Manipulation through Flexible Interlimb Coordination},
  author={Xinghao Zhu and Yuxin Chen and Lingfeng Sun and Farzad Niroui and Simon Le Cleac'h and Jiuguang Wang and Kuan Fang},
  booktitle={9th Annual Conference on Robot Learning},
  year={2025},
  url={https://openreview.net/forum?id=Spg25qkV81}
}
```
We use RSL_RL for RL training and adapt the following files from IsaacLabExtensionTemplate:

- `scripts/rsl_rl`
- `source/relic/pyproject.toml`
- `source/relic/setup.py`