RLinf is a flexible and scalable open-source RL infrastructure designed for Embodied and Agentic AI. The 'inf' in RLinf stands for Infrastructure, highlighting its role as a robust backbone for next-generation training. It also stands for Infinite, symbolizing the system’s support for open-ended learning, continuous generalization, and limitless possibilities in intelligence development.
- [2026/02] 🔥 The technical report of our real-world online learning system, RLinf-USER: A Unified and Extensible System for Real-World Online Policy Learning in Embodied AI, is released. Doc: RLinf-USER.
- [2026/02] 🔥 RLinf supports reinforcement learning fine-tuning for Dexbotic. Doc: RL on Dexbotic Model.
- [2026/02] 🔥 RLinf supports reinforcement learning with GSEnv for Real2Sim2Real. Doc: RL with GSEnv.
- [2026/01] 🔥 RLinf supports reinforcement learning fine-tuning for OpenSora World Model. Doc: RL on OpenSora World Model.
- [2026/01] 🔥 RLinf supports reinforcement learning fine-tuning for RoboTwin. Doc: RL on RoboTwin.
- [2026/01] 🔥 RLinf supports SAC training for flow-matching policies. Doc: SAC-Flow, paper: SAC Flow: Sample-Efficient Reinforcement Learning of Flow-Based Policies via Velocity-Reparameterized Sequential Modeling.
- [2025/12] 🔥 RLinf supports agentic reinforcement learning on Search-R1. Doc: Search-R1.
- [2025/12] 🔥 RLinf v0.2-pre is open-sourced. We support real-world RL with Franka. Doc: RL on Franka in the RealWorld.
- [2025/12] 🔥 RLinf supports reinforcement learning fine-tuning for RoboCasa. Doc: RL on RoboCasa.
- [2025/12] 🎉 Official release of RLinf v0.1.
- [2025/11] 🔥 RLinf supports reinforcement learning fine-tuning for CALVIN. Doc: RL on CALVIN.
- [2025/11] 🔥 RLinf supports reinforcement learning fine-tuning for IsaacLab. Doc: RL on IsaacLab.
- [2025/11] 🔥 RLinf supports reinforcement learning fine-tuning for GR00T-N1.5. Doc: RL on GR00T-N1.5.
- [2025/11] 🔥 RLinf supports reinforcement learning fine-tuning for Metaworld. Doc: RL on Metaworld.
- [2025/11] 🔥 RLinf supports reinforcement learning fine-tuning for BEHAVIOR-1K. Doc: RL on BEHAVIOR-1K.
- [2025/11] Added LoRA support for π₀ and π₀.₅.
- [2025/10] 🔥 RLinf supports reinforcement learning fine-tuning for π₀ and π₀.₅! Doc: RL on π₀ and π₀.₅ Models. For more technical details, refer to the RL fine-tuning for π₀ and π₀.₅ technical report. Reports on πRL by Machine Heart and RoboTech have also been released.
- [2025/10] 🔥 RLinf now officially supports online reinforcement learning! Doc: coding_online_rl, Blog post: The first open-source agent online RL framework RLinf-Online.
- [2025/10] 🔥 The RLinf Algorithm Technical Report RLinf-VLA: A Unified and Efficient Framework for VLA+RL Training is released.
- [2025/09] 🔥 The Example Gallery has been updated; users can find various off-the-shelf examples!
- [2025/09] The paper RLinf: Flexible and Efficient Large-scale Reinforcement Learning via Macro-to-Micro Flow Transformation is released.
- [2025/09] The report on RLinf by Machine Heart is released.
- [2025/08] RLinf is open-sourced. The formal v0.1 will be released soon.
RLinf offers high flexibility, supporting diverse RL training workflows (PPO, GRPO, SAC, and so on) while hiding the complexity of distributed programming. Users can easily scale RL training to a large number of GPU nodes without modifying code, meeting the growing computational demands of RL training.
This flexibility also allows RLinf to explore more efficient scheduling and execution strategies. The hybrid execution mode for embodied RL achieves up to 2.434× the throughput of existing frameworks.
Multiple Backend Integrations
- FSDP + HuggingFace/SGLang/vLLM: rapid adaptation to new models and algorithms, ideal for beginners and fast prototyping.
- Megatron + SGLang/vLLM: optimized for large-scale training, delivering maximum efficiency for expert users with demanding workloads.
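As a rough sketch of how these two stacks are typically launched, the commands below point at an FSDP-based example for prototyping and a Megatron-based example for large-scale training. The script names and paths are placeholders invented for illustration, not actual files in the RLinf repository; see the example gallery and documentation for the real entry points.

```bash
# Hypothetical launch commands; script names/paths are placeholders only.

# FSDP + HuggingFace backend: fast prototyping of a new model or algorithm
bash examples/embodiment/run_fsdp_prototype.sh

# Megatron + SGLang backend: large-scale, multi-node training for maximum efficiency
bash examples/reasoning/run_megatron_large_scale.sh
```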
RLinf supports a wide range of simulators, real-world robotics platforms, models, and algorithms.
Both single-agent and multi-agent settings are supported.
Installation: Users can refer to our installation guide to install RLinf. We recommend using the provided Docker image (i.e., Installation Method 1), as the environment and dependencies for embodied RL are complex.
Run a simple example: After setting up the environment, users can run a simple embodied RL example with the ManiSkill3 simulator by following this document.
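A minimal sketch of this quick-start flow is given below; the Docker image name, repository URL, and example script path are assumptions for illustration, so follow the installation guide and the ManiSkill3 document for the exact commands.

```bash
# Minimal quick-start sketch; the image name and script path are placeholders,
# not the exact names used by RLinf.

# 1) Installation Method 1: pull and enter the provided Docker image
docker pull <rlinf-embodied-image>
docker run --gpus all -it <rlinf-embodied-image> /bin/bash

# 2) Get the source code (repository URL assumed)
git clone https://github.com/RLinf/RLinf.git
cd RLinf

# 3) Run a simple embodied RL example on the ManiSkill3 simulator
bash examples/embodiment/run_maniskill3_example.sh   # placeholder script name
```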
SOTA RL Training Reproduction: RLinf provides end-to-end recipes that reproduce or match state-of-the-art (SOTA) RL results out of the box—users can directly run our configs and scripts to obtain SOTA performance without custom engineering. Check out our example gallery for more details.
RLinf has comprehensive CI tests covering both the core components (via unit tests) and end-to-end RL training workflows for embodied, agentic, and reasoning scenarios. The CI test suites run on the main branch are:
- unit-tests
- agent-reason-e2e-tests
- embodied-e2e-tests
- scheduler-tests
We welcome contributions to RLinf. Please read the contribution guide before contributing. We thank the following contributors and welcome more developers to join us in this open-source project.
If you find RLinf helpful, please cite the paper:
@article{yu2025rlinf,
title={RLinf: Flexible and Efficient Large-scale Reinforcement Learning via Macro-to-Micro Flow Transformation},
author={Yu, Chao and Wang, Yuanqing and Guo, Zhen and Lin, Hao and Xu, Si and Zang, Hongzhi and Zhang, Quanlu and Wu, Yongji and Zhu, Chunyang and Hu, Junhao and others},
journal={arXiv preprint arXiv:2509.15965},
year={2025}
}
If you use RL+VLA in RLinf, you can also cite our technical report and empirical study paper:
@article{zang2025rlinf,
title={RLinf-VLA: A Unified and Efficient Framework for VLA+RL Training},
author={Zang, Hongzhi and Wei, Mingjie and Xu, Si and Wu, Yongji and Guo, Zhen and Wang, Yuanqing and Lin, Hao and Shi, Liangzhi and Xie, Yuqing and Xu, Zhexuan and others},
journal={arXiv preprint arXiv:2510.06710},
year={2025}
}
@article{liu2025can,
title={What Can RL Bring to VLA Generalization? An Empirical Study},
author={Liu, Jijia and Gao, Feng and Wei, Bingwen and Chen, Xinlei and Liao, Qingmin and Wu, Yi and Yu, Chao and Wang, Yu},
journal={arXiv preprint arXiv:2505.19789},
year={2025}
}
@article{chen2025pi_,
title={$\pi_{\texttt{RL}}$: Online RL Fine-tuning for Flow-based Vision-Language-Action Models},
author={Chen, Kang and Liu, Zhihao and Zhang, Tonghe and Guo, Zhen and Xu, Si and Lin, Hao and Zang, Hongzhi and Zhang, Quanlu and Yu, Zhaofei and Fan, Guoliang and others},
journal={arXiv preprint arXiv:2510.25889},
year={2025}
}
If you train your policies in the physical world with RLinf, you can cite our paper:
@article{zang2026rlinfuser,
title={RLinf-USER: A Unified and Extensible System for Real-World Online Policy Learning in Embodied AI},
author={Hongzhi Zang and Shu'ang Yu and Hao Lin and Tianxing Zhou and Zefang Huang and Zhen Guo and Xin Xu and Jiakai Zhou and Yuze Sheng and Shizhe Zhang and Feng Gao and Wenhao Tang and Yufeng Yue and Quanlu Zhang and Xinlei Chen and Chao Yu and Yu Wang},
year={2026},
journal={arXiv preprint arXiv:2602.07837},
url={https://arxiv.org/abs/2602.07837},
}
Acknowledgements
RLinf has been inspired by, and benefits from, the ideas and tooling of the broader open-source community. In particular, we would like to thank the teams and contributors behind VeRL, AReaL, Megatron-LM, SGLang, and PyTorch Fully Sharded Data Parallel (FSDP). If we have inadvertently missed your project or contribution, please open an issue or a pull request so we can properly credit you.
Contact: We welcome applications from Postdocs, PhD/Master's students, and interns. Join us in shaping the future of RL infrastructure and embodied AI!
- Chao Yu: [email protected]
- Yu Wang: [email protected]