RLinf is a flexible and scalable open-source infrastructure designed for post-training foundation models via reinforcement learning. The 'inf' in RLinf stands for Infrastructure, highlighting its role as a robust backbone for next-generation training. It also stands for Infinite, symbolizing the system’s support for open-ended learning, continuous generalization, and limitless possibilities in intelligence development.
- [2025/08] RLinf is open-sourced. The formal v0.1 release is coming soon, and the paper "RLinf: Flexible and Efficient Large-scale Reinforcement Learning via Macro-to-Micro Flow Transformation" will be released alongside it.
RLinf is unique with:
- Macro-to-Micro Flow: a new paradigm, M2Flow, that executes macro-level logical flows through micro-level execution flows, decoupling logical workflow construction (programmability) from physical communication and scheduling (efficiency).
- Flexible Execution Modes (see the placement sketch after this list)
  - Collocated mode: shares all GPUs across all workers.
  - Disaggregated mode: enables fine-grained pipelining.
  - Hybrid mode: a customizable combination of the two, integrating collocated and disaggregated placement.
- Auto-scheduling Strategy: automatically selects the most suitable execution mode for a given training workload, with no manual resource allocation required.
- Embodied Agent Support
  - Fast adaptation support for mainstream VLA models: OpenVLA, OpenVLA-OFT, and π₀.
  - Support for mainstream CPU- and GPU-based simulators via standardized RL interfaces: ManiSkill3, LIBERO.
  - Enables the first RL fine-tuning of the π₀ model family with a flow-matching action expert.
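To make the execution modes above concrete, here is a minimal sketch of how a placement choice might be expressed. The `PlacementMode` enum, `Placement` dataclass, and worker-group names are hypothetical illustrations, not RLinf's actual configuration API; see the documentation for the real schema.

```python
from dataclasses import dataclass
from enum import Enum, auto

class PlacementMode(Enum):
    COLLOCATED = auto()     # all workers share every GPU in the group
    DISAGGREGATED = auto()  # workers own disjoint GPUs, enabling fine-grained pipelining

@dataclass
class Placement:
    mode: PlacementMode
    gpus: list[int]  # GPU indices assigned to this worker group

# Hypothetical hybrid layout on one 8-GPU node: rollout and training collocate
# on GPUs 0-5 while the environment simulator runs disaggregated on GPUs 6-7.
placement = {
    "rollout": Placement(PlacementMode.COLLOCATED, gpus=[0, 1, 2, 3, 4, 5]),
    "actor":   Placement(PlacementMode.COLLOCATED, gpus=[0, 1, 2, 3, 4, 5]),
    "env":     Placement(PlacementMode.DISAGGREGATED, gpus=[6, 7]),
}
```

In a hybrid layout like this, the collocated workers time-share their GPUs while the disaggregated simulator keeps its own GPUs busy in a pipeline with generation.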
RLinf is fast with:
- Hybrid mode with fine-grained pipelining: achieves a 120%+ throughput improvement compared to other frameworks.
- Automatic Online Scaling Strategy: dynamically scales training resources, with GPU switching completed within seconds, further improving efficiency by 20–40% while preserving the on-policy nature of RL algorithms.
RLinf is flexible and easy to use with:
- Multiple Backend Integrations
  - FSDP + Hugging Face: rapid adaptation to new models and algorithms, ideal for beginners and fast prototyping.
  - Megatron + SGLang: optimized for large-scale training, delivering maximum efficiency for expert users with demanding workloads.
- Adaptive communication via the asynchronous communication channel
- Built-in support for popular RL methods, including PPO, GRPO, DAPO, Reinforce++, and more (a GRPO advantage sketch follows this list).
- Support for heterogeneous GPUs
- Support for asynchronous pipeline execution
- Support for Mixture of Experts (MoE)
- Support for vLLM inference backend
- Support for Vision-Language Model (VLM) training
- Support for deep searcher agent training
- Support for multi-agent training
- Support for integration with more embodied simulators (e.g., Meta-World, GENESIS)
- Support for more Vision-Language-Action (VLA) models, such as GR00T
- Support for world models
- Support for real-world RL for embodied intelligence
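As a concrete example of one of the supported algorithms, GRPO computes advantages by normalizing each sampled response's reward against the mean and standard deviation of the group of responses drawn for the same prompt. The sketch below illustrates only that computation and is not RLinf's implementation.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages as used by GRPO.

    rewards has shape (num_prompts, group_size): one scalar reward per sampled
    response. Each response is scored relative to the other responses sampled
    for the same prompt.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts with 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.5, 0.5, 1.0, 0.0]])
print(grpo_advantages(rewards))
```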
Complete documentation for RLinf can be found here.
Quickstart
- Installation
- Quickstart 1: PPO Training of VLAs on ManiSkill3
- Quickstart 2: GRPO Training of LLMs on MATH
- Multi-node Training
- Model Evaluation
Key Design
- Unified User Interface Usage
- Flexible Execution Modes
- Enable Automatic Scheduling
- Elastic Communication
Example Gallery
Advanced Features
- 5D Parallelism Configuration for Megatron-LM (see the sketch after this list)
- LoRA Integration for efficient fine-tuning
- Switch between different versions of SGLang
- Checkpoint Resume and Recovery Support
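For the 5D parallelism configuration listed above, Megatron-LM composes tensor, pipeline, context, expert, and data parallelism; the data-parallel degree is what remains after the model-parallel dimensions divide the GPU count. The helper below is a generic illustration of that constraint, not an RLinf or Megatron API.

```python
def derive_data_parallel_size(world_size: int, tp: int, pp: int, cp: int) -> int:
    """Data-parallel degree implied by the other dimensions.

    Megatron-LM factorizes the GPU grid as
    world_size = tensor_parallel * pipeline_parallel * context_parallel * data_parallel.
    Expert parallelism, the fifth dimension, partitions only the MoE expert
    layers and is configured on top of this grid.
    """
    denom = tp * pp * cp
    assert world_size % denom == 0, "tp * pp * cp must divide world_size"
    return world_size // denom

# Example: 64 GPUs with TP=4, PP=2, CP=2 leaves DP=4.
print(derive_data_parallel_size(64, tp=4, pp=2, cp=2))
```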
Extending the Framework:
- Adding new Environments (see the environment sketch after this list)
- Adding new Models with FSDP+Hugging Face backend
- Adding new Models with Megatron+SGLang backend
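For adding new environments, the usual shape of the work is a Gymnasium-style interface with `reset` and `step`; the skeleton below uses the standard `gymnasium` API purely as an illustration. RLinf's actual environment base class and registration hooks may differ, so treat the class name and spaces here as placeholders.

```python
import gymnasium as gym
import numpy as np

class MyTabletopEnv(gym.Env):
    """Minimal Gymnasium-style environment skeleton (illustrative placeholder)."""

    def __init__(self):
        # Observation and action spaces are arbitrary placeholders.
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(32,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(7,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        obs = np.zeros(32, dtype=np.float32)
        return obs, {}  # observation, info

    def step(self, action):
        obs = np.zeros(32, dtype=np.float32)
        reward, terminated, truncated = 0.0, False, False
        return obs, reward, terminated, truncated, {}
```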
Blogs
| Type | Status |
|---|---|
| Reasoning RL-MATH | |
| Embodied RL-VLA | |
We welcome contributions to RLinf. Please read the contribution guide before getting started.
If you find RLinf helpful, please cite the GitHub repository:
@misc{RLinf_repo,
title = {RLinf: Reinforcement Learning Infrastructure for Agentic AI},
howpublished = {\url{https://github.com/RLinf/RLinf}},
note = {GitHub repository},
year = {2025}
}

Paper: A full paper describing RLinf will be released by September 20, 2025. We will update this section with the official citation and BibTeX when they become available.
Acknowledgements
RLinf has been inspired by, and benefits from, the ideas and tooling of the broader open-source community. In particular, we would like to thank the teams and contributors behind VeRL, AReaL, Megatron-LM, SGLang, and PyTorch Fully Sharded Data Parallel (FSDP). If we have inadvertently missed your project or contribution, please open an issue or a pull request so we can credit you properly.
Contact: We welcome applications from Postdocs, PhD/Master's students, and interns. Join us in shaping the future of RL infrastructure and embodied AI!
- Chao Yu: [email protected]
- Yu Wang: [email protected]