A lightweight, YAML-driven robot simulator for navigation, control, and learning
IR-SIM is an open-source, Python-based, lightweight robot simulator designed for navigation, control, and learning. It provides a simple, user-friendly framework with built-in collision detection for modeling robots, sensors, and environments. Ideal for academic and educational use, IR-SIM enables rapid prototyping of robotics and learning algorithms in custom scenarios with minimal coding and hardware requirements.
- Simulate robot platforms with diverse kinematics, sensors, and behaviors.
- Quickly configure and customize scenarios using straightforward YAML files. No complex coding required.
- Visualize simulation outcomes with a lightweight matplotlib-based visualizer for immediate debugging.
- Support collision detection and customizable behavior policies for each object.
- Suitable for multi-agent/multi-robot learning (see the projects listed below).
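As a rough illustration of the collision-detection primitive behind circular robots, here is a generic circle-circle overlap test. This is a self-contained sketch of the concept, not IR-SIM's internal code:

```python
import math

def circles_collide(c1, r1, c2, r2):
    """Return True if two circular robots/obstacles overlap.

    c1, c2 are (x, y) centers; r1, r2 are radii. The cheapest collision
    primitive for circle-shaped objects: compare the center distance
    against the sum of radii.
    """
    return math.hypot(c2[0] - c1[0], c2[1] - c1[1]) <= r1 + r2

# Two robots of radius 0.2: centers 0.3 apart overlap, 1.0 apart do not.
print(circles_collide((0, 0), 0.2, (0.3, 0), 0.2))  # True
print(circles_collide((0, 0), 0.2, (1.0, 0), 0.2))  # False
```

Rectangles, polygons, and line strings need richer tests (e.g. separating axes), but the pattern is the same: a pairwise geometric check per simulation step.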
Demo scenarios:

- Multi-Robot RVO Collision Avoidance
- Ackermann Robot with 2D LiDAR
- HM3D / MatterPort3D Grid Map
- Field-of-View Detection
- Dynamic Random Obstacles
- 200-Agent ORCA via pyrvo
Requires Python >= 3.10.

Install from PyPI:

```shell
pip install ir-sim

# Optional: keyboard control and all extras
pip install ir-sim[all]
```

Install from source:

```shell
git clone https://github.com/hanruihua/ir-sim.git
cd ir-sim
pip install -e .
```

Or, using uv:

```shell
git clone https://github.com/hanruihua/ir-sim.git
cd ir-sim
uv sync
```

A minimal example: a differential-drive robot navigates toward a goal using the built-in dash behavior.
```python
import irsim

env = irsim.make('robot_world.yaml')  # initialize the environment from the configuration file

for i in range(300):     # run the simulation for 300 steps
    env.step()           # update the environment
    env.render()         # render the environment

    if env.done():       # check if the simulation is done
        break

env.end()                # close the environment
```

YAML configuration: `robot_world.yaml`
```yaml
world:
  height: 10        # the height of the world
  width: 10         # the width of the world
  step_time: 0.1    # 10 Hz per simulation step
  sample_time: 0.1  # 10 Hz for rendering and data extraction
  offset: [0, 0]    # the offset of the world on x and y

robot:
  kinematics: {name: 'diff'}            # omni, diff, acker
  shape: {name: 'circle', radius: 0.2}  # circle of radius 0.2
  state: [1, 1, 0]                      # x, y, theta
  goal: [9, 9, 0]                       # x, y, theta
  behavior: {name: 'dash'}              # move straight toward the goal
  color: 'g'                            # green
```

For more examples, see the usage directory and the documentation.
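The `dash` behavior configured above simply drives straight at the goal. A self-contained sketch of such a controller for a `diff` (differential-drive) robot, with illustrative names and gains rather than IR-SIM's internals:

```python
import math

def dash_step(state, goal, dt=0.1, v_max=1.0, w_max=2.0):
    """One control step of a dash-style behavior for a differential-drive
    robot: rotate toward the goal, drive forward when roughly facing it.

    state = (x, y, theta), goal = (gx, gy). Gains and saturation limits
    are illustrative, not taken from IR-SIM.
    """
    x, y, theta = state
    gx, gy = goal
    # Heading error toward the goal, wrapped to [-pi, pi]
    desired = math.atan2(gy - y, gx - x)
    err = (desired - theta + math.pi) % (2 * math.pi) - math.pi
    # Proportional steering, saturated at the angular-velocity limit
    w = max(-w_max, min(w_max, 2.0 * err))
    # Drive forward only when roughly facing the goal; slow down near it
    dist = math.hypot(gx - x, gy - y)
    v = min(v_max, dist) if abs(err) < math.pi / 2 else 0.0
    # Unicycle kinematic update: x' = x + v cos(theta) dt, etc.
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

state = (1.0, 1.0, 0.0)        # matches the YAML start state
for _ in range(300):           # 300 steps at dt = 0.1, as in the example
    state = dash_step(state, (9.0, 9.0))
print(state)                   # ends near the goal (9, 9)
```

The same loop structure (compute a velocity command, integrate the kinematics over `step_time`) is what any such simulator repeats at every step.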
| Category | Features |
|---|---|
| Kinematics | Differential-drive mobile robot · Omnidirectional mobile robot · Ackermann-steering mobile robot |
| Sensors | 2D LiDAR · FOV Detector |
| Geometries | Circle · Rectangle · Polygon · LineString · Binary Grid Map |
| Behaviors | dash (move directly toward goal) · RVO (Reciprocal Velocity Obstacle) · ORCA (Optimal Reciprocal Collision Avoidance) |
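RVO and ORCA both reason about velocity obstacles: the set of relative velocities that would cause a collision within some time horizon. A minimal membership test for the velocity obstacle induced by one circular agent, as an illustration of the concept only (not IR-SIM's implementation):

```python
import math

def in_velocity_obstacle(p_a, p_b, v_rel, r_sum, horizon=10.0):
    """Return True if relative velocity v_rel = v_a - v_b drives agent A
    into agent B (combined radius r_sum) within the time horizon.

    Solves |(p_b - p_a) - t * v_rel| = r_sum as a quadratic in t and
    checks whether a collision time falls in [0, horizon]. Illustrative
    sketch of the velocity-obstacle idea, not a full RVO/ORCA solver.
    """
    px, py = p_b[0] - p_a[0], p_b[1] - p_a[1]   # relative position
    vx, vy = v_rel
    a = vx * vx + vy * vy
    b = -2 * (px * vx + py * vy)
    c = px * px + py * py - r_sum * r_sum
    if a == 0:
        return c <= 0          # no relative motion: colliding iff overlapping
    disc = b * b - 4 * a * c
    if disc < 0:
        return False           # relative ray misses the disc entirely
    t0 = (-b - math.sqrt(disc)) / (2 * a)
    t1 = (-b + math.sqrt(disc)) / (2 * a)
    return t1 >= 0 and t0 <= horizon   # collision interval meets [0, horizon]

# Head-on relative velocity collides; perpendicular or receding does not.
print(in_velocity_obstacle((0, 0), (2, 0), (1, 0), 0.5))   # True
print(in_velocity_obstacle((0, 0), (2, 0), (0, 1), 0.5))   # False
print(in_velocity_obstacle((0, 0), (2, 0), (-1, 0), 0.5))  # False
```

An RVO/ORCA planner builds on this by choosing, among velocities outside all such (reciprocally adjusted) obstacles, the one closest to the agent's preferred velocity.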
- English: https://ir-sim.readthedocs.io/en
- Chinese (中文): https://ir-sim.readthedocs.io/zh-cn
- [RAL & ICRA 2023] rl-rvo-nav -- Reinforcement learning-based RVO behavior for multi-robot navigation.
- [RAL & IROS 2023] RDA_planner -- Accelerated collision-free motion planner for cluttered environments.
- [T-RO 2025] NeuPAN -- Direct point robot navigation with end-to-end model-based learning.
- DRL-robot-navigation-IR-SIM -- Deep reinforcement learning for robot navigation.
- AutoNavRL -- Autonomous navigation using reinforcement learning.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
IR-SIM is released under the MIT License.