leap-c provides tools for learning optimal control policies using Imitation Learning (IL) and Reinforcement Learning (RL) to enhance Model Predictive Control (MPC) algorithms. It is built on top of CasADi, acados, and PyTorch.
leap-c can be set up by following the installation guide.
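A typical setup sketch, assuming the repository lives at `github.com/leap-c/leap-c` (as the citation below suggests) and supports an editable pip install; the installation guide remains the authoritative reference:

```shell
# Clone the repository and install it into the current Python environment.
git clone https://github.com/leap-c/leap-c.git
cd leap-c
pip install -e .
```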
Please see the Getting started section.
For questions and support, open a new thread or browse the existing ones on the GitHub discussions page.
If you are using code from this repository in your work, please use the following citation for now:
```bibtex
@software{leap-c,
  author    = {Leonard Fichtner and
               Dirk Reinhardt and
               Jasper Hoffmann and
               Filippo Airaldi and
               Jonathan Frey and
               Josip Kir Hromatko and
               Katrin Baumgaertner and
               Mazen Amria and
               Rudolf Reiter and
               Shambhuraj Sawant},
  title     = {leap-c/leap-c: v0.1.0-alpha},
  month     = oct,
  year      = 2025,
  publisher = {Zenodo},
  version   = {v0.1.0-alpha},
  doi       = {10.5281/zenodo.17244101},
  url       = {https://doi.org/10.5281/zenodo.17244101},
  swhid     = {swh:1:dir:ed535c814cc331317c03dd13d3ccc782dbb05ff2;origin=https://doi.org/10.5281/zenodo.17244100;visit=swh:1:snp:f81ddd085e543de1b55450efd7305d2a138c906e;anchor=swh:1:rel:10559cfe0ced879c85db62a28d2ac1e9f27c187a;path=leap-c-leap-c-9159013},
}
```
The following projects pursue similar ideas and may be of interest:
- mpc.pytorch: Early work on embedding MPC in PyTorch for end-to-end learning, with a more restricted class of MPC problems
- mpcrl: A lighter-weight codebase for using RL with MPC as the function approximator
- Neuromancer: A differentiable programming library that supports including parametric optimization layers (such as MPC) in PyTorch computational graphs
- ntnu-itk-autonomous-ship-lab/rlmpc: A codebase tailored for marine vessel control using RL and MPC