This is an MPC (model predictive control) controller implementation for the steering control challenge (see the full description below). The approach was to first identify a dynamic model of the system from the provided simulated data runs using the `pysindy` library, and then to design and tune an MPC controller around that model, again using the simulator.

The `pysindy` package provides tools for sparse identification of nonlinear dynamics (SINDy) of a dynamical system from data traces of system runs. Here, I applied the identification to rollout runs of the steering simulator to obtain an approximate sparse symbolic model of the system. After some parameter optimization, the model and controller performed well in the challenge ranking, placing in the top 14 of the leaderboard at the time of submission (see details below).
The MPC controller code was adapted from a related problem (credit 1).

Credits: (1) the reference MPC implementation by Mark Misin (see also the header in the Python code); (2) the `pysindy` Python package (https://pypi.org/project/pysindy/).
Follow the task's original instructions and use the `mpcMainParams` controller. Please note that the controller requires the Python packages `qpsolvers` and `cvxpy` to be installed (see the `requirements.txt` file).
I used the scripts in the `pysindy_optimization` directory to explore the model space for a symbolic model.
Once the sparse system model was identified, I integrated it into the MPC controller model. The controller design therefore consisted of two main steps:

- Model identification using the `pySindy` algorithms.
- Integration of the model into an MPC controller and tuning of its parameters.
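To make the second step concrete, the sketch below shows a generic linear MPC update in plain numpy. This is not the submitted controller (which uses `cvxpy`/`qpsolvers` and the identified SINDy model); it is an illustration that solves an unconstrained finite-horizon tracking problem in closed form for a hypothetical linear model `x[t+1] = A @ x + B @ u`, with invented example matrices.

```python
import numpy as np

def mpc_step(A, B, x0, x_ref, horizon=10, q=1.0, r=0.01):
    """One unconstrained MPC step for a linear model x[t+1] = A @ x + B @ u.

    Builds condensed prediction matrices X = F @ x0 + G @ U and minimizes
    q * ||X - Xref||^2 + r * ||U||^2 in closed form.
    """
    n, m = B.shape
    F = np.zeros((horizon * n, n))
    G = np.zeros((horizon * n, horizon * m))
    Ak = np.eye(n)
    for i in range(horizon):
        Ak = A @ Ak                                  # A^(i+1)
        F[i*n:(i+1)*n, :] = Ak
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Xref = np.tile(x_ref, horizon)
    H = q * (G.T @ G) + r * np.eye(horizon * m)      # positive definite for r > 0
    g = q * G.T @ (F @ x0 - Xref)
    U = np.linalg.solve(H, -g)
    return U[:m]                                     # apply only the first control

# usage: drive a double integrator (position, velocity) toward position 1.0
A = np.array([[1.0, 0.1], [0.0, 1.0]])               # dt = 0.1
B = np.array([[0.005], [0.1]])
x = np.array([0.0, 0.0])
for _ in range(150):
    u = mpc_step(A, B, x, np.array([1.0, 0.0]), horizon=20)
    x = A @ x + B @ u                                # receding-horizon loop
print(x)  # close to [1, 0]
```

The real controller additionally handles input constraints, which is why a QP solver is needed there instead of this closed-form least-squares solve.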
I used SINDy's SR3 method for sparse identification. Model identification requires collecting traces of simulation runs (rollouts) of the system.
- The simulation runs use the challenge's physics model simulator. The script `tinyphysics_opt.py` was adapted to include an extra option that stores rollouts as CSV files. Note that the stored rollouts also include the control signals of whichever controller you selected. In practice the choice of controller is not critical; empirically, mixing and batching rollouts from several controllers gave the best model results with the SINDy algorithm.
An example command to collect rollouts using the PID controller:

```shell
python tinyphysics_opt.py --model_path ../models/tinyphysics.onnx --data_path ../data --num_segs 100 --controller pid --collect
```
The rollouts are saved in a new folder: `rollout_result/`.
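Once collected, the rollout CSVs can be loaded and stacked into state and input arrays for model fitting. The loader below is a hypothetical sketch: the column names mirror the simulator signals described later in this README and may not match the actual file headers, so inspect a collected CSV first.

```python
import io
import pandas as pd

# Stand-in for a collected rollout file; in practice, point pd.read_csv at a
# file under rollout_result/ instead. Column names here are assumptions.
sample = io.StringIO(
    "v_ego,a_ego,road_lataccel,current_lataccel,steer_action\n"
    "30.0,0.1,0.02,0.00,0.10\n"
    "30.1,0.1,0.02,0.05,0.12\n"
    "30.2,0.1,0.02,0.09,0.11\n"
)
df = pd.read_csv(sample)

# SINDy-style split: x is the modeled state, u are exogenous/control inputs
x = df[["current_lataccel"]].to_numpy()
u = df[["v_ego", "a_ego", "road_lataccel", "steer_action"]].to_numpy()
print(x.shape, u.shape)  # one state column, four input columns per time step
```

Multiple rollouts can then be passed to the identification step as a list of such arrays, which matches the mixing-and-batching approach described above.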
Please note that the `pySindy` package needs to be installed; see the instructions at https://pypi.org/project/pysindy/.
The script `steer_modelSR3_sindy.py` performs the pySindy SR3 sparse identification using the rollout runs as inputs. Note that you need to update the script's definitions of the rollout sets to the ones you wish to use (see the `train_data` and `test_data` variables in the code).
Run the script with the following command:

```shell
python steer_modelSR3_sindy.py
```
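For intuition, the sparse-regression idea that SR3 refines can be sketched in a few lines of plain numpy as sequential thresholded least squares over a candidate feature library. This is a toy illustration, not the pysindy call used by the actual script, and the system `dx/dt = -0.5*x` is invented for the example.

```python
import numpy as np

# Toy trace of dx/dt = -0.5 * x, i.e. x(t) = exp(-0.5 t)
t = np.linspace(0, 10, 500)
x = np.exp(-0.5 * t)
dx = np.gradient(x, t)                       # numerical derivative of the trace

# Candidate feature library: [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequential thresholded least squares: fit, prune weak terms, refit
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1                 # threshold enforces sparsity
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]

print(xi)  # only the x term survives, with a coefficient near -0.5
```

The payoff of this sparsity is a short symbolic model, which is exactly what makes the identified dynamics cheap enough to embed inside the MPC optimization.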
Evaluation results at the time of submission (`test` is the submitted MPC controller):

| controller | lataccel_cost | jerk_cost | total_cost |
|---|---|---|---|
| baseline | 2.351 | 23.781 | 141.308 |
| test | 1.422 | 20.007 | 91.130 |
Machine learning models can drive cars, paint beautiful pictures and write passable rap. But they famously suck at doing low level controls. Your goal is to write a good controller. This repo contains a model that simulates the lateral movement of a car, given steering commands. The goal is to drive this "car" well for a given desired trajectory.
We'll be using a synthetic dataset based on the comma-steering-control dataset for this challenge. These are actual car and road states from openpilot users.
```shell
# install required packages
# recommended python==3.11
pip install -r requirements.txt

# test this works
python tinyphysics.py --model_path ./models/tinyphysics.onnx --data_path ./data/00000.csv --debug --controller pid
```
There are some other scripts to help you get aggregate metrics:
```shell
# batch Metrics of a controller on lots of routes
python tinyphysics.py --model_path ./models/tinyphysics.onnx --data_path ./data --num_segs 100 --controller pid

# generate a report comparing two controllers
python eval.py --model_path ./models/tinyphysics.onnx --data_path ./data --num_segs 100 --test_controller pid --baseline_controller zero
```
You can also use the notebook at experiment.ipynb for exploration.
This is a "simulated car" that has been trained to mimic a very simple physics model (bicycle model) based simulator, given realistic driving noise. It is an autoregressive model similar to ML Controls Sim in architecture. Its inputs are the car velocity (v_ego), forward acceleration (a_ego), lateral acceleration due to road roll (road_lataccel), current car lateral acceleration (current_lataccel), and a steer input (steer_action), then it predicts the resultant lateral acceleration of the car.
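The autoregressive loop described above can be sketched as follows. Here `predict_lataccel` is a toy stand-in for the ONNX model, with invented coefficients; the point is only the feedback structure, where each predicted lateral acceleration is fed back in on the next step.

```python
# Stand-in for the ONNX model: takes the driving state, the current lateral
# acceleration, and the steer action, and returns the next lateral acceleration.
def predict_lataccel(v_ego, a_ego, road_lataccel, current_lataccel, steer_action):
    return 0.9 * current_lataccel + 0.5 * steer_action + road_lataccel  # toy dynamics

def rollout(states, controller, lataccel0=0.0):
    lataccel = lataccel0
    history = []
    for (v_ego, a_ego, road_lataccel, target) in states:
        steer = controller(target, lataccel)       # controller closes the loop
        lataccel = predict_lataccel(v_ego, a_ego, road_lataccel, lataccel, steer)
        history.append(lataccel)                   # fed back on the next step
    return history

# usage: a proportional controller tracking a constant target of 1.0
traj = rollout([(30.0, 0.0, 0.0, 1.0)] * 50,
               controller=lambda target, cur: 0.5 * (target - cur))
```

Because the model's own predictions are recycled as inputs, small prediction errors compound over a rollout, which is why the controller is evaluated in-loop rather than on one-step predictions.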
Your controller should be implemented as a new controller class. It can be passed as an arg to run in-loop in the simulator, which autoregressively predicts the car's response.
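A minimal controller might look like the sketch below. The `update()` signature is an assumption modeled on the challenge's example controllers; check the repo's controller code for the exact interface before relying on it.

```python
# Minimal PID-style controller sketch; gains are illustrative, not tuned.
class SimplePID:
    def __init__(self, kp=0.05, ki=0.1, kd=-0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.error_integral = 0.0
        self.prev_error = 0.0

    def update(self, target_lataccel, current_lataccel, state, future_plan):
        """Return a steer action given the target and current lateral accel.

        The state/future_plan arguments mirror the simulator's per-step
        information (assumed interface) and are unused in this sketch.
        """
        error = target_lataccel - current_lataccel
        self.error_integral += error
        error_diff = error - self.prev_error
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.error_integral
                + self.kd * error_diff)
```

The MPC controller described earlier replaces this per-step feedback law with an optimization over a prediction horizon, but plugs into the simulator the same way.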
Each rollout will result in 2 costs:

- `lataccel_cost`: $\dfrac{\Sigma(actual\_lat\_accel - target\_lat\_accel)^2}{steps} * 100$
- `jerk_cost`: $\dfrac{\Sigma((actual\_lat\_accel_t - actual\_lat\_accel_{t-1}) / \Delta t)^2}{steps - 1} * 100$

It is important to minimize both costs. The combined metric is $total\_cost = (lataccel\_cost * 50) + jerk\_cost$.
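The two cost formulas can be transcribed directly into numpy. The combination `total_cost = lataccel_cost * 50 + jerk_cost` is consistent with the results table earlier in this README; the control period of 0.1 s is an assumption about the simulator's step rate.

```python
import numpy as np

def compute_costs(actual, target, del_t=0.1):
    """Transcription of the rollout cost formulas above.

    actual, target: sequences of lateral acceleration per step.
    del_t: control period in seconds (assumed 0.1 here).
    """
    actual = np.asarray(actual, dtype=float)
    target = np.asarray(target, dtype=float)
    lataccel_cost = np.mean((actual - target) ** 2) * 100       # tracking error
    jerk = np.diff(actual) / del_t                              # per-step jerk
    jerk_cost = np.mean(jerk ** 2) * 100                        # smoothness
    total_cost = lataccel_cost * 50 + jerk_cost
    return lataccel_cost, jerk_cost, total_cost
```

The `* 50` weighting means tracking error dominates the total, which is why a controller that is both accurate and smooth (like the MPC here) scores well.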
Run the following command, then submit `report.html` and your code to this form.

```shell
python eval.py --model_path ./models/tinyphysics.onnx --data_path ./data --num_segs 5000 --test_controller <insert your controller name> --baseline_controller pid
```
- With this commit we made the simulator more robust to outlier actions and changed the cost landscape to incentivize more aggressive and interesting solutions.
- With this commit we fixed a bug that caused the simulator model to be initialized wrong.
Like this sort of stuff? You might want to work at comma! comma.ai/jobs