
Optimization based Solutions for Control and
State Estimation in Dynamical Systems
(Implementation to Mobile Robots)

A Workshop

Presenter: Mohamed W. Mehrez, PhD
Postdoctoral Fellow
Mechanical and Mechatronics Engineering,
University of Waterloo, ON, Canada

January, 2019
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 1
Agenda

Part 0: Background and Motivation Examples
▪ Background
▪ Motivation examples

Part I: Model Predictive Control (MPC)
▪ What is MPC?
▪ Mathematical formulation of MPC
▪ About MPC and related issues

MPC implementation to mobile robot control
▪ Considered system and control problem
▪ OCP and NLP
▪ Single-shooting implementation using CasADi
▪ Multiple-shooting implementation using CasADi
▪ Adding obstacle (path) constraints + implementation

Part II: MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots

Conclusions
▪ Concluding remarks about MPC and MHE
▪ What is next?
• Background¹

Why Optimization?
Optimization algorithms are used in many applications from diverse areas.

• Business: allocation of resources in logistics, investment, etc.
• Science: estimation and fitting of models to measurement data; design of experiments.
• Engineering: design and operation of technical systems, e.g. bridges, cars, aircraft, digital devices, etc.

What Characterizes an Optimization Problem?
An optimization problem consists of the following three ingredients:

• an objective function, ϕ(w), that shall be minimized or maximized,
• decision variables, w, that can be chosen, and
• constraints that shall be respected, e.g. of the form g₁(w) ≤ 0 (inequality constraints) or g₂(w) = 0 (equality constraints).

¹Moritz Diehl, Lecture Notes on Numerical Optimization, 2016
• Background¹

Mathematical Formulation in Standard Form

Nonlinear Programming Problem (NLP): a standard problem formulation in numerical optimization.

    min_w  ϕ(w)           (objective function)
    s.t.   g₁(w) ≤ 0,     (inequality constraints)
           g₂(w) = 0.     (equality constraints)

ϕ(·), g₁(·), and g₂(·) are usually assumed to be differentiable.

Special cases of NLP include:

• Linear Programming (LP): ϕ(·), g₁(·), and g₂(·) are affine, i.e. these functions can be expressed as linear combinations of the elements of w.
• Quadratic Programming (QP): g₁(·) and g₂(·) are affine, but the objective ϕ(·) is a linear-quadratic function.
• …
• Background¹

Min or Max
For a given objective function, maximization can be treated as minimization of the negative objective:

    Minimization          Maximization
    min_w  ϕ(w)           max_w  ϕ(w)        ⟺    min_w  −ϕ(w)
    s.t.   g₁(w) ≤ 0,     s.t.   g₁(w) ≤ 0,       s.t.   g₁(w) ≤ 0,
           g₂(w) = 0.            g₂(w) = 0.              g₂(w) = 0.

Local vs. Global Minimum

Example:

    min_w  e^(0.2w) · sin(w)
    s.t.   w ≥ 0,
           w ≤ 4π.

- One global minimizer and three local minimizers.
- Global minimization is normally challenging and time consuming. Therefore, in this workshop we focus on the use of local optimization packages.
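The min–max equivalence above can be sanity-checked numerically. The sketch below (pure Python, a stand-in for the MATLAB/CasADi tooling used in this workshop) evaluates the example objective e^(0.2w)·sin(w) on a grid over the feasible set [0, 4π] and confirms that its maximum equals the negated minimum of the negated objective:

```python
import math

def phi(w):
    # Example objective from the slide: exp(0.2*w) * sin(w)
    return math.exp(0.2 * w) * math.sin(w)

# Coarse grid over the feasible set [0, 4*pi]
grid = [4 * math.pi * i / 100000 for i in range(100001)]

max_phi = max(phi(w) for w in grid)
min_neg_phi = min(-phi(w) for w in grid)

# max_w phi(w) == -min_w(-phi(w)) on the same feasible set
assert abs(max_phi + min_neg_phi) < 1e-12
```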
• Background¹

Solution of the Optimization Problem

    min_w  ϕ(w)
    s.t.   g₁(w) ≤ 0,
           g₂(w) = 0.

Normally, we are looking for the value of w that minimizes our objective:

    w* = arg min_w ϕ(w)

By direct substitution, we can get the corresponding value of the objective function:

    ϕ(w*) := ϕ(w)|_(w=w*) := min_w ϕ(w)
• Motivation Example 1

Find the local minimum of the following function:

    ϕ(w) = w² − 6w + 13

Writing this problem as an NLP, we have

    min_w  w² − 6w + 13

Note that this optimization problem is unconstrained, i.e. the optimization variable w can be freely chosen by the optimizer.

Analytically, we first find an expression for the gradient, i.e.

    dϕ(w)/dw = 2w − 6

Then, we find the point at which the gradient is zero, i.e. the solution of the minimization problem:

    w* = 3

The value of ϕ(w) at the solution (w* = 3) is equal to 4.

Now, we would like to find this solution numerically using an optimization package.
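Before turning to a full optimization package, the analytic answer above can be reproduced with a few lines of plain gradient descent. This pure-Python sketch is illustrative only (the fixed step size is an assumption, not something Ipopt uses), but it converges to the same w* = 3, ϕ(w*) = 4:

```python
# Minimize phi(w) = w^2 - 6*w + 13 by gradient descent (illustrative check;
# the workshop itself uses CasADi/Ipopt in MATLAB for this).

def phi(w):
    return w**2 - 6*w + 13

def dphi(w):
    return 2*w - 6  # gradient derived on the slide

w = -0.5     # same initialization as the CasADi example (args.x0 = -0.5)
step = 0.1   # fixed step size (assumed, for illustration)
for _ in range(1000):
    w -= step * dphi(w)

# w is now (numerically) the analytic minimizer w* = 3, with phi(w*) = 4
```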
CasADi¹  https://web.casadi.org/get/

Build efficient optimal control software, with minimal effort.

• Has a general scope of numerical optimization.
• In particular, it facilitates the solution of NLPs.
• Facilitates, not actually solves, the NLPs: solver "plugins" can be added post-installation.
• Free & open source (LGPL), also for commercial use.
• Project started in December 2009; now at version 3.4.5 (August 2018).
• Four standard problems can be handled by CasADi:
  • QPs (quadratic programs)
  • NLPs (nonlinear programs)
  • Root-finding problems
  • Initial-value problems in ODE/DAE

¹Joel Andersson, Introduction to CasADi, 2015
Solving using CasADi

% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

x = SX.sym('w');       % decision variable
obj = x^2 - 6*x + 13;  % objective
g = [];                % optimization constraints - empty (unconstrained)
P = [];                % optimization problem parameters - empty (no parameters used here)

OPT_variables = x;     % single decision variable
nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

The SX data type is used to represent matrices whose elements consist of symbolic expressions:

>> x
x =
w

>> obj
obj =
((sq(w)-(6*w))+13)

>> nlp_prob
struct with fields:
    f: [1×1 casadi.SX]
    x: [1×1 casadi.SX]
    g: []
    p: []
Ipopt (Interior Point Optimizer)* is an open-source software package for large-scale nonlinear optimization. It can be used to solve general nonlinear programming problems (NLPs).
* Check the IPOPT manual for more details about the options you can set.

opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level = 0;        % 0,3
opts.print_time = 0;               % 0,1
opts.ipopt.acceptable_tol = 1e-8;  % optimality convergence tolerance
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_prob, opts);

args = struct;
args.lbx = -inf;  % unconstrained optimization
args.ubx = inf;   % unconstrained optimization
args.lbg = -inf;  % unconstrained optimization
args.ubg = inf;   % unconstrained optimization

args.p = [];      % there are no parameters in this optimization problem
args.x0 = -0.5;   % initialization of the optimization variable

sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx, ...
             'lbg', args.lbg, 'ubg', args.ubg, 'p', args.p);
x_sol = full(sol.x)      % get the solution
min_value = full(sol.f)  % get the objective value

>>
x_sol =
     3
min_value =
     4

Remarks:
- Single optimization variable
- Unconstrained optimization
- Local minimum = global minimum
• Motivation Example 2

Solve the following optimization problem:

    min_w  e^(0.2w) · sin(w)
    s.t.   w ≥ 0,
           w ≤ 4π.
Solving using CasADi

% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

x = SX.sym('w');           % decision variable
obj = exp(0.2*x).*sin(x);  % objective
g = [];                    % optimization constraints - empty (unconstrained)
P = [];                    % optimization problem parameters - empty (no parameters used here)

OPT_variables = x;         % single decision variable
nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

opts = struct;
opts.ipopt.max_iter = 1000;
opts.ipopt.print_level = 3;        % 0,3
opts.print_time = 0;               % 0,1
opts.ipopt.acceptable_tol = 1e-8;  % optimality convergence tolerance
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_prob, opts);

% The variable bounds implement the constraints w >= 0 and w <= 4*pi
args = struct;
args.lbx = 0;     % constrained optimization
args.ubx = 4*pi;  % constrained optimization
args.lbg = -inf;  % unconstrained
args.ubg = inf;   % unconstrained
Solving using CasADi

args.p = [];  % there are no parameters in this optimization problem
args.x0 = 1;  % initialization of the optimization variable

sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx, ...
             'lbg', args.lbg, 'ubg', args.ubg, 'p', args.p);
x_sol = full(sol.x)      % get the solution
min_value = full(sol.f)  % get the objective value

Different initializations converge to different local minimizers:

>> args.x0 =
     1
x_sol =
     0

>> args.x0 =
     4
x_sol =
     4.9098

>> args.x0 =
     10
x_sol =
     11.1930
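The same dependence on the initialization can be reproduced without Ipopt. The sketch below uses projected gradient descent in pure Python (an illustrative stand-in for the interior-point solver; the step size and iteration count are assumptions) on e^(0.2w)·sin(w) over [0, 4π], and recovers the three local minimizers reported above:

```python
import math

def dphi(w):
    # d/dw [ exp(0.2*w) * sin(w) ] = exp(0.2*w) * (0.2*sin(w) + cos(w))
    return math.exp(0.2 * w) * (0.2 * math.sin(w) + math.cos(w))

def local_min(w0, step=0.05, iters=5000):
    """Projected gradient descent on [0, 4*pi] (illustrative, not Ipopt)."""
    w = w0
    for _ in range(iters):
        w = min(max(w - step * dphi(w), 0.0), 4 * math.pi)
    return w

print(local_min(1))   # boundary minimizer at w = 0
print(local_min(4))   # interior minimizer near 4.9098
print(local_min(10))  # interior minimizer near 11.1930
```

The interior minimizers satisfy tan(w) = −5, i.e. w = 2π − arctan(5) ≈ 4.9098 and w = 4π − arctan(5) ≈ 11.1930, matching the solver output on the slide.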
• Motivation Example 3
For the following set of data points, fit a straight line of the form
y = m x + c

• Motivation Example 3
For the following set of data points, fit a straight line of the form
y = m·x + c
• Here, we minimize the sum of the squared errors between the line and the data points (least squares).
• Two optimization variables (m and c).
• The objective function is simply the sum of the squared errors:

  φ(m, c) = Σ_{i=1}^{ndata} ( y(i) − (m·x(i) + c) )²

• Therefore, the optimization problem can be written as:

  min_{m,c} Σ_{i=1}^{ndata} ( y(i) − (m·x(i) + c) )²

• Again, this can be treated as an unconstrained optimization problem.

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 16
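As a sanity check on the slide's data, the same least-squares fit can also be computed in closed form from the normal equations; a minimal pure-Python sketch (no NLP solver involved):

```python
# Closed-form least-squares fit of y = m*x + c to the slide's data set,
# via the normal equations (no solver needed for this special case).
x = [0, 45, 90, 135, 180]
y = [667, 661, 757, 871, 1210]

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

# Stationarity conditions of phi(m, c) = sum (y_i - (m*x_i + c))^2
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = (sy - m * sx) / n

# Minimum value of the objective (sum of squared errors)
sse = sum((yi - (m * xi + c)) ** 2 for xi, yi in zip(x, y))
print(round(m, 4), round(c, 4), round(sse, 1))  # -> 2.88 574.0 38527.2
```

These values match the solver results reported later in the slides (m_sol = 2.88, c_sol = 574, min_value ≈ 38527).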
Implementation:
clear all
close all
clc

x = [0,45,90,135,180];
y = [667,661,757,871,1210];

line_width = 1.5; fontsize_labels = 15;


set(0,'DefaultAxesFontName', 'Times New Roman')
set(0,'DefaultAxesFontSize', fontsize_labels)

figure(1)
plot(x,y,'*b', 'linewidth',line_width); hold on
xlabel('x')
ylabel('y')
grid on

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 17
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

m = SX.sym('m'); % Decision variable (slope)
c = SX.sym('c'); % Decision variable (y-intercept)

% Objective: min over (m,c) of sum_{i=1}^{ndata} ( y(i) - (m*x(i) + c) )^2
obj = 0;
for i = 1:length(x)
    obj = obj + (y(i) - (m*x(i)+c))^2;
end

g = []; % Optimization constraints – empty (unconstrained)
P = []; % Optimization problem parameters – empty (no parameters used here)

OPT_variables = [m,c]; % Two decision variables

nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

>> obj =
(((((sq((667-c))+sq((661-((45*m)+c))))+sq((757-((90*m)+c))))+sq((871-((135*m)+c))))+sq((1210-((180*m)+c)))))

opts = struct;
opts.ipopt.max_iter = 1000;
opts.ipopt.print_level = 0; %0,3
opts.print_time = 0; %0,1
opts.ipopt.acceptable_tol = 1e-8; % optimality convergence tolerance
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_prob, opts);

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 18
args = struct;
args.lbx = -inf; % unconstrained optimization
args.ubx = inf; % unconstrained optimization
args.lbg = -inf; % unconstrained optimization
args.ubg = inf; % unconstrained optimization

args.p = []; % There are no parameters in this optimization problem
args.x0 = [0.5,1]; % initialization of the optimization variables

sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
    'lbg', args.lbg, 'ubg', args.ubg, 'p', args.p);
x_sol = full(sol.x); % Get the solution
min_value = full(sol.f) % Get the value function

x_line = [0:1:180];
m_sol = x_sol(1)
c_sol = x_sol(2)
y_line = m_sol*x_line + c_sol;
figure(1)
plot(x_line,y_line,'-k', 'linewidth',line_width); hold on
legend('Data points','y = 2.88 \cdot x + 574')

>> m_sol =
    2.8800
c_sol =
  574.0000
min_value =
       38527

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 19
Result visualization:

[Figure: the data points together with the fitted line y = 2.88·x + 574]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 20
Objective visualization:

Recall:
m = SX.sym('m'); % Decision variable (slope)
c = SX.sym('c'); % Decision variable (y-intercept)
obj = 0;
for i = 1:length(x)
    obj = obj + (y(i) - (m*x(i)+c))^2;
end

>> obj =
(((((sq((667-c))+sq((661-((45*m)+c))))+sq((757-((90*m)+c))))+sq((871-((135*m)+c))))+sq((1210-((180*m)+c)))))

% Function object in CasADi
obj_fun = Function('obj_fun',{m,c},{obj});

m_range = [-1:0.5:6];
c_range = [400:50:800];
obj_plot_data = [];
[mm,cc] = meshgrid(m_range,c_range);
for n = 1:1:size(mm,1)
    for k = 1:1:size(mm,2)
        obj_plot_data(n,k) = full(obj_fun(mm(n,k),cc(n,k)));
    end
end

figure
surf(mm,cc,obj_plot_data); hold on
xlabel('(m)')
ylabel('(c)')
zlabel('(\phi)')

box on
ax = gca;
ax.BoxStyle = 'full';

>> min(min(obj_plot_data))
ans =
       39690      % minimum value on the plotting grid

min_value =
       38527      % minimum value from optimization

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 21
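A side note on the two minima reported above: the grid minimum (39690) is larger than the solver's minimum (≈38527) because the plotting grid samples m in steps of 0.5 and c in steps of 50, so the true minimizer (m = 2.88, c = 574) falls between grid nodes. A pure-Python re-evaluation of the same objective on the same grid:

```python
# Why the grid minimum (39690) exceeds the optimizer's minimum (~38527):
# the surf grid is coarse, so the true minimizer lies between its nodes.
x = [0, 45, 90, 135, 180]
y = [667, 661, 757, 871, 1210]

def phi(m, c):
    # same objective as the CasADi 'obj' expression on the slide
    return sum((yi - (m * xi + c)) ** 2 for xi, yi in zip(x, y))

m_range = [-1 + 0.5 * i for i in range(15)]   # matches -1:0.5:6
c_range = [400 + 50 * j for j in range(9)]    # matches 400:50:800
grid_min = min(phi(m, c) for m in m_range for c in c_range)

print(grid_min, round(phi(2.88, 574), 1))  # -> 39690.0 38527.2
```

The grid minimum is attained at the nearest grid node (m = 3.0, c = 550), not at the true optimum.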
Agenda
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples.
Part I
Model Predictive Control (MPC)
▪ What is (MPC)?
▪ Mathematical Formulation of MPC.
▪ About MPC and Related Issues.
MPC Implementation to Mobile Robots control
▪ Considered System and Control Problem.
▪ OCP and NLP
▪ Single Shooting Implementation using CaSAdi.
▪ Multiple Shooting Implementation using CaSAdi.
▪ Adding obstacle (path constraints) + implementation
Part II
MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots
Conclusions
▪ Concluding remarks about MPC and MHE.
▪ What is NEXT?
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 22
• Model Predictive Control (MPC) (aka Receding/Moving Horizon Control)

Single-input single-output simple example:
  x(k + 1) = f(x(k), u(k))

• At decision instant k, measure the state x(k).
• Based on x(k), compute the (optimal) sequence of controls over a prediction horizon N:
  u*(x(k)) := (u*(k), u*(k + 1), …, u*(k + N − 1))
• Apply the control u*(k) on the sampling period [k, k + 1].

[Figure: past/future timeline with reference xr, predicted state trajectory x(·), and control sequence u(·) over the horizon k, …, k + N − 1; the applied control u*(k) is highlighted]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 23
• Model Predictive Control (MPC) (aka Receding/Moving Horizon Control)

Single-input single-output simple example:
  x(k + 1) = f(x(k), u(k))

• At decision instant k, measure the state x(k).
• Based on x(k), compute the (optimal) sequence of controls over a prediction horizon N:
  u*(x(k)) := (u*(k), u*(k + 1), …, u*(k + N − 1))
• Apply the control u*(k) on the sampling period [k, k + 1].
• Repeat the same steps at the next decision instant.

MPC Strategy Summary1:
1. Prediction
2. Online optimization
3. Receding horizon implementation

[Figure: the same timeline shifted one sampling instant forward — the prediction horizon recedes]

1Mark Cannon (2016)
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 24
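The three-step strategy summarized above can be sketched end-to-end on a toy scalar system. In this Python illustration the brute-force enumeration stands in for the NLP solver used later in the workshop, and the horizon, weights, and input grid are all illustrative choices:

```python
# Toy receding-horizon loop for the scalar system x(k+1) = x(k) + u(k),
# regulating x to 0. Brute-force search over a small discrete input set
# replaces a real optimizer; N, the weights, and U are illustrative.
from itertools import product

U = [-1.0, -0.5, 0.0, 0.5, 1.0]  # admissible controls (input constraint set)
N = 3                            # prediction horizon

def best_sequence(x0):
    """Solve the toy OCP by enumerating all control sequences of length N."""
    def cost(seq):
        x, J = x0, 0.0
        for u in seq:
            x = x + u                  # model prediction
            J += x**2 + 0.1 * u**2     # quadratic stage cost (Q = 1, R = 0.1)
        return J
    return min(product(U, repeat=N), key=cost)

x = 2.0
for _ in range(5):                 # closed loop: re-solve at every instant
    u_star = best_sequence(x)      # 1. prediction + 2. online optimization
    x = x + u_star[0]              # 3. apply only the first control, recede
print(x)  # -> 0.0
```

Only the first element of each optimal sequence is ever applied; the remaining predicted controls are discarded and recomputed at the next instant.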
• MPC Mathematical Formulation

Running (stage) cost: characterizes the control objective
  ℓ(x, u) = ‖x_u − x^r‖²_Q + ‖u − u^r‖²_R

Cost function: evaluation of the running costs along the whole prediction horizon
  J_N(x, u) = Σ_{k=0}^{N−1} ℓ(x_u(k), u(k))

Optimal Control Problem (OCP): find a minimizing control sequence
  minimize_u  J_N(x_0, u) = Σ_{k=0}^{N−1} ℓ(x_u(k), u(k))
  subject to: x_u(k + 1) = f(x_u(k), u(k)),
              x_u(0) = x_0,
              u(k) ∈ U,  k ∈ [0, N − 1],
              x_u(k) ∈ X,  k ∈ [0, N]

Value function: the minimum of the cost function
  V_N(x) = min_u J_N(x_0, u)

[Figure: the past/future timeline from the previous slides, with reference xr, predicted states x(·), and controls u(·)]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 25
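The cost function above translates directly into code; a minimal pure-Python transcription with diagonal Q and R (the trajectory, references, and weights below are made-up illustration values, not from the slides):

```python
# Direct transcription of the quadratic stage cost and horizon cost J_N
# for diagonal weight matrices Q and R. All numbers are illustrative.
def stage_cost(x, u, x_ref, u_ref, Q, R):
    # l(x, u) = ||x - x_ref||_Q^2 + ||u - u_ref||_R^2 (diagonal Q, R)
    return (sum(q * (xi - ri) ** 2 for q, xi, ri in zip(Q, x, x_ref))
            + sum(r * (ui - ri) ** 2 for r, ui, ri in zip(R, u, u_ref)))

def J_N(xs, us, x_ref, u_ref, Q, R):
    # J_N = sum_{k=0}^{N-1} l(x_u(k), u(k))
    return sum(stage_cost(x, u, x_ref, u_ref, Q, R) for x, u in zip(xs, us))

Q, R = [1.0, 1.0], [0.1]
xs = [(2.0, 0.0), (1.0, 0.0)]   # predicted states over a horizon N = 2
us = [(-1.0,), (-1.0,)]         # candidate control sequence
print(round(J_N(xs, us, (0.0, 0.0), (0.0,), Q, R), 6))  # -> 5.2
```

In the CasADi implementations later in the workshop, the same sum is built symbolically rather than numerically.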
• About MPC1

• Can be generally applied to nonlinear MIMO systems.
• Natural consideration of both state and control constraints.
• Approximately optimal control.
• Used in industrial applications since the mid-1970s.
• Requires online optimization.

• Central issues related to MPC
• When does MPC stabilize the system?
• How good is the performance of the MPC feedback law?
• How long does the optimization horizon N need to be?
• How to implement it numerically? (The main scope of this TALK!)

1Lars Grüne, Nonlinear MPC, lecture notes (2013)
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 26
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples.
Part I
Model Predictive Control (MPC)
▪ What is (MPC)?
▪ Mathematical Formulation of MPC.
▪ About MPC and Related Issues.
MPC Implementation to Mobile Robots control
▪ Considered System and Control Problem.
▪ OCP and NLP
▪ Single Shooting Implementation using CaSAdi.
▪ Multiple Shooting Implementation using CaSAdi.
▪ Adding obstacle (path constraints) + implementation
Part II
MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots
Conclusions
▪ Concluding remarks about MPC and MHE.
▪ What is NEXT?
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 27
• Considered System and Control Problem (Differential drive robots)

System state vector in the inertial frame:
  x = [x  y  θ]ᵀ

• Posture rate as a function of the right and left wheel speeds:
  [ẋ; ẏ; θ̇] = (r/2) [ (ω_r + ω_l) cos θ ;  (ω_r + ω_l) sin θ ;  (ω_r − ω_l)/D ]

• Linear and angular velocities of the robot:
  [v; ω] = [ (r/2)(ω_r + ω_l) ;  (r/(2D))(ω_r − ω_l) ]

• Pose rate as a function of the robot's linear and angular velocities:
  [ẋ; ẏ; θ̇] = [v cos θ; v sin θ; ω] = [cos θ  0; sin θ  0; 0  1] [v; ω]

Fig. 1. (a) Differential drive robot kinematics. (b) Pioneer 3-AT research platform.

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 28
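The wheel-speed-to-body-velocity map and the posture-rate model above can be sketched as follows (pure Python; the wheel radius r and separation parameter D values are illustrative, and the r/(2D) factor for ω follows the slide's convention):

```python
# Wheel speeds -> body velocities (v, omega), then unicycle posture rates.
# r (wheel radius) and D (wheel separation parameter) are illustrative.
import math

def wheels_to_body(w_r, w_l, r, D):
    """(v, omega) from right/left wheel angular speeds, per the slide."""
    v = (r / 2.0) * (w_r + w_l)
    omega = (r / (2.0 * D)) * (w_r - w_l)
    return v, omega

def posture_rates(theta, v, omega):
    """(xdot, ydot, thetadot) = (v cos(theta), v sin(theta), omega)."""
    return v * math.cos(theta), v * math.sin(theta), omega

v, omega = wheels_to_body(2.0, 2.0, r=0.1, D=0.4)   # equal wheel speeds
print(v, omega)                      # -> 0.2 0.0 (pure translation)
print(posture_rates(0.0, v, omega))  # -> (0.2, 0.0, 0.0)
```

Equal wheel speeds give zero angular velocity, as the (ω_r − ω_l) term suggests.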
• Considered System and Control Problem (Differential drive robots)

From an input/output point of view, the robot as a system can be viewed as:

  [ẋ; ẏ; θ̇] = [v cos θ; v sin θ; ω] = [cos θ  0; sin θ  0; 0  1] [v; ω]

[Block diagram: control signals — linear velocity input v and angular velocity input ω — drive the robot, whose system states are (x, ẋ), (y, ẏ), (θ, θ̇)]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 29
• Considered System and Control Problem (Differential drive robots)

• Control objectives

Point stabilization:
  [x_r(t); y_r(t); θ_r(t)] = [x_d; y_d; θ_d],  ∀t
• Reference values of the state vector are constant over the control period.

Trajectory tracking:
  [x_r(t); y_r(t); θ_r(t)] = [x_d(t); y_d(t); θ_d(t)]
• Time-varying reference values of the state vector.

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 30
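The two reference types can be illustrated with a constant versus a time-varying reference generator (pure Python; the target pose and circle parameters are made-up examples):

```python
# Point stabilization uses a constant reference; trajectory tracking uses
# a time-varying one. Target pose and circle parameters are illustrative.
import math

def point_reference(t, xd=1.0, yd=1.0, thd=0.0):
    return (xd, yd, thd)             # same pose for every t

def circle_reference(t, radius=1.0, w=0.5):
    # reference moving on a circle, heading tangent to the path
    return (radius * math.cos(w * t),
            radius * math.sin(w * t),
            w * t + math.pi / 2.0)

print(point_reference(0.0) == point_reference(5.0))   # -> True
print(circle_reference(0.0) == circle_reference(1.0)) # -> False
```

The same MPC formulation handles both cases; only the reference fed to the stage cost changes.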
• Control objectives

[Block diagram: reference values and estimated robot states form an error signal; the MPC controller maps the error to control signals driving the robot; sensors feed a state-estimation block that closes the loop on the robot states]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 31
• Model Predictive Control for (Differential drive robots – point stabilization)

System model:
  ẋ(t) = f_c(x(t), u(t))
  [ẋ; ẏ; θ̇] = [v cos θ; v sin θ; ω]

Euler discretization with sampling time ΔT gives x(k + 1) = f(x(k), u(k)):
  [x(k + 1); y(k + 1); θ(k + 1)] = [x(k); y(k); θ(k)] + ΔT [v(k) cos θ(k); v(k) sin θ(k); ω(k)]

MPC controller
Running (stage) cost:
  ℓ(x, u) = ‖x_u − x_ref‖²_Q + ‖u − u_ref‖²_R

Optimal Control Problem (OCP):
  minimize over admissible u:  J_N(x_0, u) = Σ_{k=0}^{N−1} ℓ(x_u(k), u(k))
  subject to: x_u(k + 1) = f(x_u(k), u(k)),
              x_u(0) = x_0,
              u(k) ∈ U,  k ∈ [0, N − 1],
              x_u(k) ∈ X,  k ∈ [0, N]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 32
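The Euler discretization above, x(k + 1) = x(k) + ΔT·f_c(x(k), u(k)), in a minimal Python sketch (the sampling time and inputs are illustrative):

```python
# One Euler step of the unicycle model from the slide:
# x(k+1) = x(k) + dT * f_c(x(k), u(k)). dT and the inputs are illustrative.
import math

def euler_step(state, u, dT):
    x, y, theta = state
    v, omega = u
    return (x + dT * v * math.cos(theta),
            y + dT * v * math.sin(theta),
            theta + dT * omega)

state = (0.0, 0.0, 0.0)
for _ in range(10):                       # drive straight for 10 steps
    state = euler_step(state, (1.0, 0.0), dT=0.2)
print(round(state[0], 6), state[1], state[2])  # -> 2.0 0.0 0.0
```

With v = 1 m/s and zero angular velocity, the robot covers 2 m along the x-axis in 10 × 0.2 s, as expected.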
• OCP and NLP
Optimal control problem (OCP): e.g. the NMPC online optimization problem

min_u  J_N(x_0, u) = Σ_{k=0}^{N−1} ℓ(x_u(k), u(k))
s.t.:  x_u(k+1) = f(x_u(k), u(k)),
       x_u(0) = x_0,
       u(k) ∈ U  ∀ k ∈ [0, N−1],
       x_u(k) ∈ X  ∀ k ∈ [0, N].

Nonlinear Programming Problem (NLP): a standard problem formulation in numerical optimization having the general form

min_w  Φ(w)          (objective function)
s.t.   g₁(w) ≤ 0,    (inequality constraints)
       g₂(w) = 0.    (equality constraints)

Many NLP optimization algorithms (packages) exist, e.g. Ipopt and fmincon.

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 33
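For this OCP, one common transcription (single shooting, where the dynamics recursion is substituted into the cost) specializes the general NLP pieces as follows; this is a sketch of one choice, not the only one:

```latex
\begin{aligned}
w &= \bigl(u(0),\,\dots,\,u(N-1)\bigr), \\
\Phi(w) &= \sum_{k=0}^{N-1} \ell\bigl(x_u(k),\,u(k)\bigr),
\quad x_u(k+1) = f\bigl(x_u(k),\,u(k)\bigr),\ x_u(0) = x_0, \\
g_1(w) &\le 0 \;\Longleftrightarrow\; u(k) \in \mathbb{U},\ x_u(k) \in \mathbb{X}.
\end{aligned}
```

Here g₂ is empty because the dynamics were eliminated by substitution; under multiple shooting the state values join w as decision variables and the dynamics reappear as equality constraints g₂(w) = 0.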
• OCP and NLP¹

Shooting methods transcribe the OCP into an NLP:
1- Single shooting
2- Multiple shooting
3- …

OCP                                                      NLP
min_u J_N(x_0, u) = Σ_{k=0}^{N−1} ℓ(x_u(k), u(k))    ⇒    min_w Φ(w)
s.t.: x_u(k+1) = f(x_u(k), u(k)),                         s.t. g₁(w) ≤ 0,
      x_u(0) = x_0,                                            g₂(w) = 0.
      u(k) ∈ U  ∀ k ∈ [0, N−1],
      x_u(k) ∈ X  ∀ k ∈ [0, N]

¹Joel Andersson (2015)

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 34
CasADi¹  https://web.casadi.org/get/

Build efficient optimal control software, with minimal effort.

• Has the general scope of numerical optimization.
• In particular, it facilitates the solution of NLPs
  • Facilitates, i.e. it interfaces to solvers rather than solving the NLPs itself
  • Solver "plugins" can be added post-installation.
• Free & open-source (LGPL), also for commercial use.
• Project started in December 2009, now at version 3.4.5 (August, 2018)
• 4 standard problems can be handled by CasADi:
  • QPs (quadratic programs)
  • NLPs (nonlinear programs)
  • Root-finding problems
  • Initial-value problems in ODE/DAE

¹Joel Andersson, introduction to CasADi, 2015

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 35
• Single Shooting Implementation using CasADi
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

T = 0.2; % sampling time [s]
N = 3; % prediction horizon
rob_diam = 0.3;

v_max = 0.6; v_min = -v_max;
omega_max = pi/4; omega_min = -omega_max;

% The SX data type represents matrices whose elements are symbolic expressions
x = SX.sym('x'); y = SX.sym('y'); theta = SX.sym('theta');
states = [x;y;theta]; n_states = length(states);

v = SX.sym('v'); omega = SX.sym('omega');
controls = [v;omega]; n_controls = length(controls);
rhs = [v*cos(theta);v*sin(theta);omega]; % system r.h.s.: [x_dot; y_dot; theta_dot]

f = Function('f',{states,controls},{rhs}); % nonlinear mapping function f(x,u)
% >> f.print_dimensions
%    Number of inputs: 2    Input 0 ("i0"): 3x1    Input 1 ("i1"): 2x1
%    Number of outputs: 1   Output 0 ("o0"): 3x1

U = SX.sym('U',n_controls,N); % decision variables (controls), one column [v(k); omega(k)] per step
P = SX.sym('P',n_states + n_states);
% parameters (which include the initial and the reference state of the robot)

X = SX.sym('X',n_states,(N+1));
% a matrix that represents the states [x(k); y(k); theta(k)] over the optimization horizon

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 36
% compute the predicted states symbolically
X(:,1) = P(1:3); % initial state
for k = 1:N
    st = X(:,k); con = U(:,k);
    f_value = f(st,con);
    st_next = st + (T*f_value); % Euler step
    X(:,k+1) = st_next;
end
% single shooting: each column of X is now an expression in U and P only, i.e.
% X = [x_0, f(x_0,u_0), ..., f(f(x_{N-2},u_{N-2}),u_{N-1})]
% this function returns the optimal state trajectory given the optimal solution
ff = Function('ff',{U,P},{X});

obj = 0; % objective function
g = []; % constraints vector

Q = zeros(3,3); Q(1,1) = 1; Q(2,2) = 5; Q(3,3) = 0.1; % weighting matrix (states)
R = zeros(2,2); R(1,1) = 0.5; R(2,2) = 0.05; % weighting matrix (controls)
% compute the objective J_N = sum_k (x - x_ref)'Q(x - x_ref) + u'Ru
for k = 1:N
    st = X(:,k); con = U(:,k);
    obj = obj + (st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % accumulate the stage cost
end

% compute constraints: box constraints on x and y due to the map margins
for k = 1:N+1
    g = [g ; X(1,k)]; % state x
    g = [g ; X(2,k)]; % state y
end

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 37
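As a numeric cross-check of the quadratic stage cost above, a small Python sketch with the same diagonal Q and R (the state, reference, and input values are illustrative):

```python
# l(x, u) = (x - x_ref)' Q (x - x_ref) + u' R u with diagonal Q and R
Q = [1.0, 5.0, 0.1]   # diag(Q): weights on the x, y, theta errors
R = [0.5, 0.05]       # diag(R): weights on v, omega

def stage_cost(x, x_ref, u):
    err = [xi - ri for xi, ri in zip(x, x_ref)]
    return (sum(q * e * e for q, e in zip(Q, err))
            + sum(r * ui * ui for r, ui in zip(R, u)))

# robot at the origin, reference at (1.5, 1.5, 0), input (v, omega) = (0.6, 0)
c = stage_cost([0.0, 0.0, 0.0], [1.5, 1.5, 0.0], [0.6, 0.0])
# 1*1.5^2 + 5*1.5^2 + 0.5*0.6^2, i.e. approximately 13.68
```

The MATLAB loop accumulates exactly this quantity over the N predicted states, only symbolically rather than numerically.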
% make the decision variables one column vector
OPT_variables = reshape(U,2*N,1); % column-major: [v_0; omega_0; v_1; omega_1; ...]
nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);
% f: scalar objective, x: 2N x 1 decision vector,
% g: 2(N+1) x 1 constraint expressions, p: 6 x 1 parameter vector

opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level = 0; % 0 (silent) to 3 (verbose)
opts.print_time = 0;
opts.ipopt.acceptable_tol = 1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_prob, opts);

args = struct;
% inequality constraints (state constraints): the map is the square x, y in [-2, 2]
args.lbg = -2; % lower bound of the states x and y
args.ubg = 2;  % upper bound of the states x and y

% input constraints
args.lbx(1:2:2*N-1,1) = v_min;  args.lbx(2:2:2*N,1) = omega_min;
args.ubx(1:2:2*N-1,1) = v_max;  args.ubx(2:2:2*N,1) = omega_max;

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 38
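MATLAB's reshape stacks elements column-wise, which is what makes OPT_variables interleave as [v_0; omega_0; v_1; omega_1; ...]. A Python/NumPy sketch of the same layout (the integers are just stand-ins for the symbols U_0..U_5):

```python
import numpy as np

# 2 x N matrix, one column [v_k; omega_k] per horizon step, N = 3
U = np.array([[0, 2, 4],
              [1, 3, 5]])

# MATLAB's reshape is column-major; NumPy needs order='F' to match it
w = U.reshape(2 * 3, 1, order='F')
# w stacks the columns: 0, 1, 2, 3, 4, 5
```

The lbx/ubx index patterns 1:2:2N-1 and 2:2:2N pick out exactly the v and omega entries of this interleaved vector.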
% THE SIMULATION LOOP SHOULD START FROM HERE
%-------------------------------------------
t0 = 0;
x0 = [0 ; 0 ; 0.0]; % initial condition.
xs = [1.5 ; 1.5 ; 0.0]; % Reference posture.
xx(:,1) = x0; % xx contains the history of states
t(1) = t0;
u0 = zeros(N,2); % two control inputs
sim_tim = 20; % Maximum simulation time
% Start MPC
mpciter = 0;
xx1 = [];
u_cl=[];

while(norm((x0-xs),2) > 1e-2 && mpciter < sim_tim / T)
args.p = [x0;xs]; % set the values of the parameters vector
args.x0 = reshape(u0',2*N,1); % initial value of the optimization variables
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
u = reshape(full(sol.x)',2,N)';
ff_value = ff(u',args.p); % compute OPTIMAL solution TRAJECTORY
xx1(:,1:3,mpciter+1)= full(ff_value)';
u_cl= [u_cl ; u(1,:)];
t(mpciter+1) = t0;
[t0, x0, u0] = shift(T, t0, x0, u,f);
xx(:,mpciter+2) = x0;
mpciter = mpciter + 1;
end;
Draw_MPC_point_stabilization_v1 (t,xx,xx1,u_cl,xs,N,rob_diam)
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 39
MATLAB DEMO with different references and prediction horizons

xs = [1.5 ; 1.5 ; 0]; % Reference posture.
T = 0.2; % [sec]
N = 10; % prediction horizon

xs = [1.5 ; 1.5 ; pi]; % Reference posture.
T = 0.2; % [sec]
N = 10; % prediction horizon

xs = [1.5 ; 1.5 ; 0]; % Reference posture.
T = 0.2; % [sec]
N = 25; % prediction horizon

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 40
• We did single shooting to transform the OCP into an NLP¹

OCP:

$$\min_{\mathbf{u}} \; J_N(x_0, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(x_u(k), u(k))$$

$$\text{s.t.:} \quad x_u(k+1) = f(x_u(k), u(k)), \quad x_u(0) = x_0,$$
$$u(k) \in U \;\; \forall k \in [0, N-1], \qquad x_u(k) \in X \;\; \forall k \in [0, N]$$

NLP: the problem decision variables are the controls only,

$$w = [\, u_0 \;\; \cdots \;\; u_{N-1} \,]$$

Express $x_u(\cdot)$ as a function of $w$, $x_0$, and $t_k$ by integrating the model:

$$x_u(\cdot) = F(w, x_0, t_k), \qquad F(w, x_0, t_0) = x_0$$

Then solve the NLP, which is discretized only in the control $u$:

$$\min_{w} \; \Phi\big(F(w, x_0, t_k), w\big) \quad \text{s.t.} \quad g_1\big(F(w, x_0, t_k), w\big) \le 0$$

where $g_1$ collects the inequality constraints (e.g. map margins and control limits); equality constraints are not used here.

• MAIN drawback

Nonlinearity propagation: the integrator function tends to become highly nonlinear for large $N$, since

$$X_T = \begin{bmatrix} x_0 \\ f(x_0, u_0) \\ \vdots \\ f(x_{N-1}, u_{N-1}) = f\big(f(x_{N-2}, u_{N-2}), u_{N-1}\big) \end{bmatrix}$$

Hence single shooting is not a suitable method for nonlinear and/or unstable systems when optimizing over a long prediction horizon.

¹Joel Andersson, Introduction to CasADi, 2015
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 41
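The nonlinearity propagation described above can be made concrete on a toy scalar system: under single shooting, the terminal state is the N-fold composition of the model, so the sensitivity to the initial state grows along the horizon. A small illustrative Python sketch (the scalar model `f` here is invented for demonstration, not the robot model):

```python
def f(x, u):
    # toy scalar model, mildly nonlinear in x
    return x + 0.2 * (x * x + u)

def F(w, x0):
    # single shooting: the whole state trajectory is one explicit
    # function of the control sequence w (repeated composition of f)
    traj = [x0]
    for u in w:
        traj.append(f(traj[-1], u))
    return traj

w = [0.1] * 5
traj = F(w, 0.5)

# finite-difference sensitivity of the terminal state w.r.t. x0:
# it grows with the horizon length, which is what makes long-horizon
# single shooting hard for the NLP solver
eps = 1e-6
sens = (F(w, 0.5 + eps)[-1] - F(w, 0.5)[-1]) / eps
print(len(traj), sens > 1.0)
```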
• Multiple shooting to transform the OCP into an NLP¹

Key idea: break down the system integration into short time intervals, i.e. use the system model as a state constraint at each optimization step.

OCP:

$$\min_{\mathbf{u}} \; J_N(x_0, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(x_u(k), u(k))$$

$$\text{s.t.:} \quad x_u(k+1) = f(x_u(k), u(k)), \quad x_u(0) = x_0,$$
$$u(k) \in U \;\; \forall k \in [0, N-1], \qquad x_u(k) \in X \;\; \forall k \in [0, N]$$

• The states x become decision variables in the optimization problem as well as the controls u.
• Apply additional path constraints at each optimization step (shooting step), i.e.

$$x(k+1) - f(x(k), u(k)) = 0$$

NLP: the problem decision variables are

$$w = [\, u_0 \;\; \cdots \;\; u_{N-1}, \; x_0 \;\; \cdots \;\; x_N \,]$$

$$\min_{w} \; \Phi(w)$$

$$\text{subject to:} \quad g_1(w) = \begin{bmatrix} g_1(x_0, u_0) \\ \vdots \\ g_1(x_{N-1}, u_{N-1}) \\ g_1(x_N) \end{bmatrix} \le 0, \qquad g_2(w) = \begin{bmatrix} x_0 - \bar{x}_0 \\ f(x_0, u_0) - x_1 \\ \vdots \\ f(x_{N-1}, u_{N-1}) - x_N \end{bmatrix} = 0$$

where $g_1$ collects the inequality constraints (e.g. map margins and control limits), $g_2$ the equality constraints (system dynamics), and $\bar{x}_0$ is the measured initial state.

¹Joel Andersson, Introduction to CasADi, 2015
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 42
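The equality constraints g2(w) of multiple shooting can be sketched numerically: stack the initial-condition residual and one model residual per shooting interval, and verify that a trajectory generated by the model itself satisfies g2(w) = 0. An illustrative Python sketch (helper names `f` and `dynamics_residuals` are my own; `f` is one Euler step of the unicycle model used in the slides):

```python
import math

def f(state, control, T=0.2):
    # one Euler step of the unicycle model over sampling period T
    x, y, th = state
    v, w = control
    return [x + T * v * math.cos(th), y + T * v * math.sin(th), th + T * w]

def dynamics_residuals(X, U, x0_meas):
    # multiple shooting equality constraints g2(w):
    # the first block ties X[0] to the measured initial state,
    # the remaining blocks enforce the model on each interval
    g2 = [xi - x0i for xi, x0i in zip(X[0], x0_meas)]
    for k in range(len(U)):
        pred = f(X[k], U[k])
        g2 += [p - xn for p, xn in zip(pred, X[k + 1])]
    return g2

# a state trajectory generated by the model itself must satisfy g2(w) = 0
x0 = [0.0, 0.0, 0.0]
U = [[0.6, 0.3]] * 4
X = [x0]
for u in U:
    X.append(f(X[-1], u))
res = dynamics_residuals(X, U, x0)
print(max(abs(r) for r in res))
```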
• Multiple shooting to transform the OCP into an NLP¹
Key idea: break down the system integration into short time intervals, i.e. use the system model as a state constraint at each optimization step.

• Multiple shooting is a lifted single shooting:
Lifting: reformulate a function with more variables so as to make it less nonlinear.
• The multiple shooting method is superior to single shooting, since "lifting" the problem to a higher dimension is known to improve convergence.
• The user is also able to initialize with a known guess for the state trajectory.
• The drawback is that the NLP solved gets much larger, although this is often compensated by the fact that it is also much sparser.

¹Joel Andersson, Introduction to CasADi, 2015
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 43
MATLAB Code using CasADi package (Multiple Shooting)
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

T = 0.2; %[s]
N = 3; % prediction horizon
rob_diam = 0.3;

v_max = 0.6; v_min = -v_max;
omega_max = pi/4; omega_min = -omega_max;

x = SX.sym('x'); y = SX.sym('y'); theta = SX.sym('theta');
states = [x;y;theta]; n_states = length(states);

v = SX.sym('v'); omega = SX.sym('omega');
controls = [v;omega]; n_controls = length(controls);
rhs = [v*cos(theta);v*sin(theta);omega]; % system r.h.s

f = Function('f',{states,controls},{rhs}); % nonlinear mapping function f(x,u)

U = SX.sym('U',n_controls,N); % Decision variables (controls)
P = SX.sym('P',n_states + n_states);
% parameters (which include the initial state and the reference state)
X = SX.sym('X',n_states,(N+1));
% A vector that represents the states over the optimization problem.
obj = 0; % Objective function
g = []; % constraints vector
Q = zeros(3,3); Q(1,1) = 1;Q(2,2) = 5;Q(3,3) = 0.1; % weighing matrices (states)
R = zeros(2,2); R(1,1) = 0.5; R(2,2) = 0.05; % weighing matrices (controls)
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 44
st = X(:,1); % initial state
g = [g;st-P(1:3)]; % initial condition constraints
for k = 1:N
st = X(:,k); con = U(:,k);
obj = obj+(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
st_next = X(:,k+1);
f_value = f(st,con);
st_next_euler = st+ (T*f_value);
g = [g;st_next-st_next_euler]; % compute constraints
end

The loop above builds the running cost and the dynamics constraints:

$$\ell(x, u) = \|x_u - x_r\|_Q^2 + \|u - u_r\|_R^2, \qquad J_N(x, u) = \sum_{k=0}^{N-1} \ell(x_u(k), u(k))$$

$$g_2(w) = \begin{bmatrix} x_0 - \bar{x}_0 \\ f(x_0, u_0) - x_1 \\ \vdots \\ f(x_{N-1}, u_{N-1}) - x_N \end{bmatrix} = 0$$

% make the decision variable one column vector
OPT_variables = [reshape(X,3*(N+1),1);reshape(U,2*N,1)];

nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level =0;%0,3
opts.print_time = 0;
opts.ipopt.acceptable_tol =1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;
solver = nlpsol('solver', 'ipopt', nlp_prob,opts);

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 45
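The quadratic stage cost accumulated by the objective loop above can be evaluated in a few lines. An illustrative Python sketch (function name `stage_cost` is my own; the diagonal weights match Q = diag(1, 5, 0.1) and R = diag(0.5, 0.05) in the MATLAB code, with the reference control taken as zero as in the slides):

```python
def stage_cost(x, u, x_ref, Q, R):
    # l(x,u) = ||x - x_ref||_Q^2 + ||u||_R^2 with diagonal weights
    err = [xi - ri for xi, ri in zip(x, x_ref)]
    return (sum(q * e * e for q, e in zip(Q, err))
            + sum(r * ui * ui for r, ui in zip(R, u)))

Q = [1.0, 5.0, 0.1]   # state weights (x, y, theta)
R = [0.5, 0.05]       # control weights (v, omega)
c = stage_cost([0.0, 0.0, 0.0], [0.6, 0.0], [1.5, 1.5, 0.0], Q, R)
print(c)  # 1*1.5^2 + 5*1.5^2 + 0.5*0.6^2, roughly 13.68
```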
args = struct;

args.lbg(1:3*(N+1)) = 0; % equality constraints (system dynamics)
args.ubg(1:3*(N+1)) = 0; % equality constraints (system dynamics)

args.lbx(1:3:3*(N+1),1) = -2; %state x lower bound
args.ubx(1:3:3*(N+1),1) = 2; %state x upper bound
args.lbx(2:3:3*(N+1),1) = -2; %state y lower bound
args.ubx(2:3:3*(N+1),1) = 2; %state y upper bound
args.lbx(3:3:3*(N+1),1) = -inf; %state theta lower bound
args.ubx(3:3:3*(N+1),1) = inf; %state theta upper bound

args.lbx(3*(N+1)+1:2:3*(N+1)+2*N,1) = v_min; %v lower bound
args.ubx(3*(N+1)+1:2:3*(N+1)+2*N,1) = v_max; %v upper bound
args.lbx(3*(N+1)+2:2:3*(N+1)+2*N,1) = omega_min; %omega lower bound
args.ubx(3*(N+1)+2:2:3*(N+1)+2*N,1) = omega_max; %omega upper bound

[Figure: robot at (x, y) inside the square workspace x, y in [-2, 2], inertial axes X_I and Y_I with origin O]

% THE SIMULATION LOOP
%---------------------
t0 = 0;
x0 = [0 ; 0 ; 0.0]; % initial condition.
xs = [1.5 ; 1.5 ; 0.0]; % Reference posture.
xx(:,1) = x0; % xx contains the history of states
t(1) = t0;
u0 = zeros(N,2); % two control inputs for each robot
X0 = repmat(x0,1,N+1)'; % initialization of the states decision variables
sim_tim = 20; % Maximum simulation time

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 46
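The strided indexing above (lbx(1:3:...), lbx(2:3:...), ...) interleaves the per-state bounds into the stacked vector [x0; y0; theta0; x1; y1; theta1; ...]. A minimal Python sketch of the same packing (function name `state_bounds` is my own):

```python
def state_bounds(N, lb_xy=-2.0, ub_xy=2.0):
    # bounds for the stacked state decision vector over N+1 shooting nodes,
    # mirroring the MATLAB strided indexing lbx(1:3:3*(N+1)) etc.
    inf = float('inf')
    lbx = [0.0] * (3 * (N + 1))
    ubx = [0.0] * (3 * (N + 1))
    for k in range(N + 1):
        lbx[3 * k],     ubx[3 * k]     = lb_xy, ub_xy   # x position
        lbx[3 * k + 1], ubx[3 * k + 1] = lb_xy, ub_xy   # y position
        lbx[3 * k + 2], ubx[3 * k + 2] = -inf, inf      # theta unconstrained
    return lbx, ubx

lbx, ubx = state_bounds(2)
print(lbx)  # pattern [-2.0, -2.0, -inf] repeated for each of the N+1 nodes
```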
% Start MPC
mpciter = 0; xx1 = []; u_cl=[];

while(norm((x0-xs),2) > 1e-2 && mpciter < sim_tim / T)
args.p = [x0;xs]; % set the values of the parameters vector
% initial value of the optimization variables
args.x0 = [reshape(X0',3*(N+1),1);reshape(u0',2*N,1)];
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
u = reshape(full(sol.x(3*(N+1)+1:end))',2,N)';
% get controls only from the solution
xx1(:,1:3,mpciter+1)= reshape(full(sol.x(1:3*(N+1)))',3,N+1)';
% get solution TRAJECTORY
u_cl= [u_cl ; u(1,:)];
t(mpciter+1) = t0;
% Apply the control and shift the solution
[t0, x0, u0] = shift(T, t0, x0, u,f);
xx(:,mpciter+2) = x0;
X0 = reshape(full(sol.x(1:3*(N+1)))',3,N+1)'; % get solution TRAJECTORY
% Shift trajectory to initialize the next step
X0 = [X0(2:end,:);X0(end,:)];
mpciter
mpciter = mpciter + 1;
end;
Draw_MPC_point_stabilization_v1 (t,xx,xx1,u_cl,xs,N,rob_diam)

MATLAB demo with large N

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 47
• Adding an obstacle as a path constraint

We would like to maintain a lower bound on the Euclidean distance between the predicted robot position and the obstacle position. Therefore, we impose the following path constraint:

$$\sqrt{(x - x_o)^2 + (y - y_o)^2} \ge r_r + r_o$$

$$\sqrt{(x - x_o)^2 + (y - y_o)^2} - (r_r + r_o) \ge 0$$

$$-\sqrt{(x - x_o)^2 + (y - y_o)^2} + (r_r + r_o) \le 0$$

Then, we need to ensure the above inequality constraint at all prediction steps.

[Figure: robot at (x, y) and circular obstacle at (x_o, y_o) in the inertial frame O-X_I-Y_I]

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 48
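The sign convention of the obstacle constraint above is easy to check numerically: a value of g below zero means the robot-obstacle distance exceeds the sum of the two radii (safe), while a positive value flags a collision. A minimal Python sketch (function name `obstacle_constraint` is my own; the radii correspond to rob_diam/2 and obs_diam/2 from the MATLAB code):

```python
import math

def obstacle_constraint(x, y, xo, yo, rr, ro):
    # g = -||(x,y) - (xo,yo)|| + (rr + ro); feasible (safe) when g <= 0
    return -math.hypot(x - xo, y - yo) + (rr + ro)

rr, ro = 0.15, 0.15   # robot and obstacle radii
g_safe = obstacle_constraint(1.0, 1.0, 0.5, 0.5, rr, ro)
g_hit  = obstacle_constraint(0.55, 0.5, 0.5, 0.5, rr, ro)
print(g_safe < 0, g_hit > 0)  # safe point satisfies g <= 0, near point violates it
```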
MATLAB Code using CasADi package
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

T = 0.2; %[s]
N = 14; % prediction horizon
rob_diam = 0.3;

v_max = 0.6; v_min = -v_max;
omega_max = pi/4; omega_min = -omega_max;

x = SX.sym('x'); y = SX.sym('y'); theta = SX.sym('theta');
states = [x;y;theta]; n_states = length(states);

v = SX.sym('v'); omega = SX.sym('omega');
controls = [v;omega]; n_controls = length(controls);
rhs = [v*cos(theta);v*sin(theta);omega]; % system r.h.s

f = Function('f',{states,controls},{rhs}); % nonlinear mapping function f(x,u)

U = SX.sym('U',n_controls,N); % Decision variables (controls)
P = SX.sym('P',n_states + n_states);
% parameters (which include the initial state of the robot and the reference state)

X = SX.sym('X',n_states,(N+1));
% A vector that represents the states over the optimization problem.

obj = 0; % Objective function
g = []; % constraints vector
Q = zeros(3,3); Q(1,1) = 1;Q(2,2) = 5;Q(3,3) = 0.1; % weighing matrices (states)
R = zeros(2,2); R(1,1) = 0.5; R(2,2) = 0.05; % weighing matrices (controls)

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 49
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 49
st = X(:,1); % initial state
g = [g;st-P(1:3)]; % initial condition constraints
for k = 1:N
st = X(:,k); con = U(:,k);
obj = obj+(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
st_next = X(:,k+1);
f_value = f(st,con);
st_next_euler = st+ (T*f_value);
g = [g;st_next-st_next_euler]; % compute constraints
end

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 50
st = X(:,1); % initial state
g = [g;st-P(1:3)]; % initial condition constraints
for k = 1:N
st = X(:,k); con = U(:,k);
obj = obj+(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
st_next = X(:,k+1);
f_value = f(st,con);
st_next_euler = st+ (T*f_value);
g = [g;st_next-st_next_euler]; % compute constraints
end
% Add constraints for collision avoidance
obs_x = 0.5; % meters
obs_y = 0.5; % meters ( ) (
− x − xo + y − yo + r + ro  0
2 2
) ( )
obs_diam = 0.3; % meters
for k = 1:N+1 % box constraints due to the map margins
g = [g ; -sqrt((X(1,k)-obs_x)^2+(X(2,k)-obs_y)^2) + (rob_diam/2 + obs_diam/2)];
end

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 50
st = X(:,1); % initial state
g = [g;st-P(1:3)]; % initial condition constraints
for k = 1:N
st = X(:,k); con = U(:,k);
obj = obj+(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
st_next = X(:,k+1);
f_value = f(st,con);
st_next_euler = st+ (T*f_value);
g = [g;st_next-st_next_euler]; % compute constraints
end
% Add constraints for collision avoidance
obs_x = 0.5; % meters
obs_y = 0.5; % meters ( ) (
− x − xo + y − yo + r + ro  0
2 2
) ( )
obs_diam = 0.3; % meters
for k = 1:N+1 % box constraints due to the map margins
g = [g ; -sqrt((X(1,k)-obs_x)^2+(X(2,k)-obs_y)^2) + (rob_diam/2 + obs_diam/2)];
end
% make the decision variable one column vector
OPT_variables = [reshape(X,3*(N+1),1);reshape(U,2*N,1)];
nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level =0;%0,3
opts.print_time = 0;
opts.ipopt.acceptable_tol =1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;
solver = nlpsol('solver', 'ipopt', nlp_prob,opts);
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 50
args = struct;
args.lbg(1:3*(N+1)) = 0; % equality constraints (system dynamics)
args.ubg(1:3*(N+1)) = 0; % equality constraints (system dynamics)

% obstacle-avoidance inequality constraints,
%   -sqrt((x - x_o)^2 + (y - y_o)^2) + (r_r + r_o) <= 0,
% are bounded between -inf and 0:
args.lbg(3*(N+1)+1 : 3*(N+1)+ (N+1)) = -inf;
args.ubg(3*(N+1)+1 : 3*(N+1)+ (N+1)) = 0;

args.lbx(1:3:3*(N+1),1) = -2; %state x lower bound
args.ubx(1:3:3*(N+1),1) = 2; %state x upper bound
args.lbx(2:3:3*(N+1),1) = -2; %state y lower bound
args.ubx(2:3:3*(N+1),1) = 2; %state y upper bound
args.lbx(3:3:3*(N+1),1) = -inf; %state theta lower bound
args.ubx(3:3:3*(N+1),1) = inf; %state theta upper bound

args.lbx(3*(N+1)+1:2:3*(N+1)+2*N,1) = v_min; %v lower bound
args.ubx(3*(N+1)+1:2:3*(N+1)+2*N,1) = v_max; %v upper bound
args.lbx(3*(N+1)+2:2:3*(N+1)+2*N,1) = omega_min; %omega lower bound
args.ubx(3*(N+1)+2:2:3*(N+1)+2*N,1) = omega_max; %omega upper bound

% THE SIMULATION LOOP STARTS HERE
%--------------------------------
t0 = 0;
x0 = [0 ; 0 ; 0.0]; % initial condition.
xs = [1.5 ; 1.5 ; 0.0]; % Reference posture.
xx(:,1) = x0; % xx contains the history of states
t(1) = t0;
u0 = zeros(N,2); % two control inputs for each robot
X0 = repmat(x0,1,N+1)'; % initialization of the states decision variables
sim_tim = 20; % Maximum simulation time

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 51
% Start MPC
mpciter = 0;
xx1 = [];
u_cl=[];

while(norm((x0-xs),2) > 1e-2 && mpciter < sim_tim / T)


args.p = [x0;xs]; % set the values of the parameters vector
% initial value of the optimization variables
args.x0 = [reshape(X0',3*(N+1),1);reshape(u0',2*N,1)];
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
u = reshape(full(sol.x(3*(N+1)+1:end))',2,N)'; % get controls only from the solution
xx1(:,1:3,mpciter+1)= reshape(full(sol.x(1:3*(N+1)))',3,N+1)';
% get solution TRAJECTORY

u_cl= [u_cl ; u(1,:)];


t(mpciter+1) = t0;

% Apply the control and shift the solution


[t0, x0, u0] = shift(T, t0, x0, u,f);
xx(:,mpciter+2) = x0;

X0 = reshape(full(sol.x(1:3*(N+1)))',3,N+1)'; % get solution TRAJECTORY

% Shift trajectory to initialize the next step


X0 = [X0(2:end,:);X0(end,:)];
mpciter = mpciter + 1;
end;
Draw_MPC_PS_Obstacles (t,xx,xx1,u_cl,xs,N,rob_diam,obs_x,obs_y,obs_diam)

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 52
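As an aside (a sketch, not part of the workshop code; the helper name `obstacle_margin` is made up here), the inequality enforced by the bounds above can be evaluated in isolation, assuming circular robot and obstacle footprints:

```python
import math

def obstacle_margin(x, y, obs_x, obs_y, rob_diam, obs_diam):
    """Path-constraint value -sqrt((x-obs_x)^2 + (y-obs_y)^2) + (rob_diam/2 + obs_diam/2).

    With lbg = -inf and ubg = 0, the solver keeps this value non-positive,
    i.e. the center distance stays at least the sum of the two radii.
    """
    return -math.hypot(x - obs_x, y - obs_y) + (rob_diam / 2 + obs_diam / 2)

print(obstacle_margin(0.0, 0.0, 2.0, 2.0, 0.3, 0.3) <= 0)  # far from obstacle -> True
print(obstacle_margin(2.0, 2.0, 2.0, 2.0, 0.3, 0.3) <= 0)  # on the obstacle  -> False
```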
MATLAB DEMO

Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 53
Agenda
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples.
Part I
Model Predictive Control (MPC)
▪ What is (MPC)?
▪ Mathematical Formulation of MPC.
▪ About MPC and Related Issues.
MPC Implementation to Mobile Robots control
▪ Considered System and Control Problem.
▪ OCP and NLP
▪ Single Shooting Implementation using CaSAdi.
▪ Multiple Shooting Implementation using CaSAdi.
▪ Adding obstacle (path constraints) + implementation
Part II
MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots
Conclusions
▪ Concluding remarks about MPC and MHE.
▪ What is NEXT?
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 54
• Moving Horizon Estimation (MHE) (aka Receding Horizon Estimation)

In control theory, a state observer (estimator) is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications.

Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches like the Kalman filter, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.[1]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 55
• Moving Horizon Estimation (MHE) (aka Receding Horizon Estimation)

Disturbed SISO simple example:

x(k+1) = f(x(k), u(k)) + ν_x
y(k) = h(x(k)) + ν_y = x(k) + ν_y

• At a certain time step k, acquire the past measurements over a window of size N_MHE.
• Find the state trajectory that best fits the considered window of measurements.
• This is done by minimizing the following cost function:

J_{N_MHE}(x, u) = Σ_{i = k−N_MHE}^{k} ||ỹ(i) − h(x(i))||²_V + Σ_{i = k−N_MHE}^{k−1} ||ũ(i) − u(i)||²_W

min_{x,u} J_{N_MHE}(x, u)
s.t.:  x_u(i+1) = f(x_u(i), u(i))
       u(i) ∈ U,  i ∈ [k − N_MHE, k − 1]
       x_u(i) ∈ X,  i ∈ [k − N_MHE, k]

[Figure: the MHE estimate is fitted to the sensor measurements (y) and the odometry (inputs) recorded over the window [k − N_MHE, k].]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 56
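To make the cost function concrete, here is a minimal numerical sketch (not from the slides; `mhe_cost` is a made-up helper) for the scalar example, assuming the output map h(x) = x and scalar weights V and W:

```python
def mhe_cost(x_traj, u_traj, y_meas, u_meas, V, W):
    """J_NMHE for candidate trajectories over one window (scalar SISO case).

    x_traj has N_MHE+1 states, u_traj has N_MHE controls; y_meas/u_meas are
    the recorded measurements. The output map h(x) = x is assumed.
    """
    output_term = sum(V * (y - x) ** 2 for y, x in zip(y_meas, x_traj))
    input_term = sum(W * (um - u) ** 2 for um, u in zip(u_meas, u_traj))
    return output_term + input_term

# A trajectory that reproduces the measurements exactly has zero cost.
x = [0.0, 0.1, 0.2]
u = [1.0, 1.0]
print(mhe_cost(x, u, y_meas=x, u_meas=u, V=10.0, W=5.0))  # -> 0.0
```

An NLP solver searches over x_traj and u_traj (subject to the model constraints) to drive this cost down.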
• Moving Horizon Estimation (MHE)

Advantages of MHE
• Deals with the exact nonlinear models of the system (motion and/or measurement).
• Considers the most recent window of measurements (the Markov assumption is relaxed).
• Handles state and/or control constraints in a natural way.

[Figure: the MHE estimate fitted to the sensor measurements (y) and the odometry (inputs) over the window [k − N_MHE, k].]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 57
• Considered system and state estimation problem (differential-drive robots)

[Block diagram: the reference values are compared with the estimated states; the resulting error drives the MPC controller, whose control signals are applied to the robot. A sensor measures the system states, and an MHE state estimator returns the estimated state vector x̂ = [x̂, ŷ, θ̂]^T, which closes the loop.]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 58
• Motion and measurement models (differential-drive robots)

Disturbed motion model:  x(k+1) = f(x(k), ū(k))

[x(k+1); y(k+1); θ(k+1)] = [x(k); y(k); θ(k)] + T [v̄(k) cos θ(k); v̄(k) sin θ(k); ω̄(k)]

Nominal control actions: v(·), ω(·)
Disturbed control actions: v̄(·) = v(·) + ν_v,  ω̄(·) = ω(·) + ν_ω
ν_v, ν_ω: additive Gaussian noise with standard deviations σ_v and σ_ω

Robot state vector: [x, y, θ]^T

[Figure: robot pose (x, y, θ) in the inertial frame (X_I, Y_I); a beacon at the origin O acts as a relative-measurements sensor, at range r from the robot.]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 59
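The disturbed model above can be simulated directly; the following is a hedged Python translation (the workshop itself uses MATLAB, and the function name `step` is made up here):

```python
import numpy as np

def step(state, u_nominal, T, sigma_v, sigma_w, rng):
    """One Euler step of the disturbed unicycle model.

    state = (x, y, theta); u_nominal = (v, omega). Gaussian noise with
    standard deviations sigma_v and sigma_w is added to the controls
    before they act on the model, as in the slides.
    """
    x, y, theta = state
    v = u_nominal[0] + sigma_v * rng.standard_normal()
    w = u_nominal[1] + sigma_w * rng.standard_normal()
    return (x + T * v * np.cos(theta),
            y + T * v * np.sin(theta),
            theta + T * w)

rng = np.random.default_rng(0)
pose = (0.0, 0.0, 0.0)
for _ in range(10):  # drive forward with noisy controls
    pose = step(pose, (0.6, 0.0), 0.2, 0.005, np.deg2rad(2), rng)
```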
• Motion and measurement models (differential-drive robots)

Measurement model:  y(k) = h(x(k)) + ν_y

[r; α] = [√(x² + y²); arctan(y/x)] + [ν_r; ν_α]

r: relative range measurement
α: relative bearing measurement
ν_r, ν_α: additive Gaussian noise with standard deviations σ_r and σ_α

Observable? Yes, but ONLY when the linear speed v(·) is nonzero.

[Figure: robot state vector [x, y, θ]^T in the inertial frame (X_I, Y_I); the beacon at the origin O measures the relative range r and bearing α.]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 60
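Likewise, the range/bearing measurement model can be sketched as follows (illustrative Python; `measure` is a made-up name):

```python
import numpy as np

def measure(state, sigma_r, sigma_a, rng):
    """Range/bearing measurement of the robot from a beacon at the origin,
    with additive Gaussian noise on each channel, as in the slides' model."""
    x, y, _theta = state
    r = np.hypot(x, y) + sigma_r * rng.standard_normal()
    alpha = np.arctan(y / x) + sigma_a * rng.standard_normal()
    return r, alpha

rng = np.random.default_rng(1)
r, alpha = measure((1.0, 1.0, 0.3), 0.1, np.deg2rad(2), rng)
```

Note that the heading θ does not appear in h(x), which is why it must be inferred indirectly, and only while the robot is moving.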
• Moving Horizon Estimation (MHE) tuning

MHE optimization problem:

J_{N_MHE}(x, u) = Σ_{i = k−N_MHE}^{k} ||ỹ(i) − h(x(i))||²_V + Σ_{i = k−N_MHE}^{k−1} ||ũ(i) − u(i)||²_W

min_{x,u} J_{N_MHE}(x, u)
s.t.:  x_u(i+1) − f(x_u(i), u(i)) = 0
       u(i) ∈ U,  i ∈ [k − N_MHE, k − 1]
       x_u(i) ∈ X,  i ∈ [k − N_MHE, k]

The MHE tuning matrices are derived from the noise statistics:
ν_v, ν_ω: additive Gaussian noise with standard deviations σ_v and σ_ω
ν_r, ν_α: additive Gaussian noise with standard deviations σ_r and σ_α

V = [σ_r 0; 0 σ_α]⁻¹,   W = [σ_v 0; 0 σ_ω]⁻¹

[Figure: the MHE estimate fitted to the sensor measurements (y) and the odometry (inputs) over the window [k − N_MHE, k].]
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 61
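A quick numerical sketch of this tuning rule, assuming the same noise levels used later in the MATLAB code (`con_cov`, `meas_cov`): the weighting matrices are the inverses of the noise standard-deviation matrices.

```python
import numpy as np

# Covariances as in the MATLAB listings: diag of standard deviations, squared.
con_cov = np.diag([0.005, np.deg2rad(2)]) ** 2    # control-noise covariance
meas_cov = np.diag([0.1, np.deg2rad(2)]) ** 2     # measurement-noise covariance

# Element-wise sqrt of a diagonal covariance recovers the std-dev matrix;
# its inverse weighs the residuals: noisier channels get smaller weights.
V = np.linalg.inv(np.sqrt(meas_cov))  # weighs y_tilde - h(x)
W = np.linalg.inv(np.sqrt(con_cov))   # weighs u_tilde - u
```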
• Simulating the disturbed system and visualizing simulation results

t0 = 0;
x0 = [0.1 ; 0.1 ; 0.0]; % initial condition.
xs = [1.5 ; 1.5 ; 0.0]; % Reference posture.
xx(:,1) = x0; % xx contains the history of states
t(1) = t0;
u0 = zeros(N,2); % two control inputs for each robot
X0 = repmat(x0,1,N+1)'; % initialization of the states decision variables
sim_tim = 20; % total sampling times
% Start MPC
mpciter = 0;
xx1 = [];
u_cl=[];
while(norm((x0-xs),2) > 0.05 && mpciter < sim_tim / T)
    args.p = [x0;xs]; % set the values of the parameters vector
    % initial value of the optimization variables
    args.x0 = [reshape(X0',3*(N+1),1);reshape(u0',2*N,1)];
    sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
        'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
    u = reshape(full(sol.x(3*(N+1)+1:end))',2,N)'; % get controls only from the solution
    xx1(:,1:3,mpciter+1)= reshape(full(sol.x(1:3*(N+1)))',3,N+1)'; % get sol. TRAJECTORY
    u_cl= [u_cl ; u(1,:)];
    t(mpciter+1) = t0;
    % Apply the control and shift the solution
    [t0, x0, u0] = shift(T, t0, x0, u, f);
    xx(:,mpciter+2) = x0;
    X0 = reshape(full(sol.x(1:3*(N+1)))',3,N+1)'; % get solution TRAJECTORY
    % Shift trajectory to initialize the next step
    X0 = [X0(2:end,:);X0(end,:)];
    mpciter = mpciter + 1;
end

function [t0, x0, u0] = shift(T, t0, x0, u, f)
% add noise to the control actions before applying them
con_cov = diag([0.005 deg2rad(2)]).^2;
con = u(1,:)' + sqrt(con_cov)*randn(2,1);
st = x0;
f_value = f(st,con);
st = st + (T*f_value);
x0 = full(st);
t0 = t0 + T;
u0 = [u(2:size(u,1),:);u(size(u,1),:)]; % shift the control actions
end
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 62
• Simulating the disturbed system and visualizing simulation results

[Figures: the nominal control actions sent by the controller, and the resulting closed-loop performance of the robot's states.]

In this presentation, we perform the state estimation offline, i.e. we simulate the disturbed system first, synthesize the measurements, and finally apply the state estimator (MHE).
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 63
• Simulating the disturbed system and visualizing simulation results

Measurements are synthesized according to the measurement model:
y(k) = h(x(k)) + ν_y,  [r; α] = [√(x² + y²); arctan(y/x)] + [ν_r; ν_α]

% Synthesize the measurements
con_cov = diag([0.005 deg2rad(2)]).^2;
meas_cov = diag([0.1 deg2rad(2)]).^2;
r = [];
alpha = [];
for k = 1: length(xx(1,:))-1
    r = [r; sqrt(xx(1,k)^2+xx(2,k)^2) + sqrt(meas_cov(1,1))*randn(1)];
    alpha = [alpha; atan(xx(2,k)/xx(1,k)) + sqrt(meas_cov(2,2))*randn(1)];
end
y_measurements = [ r , alpha ];

% Plot the cartesian coordinates computed from the measurements
figure(1)
subplot(311)
plot(t,r.*cos(alpha),'r','linewidth',1.5); hold on
grid on
legend('Ground Truth','Measurement')
subplot(312)
plot(t,r.*sin(alpha),'r','linewidth',1.5); hold on
grid on
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 64
• Simulating the disturbed system and visualizing simulation results

• Indeed, by a simple trigonometric resolution we can obtain the states x and y from the measurements.
• However, the resulting states are noisy when compared to the ground-truth data.
• Moreover, this method cannot be used to obtain the robot's angular deviation (θ).
• Therefore, a state estimation scheme, e.g. MHE, can be used to estimate the missing state (θ) and improve the estimates of the states (x, y).
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 65
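The "simple resolution" referred to above is just the inverse trigonometric map x = r·cos α, y = r·sin α; a minimal sketch (illustrative Python; `resolve` is a made-up name):

```python
import numpy as np

def resolve(r, alpha):
    """Convert noisy range/bearing measurements back to Cartesian coordinates.

    The measurement noise propagates directly into these raw estimates,
    which is why a filter such as MHE is still needed afterwards.
    """
    r = np.asarray(r)
    alpha = np.asarray(alpha)
    return r * np.cos(alpha), r * np.sin(alpha)

x_raw, y_raw = resolve([1.0, 2.0], [0.0, np.pi / 2])
```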
• Implementation of MHE using MATLAB + CasADi package

% The following two matrices contain what we know about the system, i.e.
% the nominal control actions applied (measured) and the range and bearing
% measurements.
%-------------------------------------------------------------------------
u_cl;
y_measurements;

Sample contents (columns of u_cl are v and ω; columns of y_measurements are r and α):

u_cl               y_measurements
0.6000   0.7854    0.3438   0.7066
0.6000   0.7854    0.4655   0.4368
0.6000   0.7854    0.4598   0.2791
0.6000   0.7854    0.4211   0.3232
0.6000   0.7854    0.6413   0.3020
0.6000   0.7854    0.7621   0.3619
0.6000   0.7854    0.8330   0.4820
0.6000   0.7373    0.9660   0.5572
0.6000   0.0666    1.1363   0.5600
0.6000  -0.1554    1.1598   0.6155
0.6000  -0.3171    1.1015   0.7358
0.6000  -0.4021    1.3396   0.7475
0.6000  -0.4870    1.5397   0.8035
0.6000  -0.5883    1.5984   0.8494
0.6000  -0.6573    1.7741   0.8272
0.5870  -0.6576    1.7323   0.8000
0.3723  -0.5833    2.0366   0.7612
0.2422  -0.4932    1.9872   0.7414
0.1599  -0.3761    2.1376   0.8318
0.1081  -0.3184    2.0642   0.7928
0.0741  -0.2628    1.8335   0.8108
0.0483  -0.2088    1.8759   0.7057
0.0297  -0.1514    2.1112   0.7509
0.0201  -0.1320    2.1490   0.8060
0.0119  -0.1031    2.1969   0.7570
0.0073  -0.0791    2.1573   0.7842
0.0037  -0.0647    2.1962   0.7988
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 66
• Implementation of MHE using MATLAB + CasADi package

% The following two matrices contain what we know about the system, i.e.
% the nominal control actions applied (measured) and the range and bearing
% measurements.
%-------------------------------------------------------------------------
u_cl;
y_measurements;

% First, we will estimate the whole closed-loop trajectory in one MHE step
T = 0.2; %[s]
N_MHE = size(y_measurements,1)-1; % Estimation horizon

v_max = 0.6; v_min = -v_max;
omega_max = pi/4; omega_min = -omega_max;

x = SX.sym('x'); y = SX.sym('y'); theta = SX.sym('theta');
states = [x;y;theta]; n_states = length(states);
v = SX.sym('v'); omega = SX.sym('omega');
controls = [v;omega]; n_controls = length(controls);

rhs = [v*cos(theta);v*sin(theta);omega]; % system r.h.s
f = Function('f',{states,controls},{rhs}); % MOTION MODEL

r = SX.sym('r'); alpha = SX.sym('alpha'); % range and bearing
measurement_rhs = [sqrt(x^2+y^2); atan(y/x)];
h = Function('h',{states},{measurement_rhs}); % MEASUREMENT MODEL
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 66
% Decision variables
U = SX.sym('U',n_controls,N_MHE); %(controls)
X = SX.sym('X',n_states,(N_MHE+1)); %(states) [remember multiple shooting]

P = SX.sym('P', 2 , (N_MHE+1) + N_MHE);
% parameters (include r and alpha measurements as well as controls measurements)

V = inv(sqrt(meas_cov)); % weighing matrix (output): y_tilde - y
W = inv(sqrt(con_cov)); % weighing matrix (input): u_tilde - u

% Objective: J_NMHE = sum_{i=k-N_MHE}^{k}   ||y~(i) - h(x(i))||_V^2
%                   + sum_{i=k-N_MHE}^{k-1} ||u~(i) - u(i)||_W^2
obj = 0; % Objective function
g = []; % constraints vector
for k = 1:N_MHE+1
    st = X(:,k);
    h_x = h(st);
    y_tilde = P(:,k);
    obj = obj+ (y_tilde-h_x)' * V * (y_tilde-h_x); % calculate obj
end
for k = 1:N_MHE
    con = U(:,k);
    u_tilde = P(:,N_MHE+ k);
    obj = obj+ (u_tilde-con)' * W * (u_tilde-con); % calculate obj
end
% multiple shooting constraints
for k = 1:N_MHE
    st = X(:,k); con = U(:,k);
    st_next = X(:,k+1);
    f_value = f(st,con);
    st_next_euler = st+ (T*f_value);
    g = [g;st_next-st_next_euler]; % compute constraints
end
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 67
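The multiple-shooting loop above appends one defect per interval; the idea can be sketched numerically as follows (illustrative Python with made-up helper names; the workshop code builds the same defects symbolically in CasADi):

```python
import numpy as np

def unicycle_rhs(state, u):
    """Continuous-time right-hand side of the unicycle model."""
    x, y, theta = state
    v, w = u
    return np.array([v * np.cos(theta), v * np.sin(theta), w])

def shooting_defects(X, U, T):
    """Defects g_k = X[k+1] - (X[k] + T*f(X[k], U[k])) for each interval.

    X: (N+1, 3) candidate states; U: (N, 2) candidate controls. The NLP
    enforces g_k = 0, tying the state decision variables to the model.
    """
    return np.array([X[k + 1] - (X[k] + T * unicycle_rhs(X[k], U[k]))
                     for k in range(len(U))])

# A trajectory generated by the same Euler map has zero defects.
T = 0.2
U = np.tile([0.6, 0.1], (5, 1))
X = np.zeros((6, 3))
for k in range(5):
    X[k + 1] = X[k] + T * unicycle_rhs(X[k], U[k])
print(np.allclose(shooting_defects(X, U, T), 0.0))  # -> True
```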
% make the decision variable one column vector
OPT_variables = [reshape(X,3*(N_MHE+1),1);reshape(U,2*N_MHE,1)];

nlp_mhe = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

opts = struct;
opts.ipopt.max_iter = 2000;
opts.ipopt.print_level =0;%0,3
opts.print_time = 0;
opts.ipopt.acceptable_tol =1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_mhe,opts);

args = struct;

args.lbg(1:3*(N_MHE)) = 0; % equality constraints


args.ubg(1:3*(N_MHE)) = 0; % equality constraints

args.lbx(1:3:3*(N_MHE+1),1) = -2; %state x lower bound


args.ubx(1:3:3*(N_MHE+1),1) = 2; %state x upper bound
args.lbx(2:3:3*(N_MHE+1),1) = -2; %state y lower bound
args.ubx(2:3:3*(N_MHE+1),1) = 2; %state y upper bound
args.lbx(3:3:3*(N_MHE+1),1) = -pi/2; %state theta lower bound
args.ubx(3:3:3*(N_MHE+1),1) = pi/2; %state theta upper bound

args.lbx(3*(N_MHE+1)+1:2:3*(N_MHE+1)+2*N_MHE,1) = v_min; %v lower bound


args.ubx(3*(N_MHE+1)+1:2:3*(N_MHE+1)+2*N_MHE,1) = v_max; %v upper bound
args.lbx(3*(N_MHE+1)+2:2:3*(N_MHE+1)+2*N_MHE,1) = omega_min; %omega lower bound
args.ubx(3*(N_MHE+1)+2:2:3*(N_MHE+1)+2*N_MHE,1) = omega_max; %omega upper bound
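For readers without CasADi at hand: the nlpsol call above solves min_w f(w, p) subject to lbg <= g(w, p) <= ubg and lbx <= w <= ubx. A hypothetical toy analogue of that structure with SciPy's SLSQP (problem and numbers are made up for illustration, not the MHE problem itself):

```python
import numpy as np
from scipy.optimize import minimize

# Toy analogue of the (f, x, g) structure: minimize a quadratic cost
# subject to one equality constraint (lbg = ubg = 0) and box bounds.
f = lambda w: (w[0] - 1.0)**2 + (w[1] - 2.0)**2   # objective f(w)
g = lambda w: w[0] + w[1] - 1.0                   # equality constraint g(w) = 0

res = minimize(f, x0=[0.0, 0.0], method='SLSQP',
               bounds=[(-2.0, 2.0), (-2.0, 2.0)],      # lbx/ubx
               constraints={'type': 'eq', 'fun': g})   # lbg = ubg = 0
# optimum: w = (0, 1)
```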
% MHE Simulation
%------------------------------------------
U0 = zeros(N_MHE,2); % two control inputs for each robot
X0 = zeros(N_MHE+1,3); % initialization of the states decision variables

U0 = u_cl(1:N_MHE,:); % initialize the control actions with the measured (applied) controls


% initialize the states from the measured range and bearing
X0(:,1:2) = [y_measurements(1:N_MHE+1,1).*cos(y_measurements(1:N_MHE+1,2)),...
y_measurements(1:N_MHE+1,1).*sin(y_measurements(1:N_MHE+1,2))];

k=1;
% Get the measurements window and put it as parameters in p
args.p = [y_measurements(k:k+N_MHE,:)',u_cl(k:k+N_MHE-1,:)'];
% initial value of the optimization variables
args.x0 = [reshape(X0',3*(N_MHE+1),1);reshape(U0',2*N_MHE,1)];
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
U_sol = reshape(full(sol.x(3*(N_MHE+1)+1:end))',2,N_MHE)';
% get controls only from the solution
X_sol = reshape(full(sol.x(1:3*(N_MHE+1)))',3,N_MHE+1)';
% get solution TRAJECTORY
figure(1)
subplot(311)
plot(t,X_sol(:,1),'g','linewidth',1.5); hold on
legend('Ground Truth','Measurement','MHE')
subplot(312)
plot(t,X_sol(:,2),'g','linewidth',1.5); hold on
subplot(313)
plot(t,X_sol(:,3),'g','linewidth',1.5); hold on
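The state initialization above uses the polar-to-Cartesian relations x = r*cos(alpha), y = r*sin(alpha). A minimal NumPy sketch of that conversion (measurement values are made up for illustration):

```python
import numpy as np

# Range/bearing measurements (r, alpha) converted to Cartesian position
# guesses, as in the X0(:,1:2) initialization above.
y_meas = np.array([[1.0, 0.0],          # r = 1, alpha = 0    -> (1, 0)
                   [2.0, np.pi / 2]])   # r = 2, alpha = pi/2 -> (0, 2)
xy0 = np.column_stack([y_meas[:, 0] * np.cos(y_meas[:, 1]),
                       y_meas[:, 0] * np.sin(y_meas[:, 1])])
```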
• Estimation results

% MHE Simulation loop starts here: estimate the whole closed-loop
% trajectory in a moving horizon fashion
%------------------------------------------
X_estimate = []; % X_estimate contains the MHE estimate of the states
U_estimate = []; % U_estimate contains the MHE estimate of the controls
U0 = zeros(N_MHE,2); % two control inputs for each robot
X0 = zeros(N_MHE+1,3); % initialization of the states decision variables
% Start MHE
mheiter = 0;

U0 = u_cl(1:N_MHE,:); % initialize the control actions with the measured (applied) controls


% initialize the states from the measured range and bearing
X0(:,1:2) = [y_measurements(1:N_MHE+1,1).*cos(y_measurements(1:N_MHE+1,2)),...
y_measurements(1:N_MHE+1,1).*sin(y_measurements(1:N_MHE+1,2))];

for k = 1: size(y_measurements,1) - (N_MHE)


mheiter = k
% Get the measurements window and put it as parameters in p
args.p = [y_measurements(k:k+N_MHE,:)',u_cl(k:k+N_MHE-1,:)'];
% initial value of the optimization variables
args.x0 = [reshape(X0',3*(N_MHE+1),1);reshape(U0',2*N_MHE,1)];
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
U_sol = reshape(full(sol.x(3*(N_MHE+1)+1:end))',2,N_MHE)';
% get controls only from the solution
X_sol = reshape(full(sol.x(1:3*(N_MHE+1)))',3,N_MHE+1)'; % get solution TRAJECTORY
X_estimate = [X_estimate;X_sol(N_MHE+1,:)];
U_estimate = [U_estimate;U_sol(N_MHE,:)];

% Shift trajectory to initialize the next step


X0 = [X_sol(2:end,:);X_sol(end,:)];
U0 = [U_sol(2:end,:);U_sol(end,:)];
end;
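At the end of each iteration the previous solution is shifted one step forward and the last entry is duplicated, so the next window starts from a good initial guess. A NumPy sketch of that warm-start shift (numbers are made up):

```python
import numpy as np

# Warm-start shift: drop the oldest estimate, duplicate the newest one.
# Mirrors the MATLAB line X0 = [X_sol(2:end,:); X_sol(end,:)].
X_sol = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.0, 0.1],
                  [0.2, 0.1, 0.2]])        # made-up 3-step MHE estimate
X0 = np.vstack([X_sol[1:], X_sol[-1]])     # shifted initial guess
```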
• Estimation results (N_MHE = 6 vs N_MHE = 8)
• Estimation results (N_MHE = 6): nominal control actions sent by the controller

Observable? Yes, but ONLY when the linear speed v(.) is nonzero.
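This observability remark can be checked numerically: with v = 0 the robot's position never changes, so range/bearing data are identical for any heading and theta cannot be recovered. A small simulation sketch (initial values and parameters are made up for illustration):

```python
import numpy as np

def measurements(theta0, v, omega, T=0.2, steps=5):
    """Simulate a unicycle with Euler steps; return (range, bearing) data."""
    x, y, th = 0.5, 0.5, theta0
    out = []
    for _ in range(steps):
        x += T * v * np.cos(th)
        y += T * v * np.sin(th)
        th += T * omega
        out.append((np.hypot(x, y), np.arctan2(y, x)))
    return np.array(out)

m1 = measurements(theta0=0.0, v=0.0, omega=0.3)
m2 = measurements(theta0=1.0, v=0.0, omega=0.3)  # different heading, same data
m3 = measurements(theta0=1.0, v=0.6, omega=0.3)  # nonzero v: data reveal theta
assert np.allclose(m1, m2) and not np.allclose(m2, m3)
```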

Agenda
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples.
Part I
Model Predictive Control (MPC)
▪ What is (MPC)?
▪ Mathematical Formulation of MPC.
▪ About MPC and Related Issues.
MPC Implementation to Mobile Robots control
▪ Considered System and Control Problem.
▪ OCP and NLP
▪ Single Shooting Implementation using CaSAdi.
▪ Multiple Shooting Implementation using CaSAdi.
▪ Adding obstacle (path constraints) + implementation
Part II
MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots
Conclusions
▪ Concluding remarks about MPC and MHE.
▪ What is NEXT?
• Concluding remarks about MPC and MHE
• Terminologies you are familiar with now

• Numerical Optimization, NLPs, QPs, LPs, cost function, decision variables, constraints (equality and inequality), local minimum, global minimum, objective minimization, objective maximization.
• Model Predictive Control, controllability, prediction model, prediction horizon, OCPs, single shooting, multiple shooting, obstacle avoidance.
• State estimation, observability, Moving Horizon Estimation, estimation horizon.

• Concluding remarks about MPC and MHE

• MPC and MHE are optimization-based methods for control and state estimation.
• Both methods can be applied to nonlinear MIMO systems.
• Their mathematical formulations are similar (i.e. OCPs).
• Physical constraints can be easily incorporated in the related OCPs.
• To solve a given OCP numerically, we need to transform it into a nonlinear programming problem (NLP).
• Single shooting and multiple shooting are methods to express an OCP as an NLP.
• Implementation of MPC and MHE can be fairly straightforward using off-the-shelf software packages, e.g. CasADi.

• What is NEXT?

• The given code can be adapted, in a general sense, to any similar MIMO system.
• Implementation can be accelerated using the code-generation feature in CasADi; see the manual for more details.
• For real-time implementation, you may also consider the ACADO toolkit (currently not as well supported as CasADi); a newer successor of ACADO is known as acados (https://github.com/acados).
• Combining MHE with MPC: a very good exercise! ☺
• Distributed Model Predictive Control (DMPC).
• …

MHE and MPC demo

• Publications
• Mohamed W. Mehrez, Optimization Based Solutions for Control and State Estimation in
Non-holonomic Mobile Robots: Stability, Distributed Control, and Relative Localization,
PhD Thesis, Memorial University of Newfoundland.

• MPC Stability (Single-Robot Control)
• Karl Worthmann, Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Jürgen Pannek, Interaction of Open and Closed Loop Control in MPC, Automatica, vol. 82, pp. 243-250, 2017.
• Mohamed W. Mehrez, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Timm Faulwasser, Predictive Path Following of Mobile Robots without Terminal Stabilizing Constraints, in Proceedings of the IFAC 2017 World Congress, Toulouse, France, 2017.
• Mohamed W. Mehrez, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Jürgen Pannek, Experimental Speedup and Stability Validation for Multi-Step MPC, in Proceedings of the IFAC 2017 World Congress, Toulouse, France, 2017.
• Karl Worthmann, Mohamed W. Mehrez, Mario Zanon, George K. I. Mann, Raymond G. Gosine, Moritz Diehl, Model Predictive Control of Nonholonomic Mobile Robots without Stabilizing Constraints and Costs, IEEE Transactions on Control Systems Technology, vol. 24, no. 4, pp. 1394-1406, 2016.
• Karl Worthmann, Mohamed W. Mehrez, Mario Zanon, George K. I. Mann, Raymond G. Gosine, Moritz Diehl, Regulation of Differential Drive Robots Using Continuous Time MPC Without Stabilizing Constraints or Costs, in Proceedings of the 5th IFAC Conference on Nonlinear Model Predictive Control (NMPC'15), pp. 129-135, Sevilla, Spain, 2015.
• Publications
• MPC and MHE Implementation Studies

Multi-Robot Control:
• Mohamed W. Mehrez, Tobias Sprodowski, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Juliana K. Sagawa, Jürgen Pannek, Occupancy Grid based Distributed MPC for Mobile Robots, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Formation stabilization of nonholonomic robots using nonlinear model predictive control, IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-6, Toronto, ON, Canada, 2014.
• Tobias Sprodowski, Mohamed W. Mehrez, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Juliana K. Sagawa, Jürgen Pannek, Differential communication with distributed MPC based on occupancy grid, Information Sciences, vol. 453, 2018.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, An Optimization Based Approach for Relative Localization and Relative Tracking Control in Multi-Robot Systems, Journal of Intelligent and Robotic Systems, vol. 85, no. 2, pp. 385-408, 2017.

Single-Robot Control and MHE:
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Nonlinear moving horizon state estimation for multi-robot relative localization, IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-5, Toronto, ON, Canada, 2014.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Comparison of Stabilizing NMPC Designs for Wheeled Mobile Robots: an Experimental Study, Moratuwa Engineering Research Conference, pp. 130-135, Moratuwa, Sri Lanka, 2015.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Stabilizing NMPC of wheeled mobile robots using open-source real-time software, 16th International Conference on Advanced Robotics (ICAR), pp. 1-6, Uruguay, 2013.
Thank you! [email protected]