MPC MHE Slides
Optimization based Solutions for Control & Estimation
Waterloo, January, 2019
M. W. Mehrez
Agenda
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples
Part I
Model Predictive Control (MPC)
▪ What is MPC?
▪ Mathematical Formulation of MPC
▪ About MPC and Related Issues
MPC Implementation for Mobile Robot Control
▪ Considered System and Control Problem
▪ OCP and NLP
▪ Single Shooting Implementation using CasADi
▪ Multiple Shooting Implementation using CasADi
▪ Adding Obstacles (Path Constraints) + Implementation
Part II
MHE and Implementation for State Estimation
▪ Mathematical Formulation of MHE
▪ Implementation for a State Estimation Problem in Mobile Robots
Conclusions
▪ Concluding Remarks about MPC and MHE
▪ What is Next?
• Background
Why Optimization?
A general minimization problem has the form
    min_w φ(w)
    s.t.  g1(w) ≤ 0,
          g2(w) = 0.
A maximization problem is equivalent to minimizing the negated objective:
    max_w φ(w)  ⇔  min_w −φ(w)
Constraints restrict the values the optimizer may choose, e.g.
    s.t.  w ≥ 0,
          w ≤ 4.
The minimizer of a problem is denoted by
    w* = arg min_w φ(w)
By direct substitution we can get the corresponding value of the objective function:
    φ* := φ(w*) = min_w φ(w)
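The distinction above, between the minimizer w* = arg min φ(w) and the optimal value φ* = min φ(w), and the max-as-negated-min equivalence can be checked with a small numerical sketch. This is illustrative only: a coarse grid search in plain Python over a hypothetical quadratic objective (not a function from the slides).

```python
# Illustrative only: coarse grid search distinguishing the minimizer
# (arg min) from the optimal objective value (min).
def phi(w):
    return (w - 2.0) ** 2 + 1.0  # hypothetical objective, minimum at w = 2

grid = [i * 0.01 for i in range(-500, 501)]   # w in [-5, 5]
w_star = min(grid, key=phi)                    # w* = arg min phi(w)
phi_star = phi(w_star)                         # phi* = min phi(w)

# Maximization is minimization of the negated objective:
w_max = min(grid, key=lambda w: -phi(w))       # arg max phi(w) on the grid

print(w_star, phi_star, w_max)
```

On this grid the minimizer is w* = 2 with φ* = 1, while the maximizer over [−5, 5] sits at the boundary w = −5.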
• Motivation Example 1
Find the local minimum of the following function
    φ(w) = w² − 6w + 13
Writing this problem as an NLP, we have
    min_w  w² − 6w + 13
Note that this optimization problem is unconstrained, i.e. the optimization variable (w) can be freely chosen by the optimizer.
Analytically, we first find an expression for the gradient, i.e.
    dφ(w)/dw = 2w − 6
Then, we find the point at which the gradient is zero, i.e. the solution of the minimization problem:
    w = 3
The value of φ(w) at the solution (w = 3) is equal to 4.
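The same stationary point can be found numerically. The sketch below is a plain-Python gradient descent on the example's objective (not the CasADi code the slides use later); the step size 0.1 is an arbitrary illustrative choice.

```python
# Gradient descent on phi(w) = w^2 - 6w + 13, whose gradient is 2w - 6.
def phi(w):
    return w * w - 6 * w + 13

def grad(w):
    return 2 * w - 6

w = 0.0                       # initial guess
for _ in range(100):
    w -= 0.1 * grad(w)        # step along the negative gradient

print(round(w, 4), round(phi(w), 4))  # prints: 3.0 4.0
```

Because the objective is convex, any starting point converges to the same minimizer w = 3 with objective value 4, matching the analytical solution.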
CasADi is available at: https://web.casadi.org/get/
Solving using CasADi
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

x = SX.sym('w');           % decision variable
obj = x^2 - 6*x + 13;      % objective function
nlp_prob = struct('f', obj, 'x', x, 'g', [], 'p', []);  % NLP structure

>> x
x = w

>> nlp_prob
struct with fields:
    f: [1×1 casadi.SX]
    x: [1×1 casadi.SX]
    g: []
    p: []
opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level = 0; %0,3
opts.print_time = 0; %0,1
opts.ipopt.acceptable_tol = 1e-8; % optimality convergence tolerance
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_prob, opts);

Ipopt (Interior Point Optimizer)* is an open-source software package for large-scale nonlinear optimization. It can be used to solve general nonlinear programming problems (NLPs).
* Check the IPOPT manual for more details about the options you can set.

args = struct;
args.lbx = -inf; % unconstrained optimization
args.ubx = inf;  % unconstrained optimization
args.lbg = -inf; % unconstrained optimization
args.ubg = inf;  % unconstrained optimization
• Motivation Example 2
Solve the following optimization problem (the objective function is shown only as a plot on the original slide)
    s.t.  w ≥ 0,
          w ≤ 4.
Solving using CasADi
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

opts = struct;
opts.ipopt.max_iter = 1000;
opts.ipopt.print_level = 3; %0,3
opts.print_time = 0; %0,1
opts.ipopt.acceptable_tol = 1e-8; % optimality convergence tolerance
opts.ipopt.acceptable_obj_change_tol = 1e-6;
Solving using CasADi
args.p = [];  % There are no parameters in this optimization problem
args.x0 = 1;  % initialization of the optimization problem

Running the solver from different initial guesses:
>> args.x0 = 1   gives  x_sol = 0
>> args.x0 = 4   gives  x_sol = 4.9098
>> args.x0 = 10  gives  x_sol = 11.1930
Ipopt is a local solver: depending on the initialization, it converges to a different local minimum.
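The initialization dependence above can be reproduced with a toy experiment. The objective below is a hypothetical multimodal function (the slides' actual objective appears only as a plot and is not reproduced here); plain gradient descent started from two different points lands in two different local minima, just as Ipopt does.

```python
import math

# Hypothetical multimodal objective (NOT the function from the slides):
# it has several local minima, so a local descent method's answer
# depends on the initial guess.
def phi(w):
    return math.cos(w) + 0.05 * w * w

def grad(w):
    return -math.sin(w) + 0.1 * w

def descend(w0, step=0.1, iters=500):
    """Plain gradient descent from the initial guess w0."""
    w = w0
    for _ in range(iters):
        w -= step * grad(w)
    return w

w_a = descend(1.0)    # converges to one local minimum (near w ~ 2.85)
w_b = descend(8.0)    # converges to a different local minimum (near w ~ 8.42)
print(round(w_a, 3), round(w_b, 3))
```

Both results are stationary points (the gradient vanishes at each), but they are far apart; neither run "knows" about the other basin.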
• Motivation Example 3
For the following set of data points, fit a straight line of the form
    y = m·x + c
• Here, we minimize the sum of the squared errors between the line and the data points (least squares).
• Two optimization variables (m and c).
• The objective function is simply the sum of the squared errors:
    φ(m, c) = Σ_{i=1}^{ndata} ( y(i) − (m·x(i) + c) )²
Implementation:
clear all
close all
clc

line_width = 1.5;  % plot line width (missing in the original listing)

x = [0,45,90,135,180];
y = [667,661,757,871,1210];

figure(1)
plot(x,y,'*b','linewidth',line_width); hold on
xlabel('x')
ylabel('y')
grid on
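This particular least-squares problem also has a closed-form solution via the normal equations, which gives a useful cross-check on the solver output. A plain-Python sketch using the same data points as the slides:

```python
# Closed-form least-squares fit y = m*x + c via the normal equations,
# using the same data points as the slides.
x = [0, 45, 90, 135, 180]
y = [667, 661, 757, 871, 1210]

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

# Normal equations for the slope and intercept:
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = (sy - m * sx) / n

# Sum of squared errors at the optimum:
sse = sum((yi - (m * xi + c)) ** 2 for xi, yi in zip(x, y))

print(m, c, round(sse))   # matches the optimizer: m = 2.88, c = 574, phi ~ 38527
```

The values m = 2.88, c = 574, and the minimum objective value of about 38527 agree with what the CasADi/Ipopt solution reports later in the slides.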
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

m = SX.sym('m'); % decision variable (slope)
c = SX.sym('c'); % decision variable (y-intercept)

obj = 0;
for i = 1:length(x)
    obj = obj + (y(i) - (m*x(i)+c))^2;
end

This builds the least-squares objective
    min_{m,c} Σ_{i=1}^{ndata} ( y(i) − (m·x(i) + c) )²

>> obj =
(((((sq((667-c))+sq((661-((45*m)+c))))+sq((757-((90*m)+c))))+sq((871-((135*m)+c))))+sq((1210-((180*m)+c)))))
opts = struct;
opts.ipopt.max_iter = 1000;
opts.ipopt.print_level = 0; %0,3
opts.print_time = 0; %0,1
opts.ipopt.acceptable_tol = 1e-8; % optimality convergence tolerance
opts.ipopt.acceptable_obj_change_tol = 1e-6;
args = struct;
args.lbx = -inf; % unconstrained optimization
args.ubx = inf;  % unconstrained optimization
args.lbg = -inf; % unconstrained optimization
args.ubg = inf;  % unconstrained optimization

x_line = [0:1:180];
m_sol = x_sol(1)
c_sol = x_sol(2)
y_line = m_sol*x_line + c_sol;

figure(1)
plot(x_line,y_line,'-k','linewidth',line_width); hold on
legend('Data points','y = 2.88 \cdot x + 574')

>> m_sol =
     2.8800
c_sol =
     574.0000
min_value =
     38527
Result visualization: the fitted line y = 2.88·x + 574 plotted over the data points.
Objective visualization:
Recall:
m = SX.sym('m'); % decision variable (slope)
c = SX.sym('c'); % decision variable (y-intercept)
obj = 0;
for i = 1:length(x)
    obj = obj + (y(i) - (m*x(i)+c))^2;
end

% Function object in CasADi
obj_fun = Function('obj_fun',{m,c},{obj});

m_range = [-1:0.5:6];
c_range = [400:50:800];
obj_plot_data = [];
[mm,cc] = meshgrid(m_range,c_range);
for n = 1:1:size(mm,1)
    for k = 1:1:size(mm,2)
        obj_plot_data(n,k) = full(obj_fun(mm(n,k),cc(n,k)));
    end
end

figure
surf(mm,cc,obj_plot_data); hold on
xlabel('(m)')
ylabel('(c)')
zlabel('(\phi)')
box on
ax = gca;
ax.BoxStyle = 'full';

>> min(min(obj_plot_data))
ans =
     39690
The minimum value from optimization is 38527; the minimum over the plotting grid (39690) is slightly larger because the grid does not contain the exact optimizer (m, c) = (2.88, 574).
• Model Predictive Control (MPC) (aka Receding/Moving Horizon Control)
Single-input single-output simple example. The system model is
    x(k+1) = f(x(k), u(k))
• At decision instant k, measure the state x(k).
• Based on x(k), compute the (optimal) sequence of controls over a prediction horizon N:
    u*(x(k)) := (u*(k), u*(k+1), ..., u*(k+N−1))
• Apply the control u*(k) over the sampling period [k, k+1].
• Repeat the same steps at the next decision instant.
(Slide figure: a past/future timeline showing the reference x^r, the measured state trajectory x(·), the applied inputs u(·), the predicted state and input trajectories over [k, k+N−1], and the first input u*(k) that is actually applied.)

MPC Strategy Summary¹:
1. Prediction
2. Online optimization
3. Receding horizon implementation
¹ Mark Cannon (2016)
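The three-step receding-horizon loop above can be sketched end to end for a toy problem. This is an illustrative plain-Python stand-in, not the slides' CasADi/Ipopt implementation: the system is a hypothetical scalar integrator x(k+1) = a·x(k) + b·u(k), and the finite-horizon OCP is solved by finite-difference gradient descent on the input sequence. All parameter values (a, b, q, r, N, step sizes) are arbitrary choices for the demo.

```python
# Minimal receding-horizon (MPC) sketch for a scalar linear system.
def predict(x0, u_seq, a, b):
    """Roll the model x(k+1) = a*x(k) + b*u(k) forward over the horizon."""
    xs = [x0]
    for u in u_seq:
        xs.append(a * xs[-1] + b * u)
    return xs

def cost(x0, u_seq, a, b, x_ref, q, r):
    """Sum of stage costs q*(x - x_ref)^2 + r*u^2 over the horizon."""
    xs = predict(x0, u_seq, a, b)
    return sum(q * (x - x_ref) ** 2 for x in xs[1:]) + sum(r * u * u for u in u_seq)

def solve_ocp(x0, a, b, x_ref, q, r, N, iters=200, step=0.05):
    """Solve the N-step OCP by finite-difference gradient descent."""
    u = [0.0] * N
    eps = 1e-6
    for _ in range(iters):
        base = cost(x0, u, a, b, x_ref, q, r)
        grad = []
        for i in range(N):
            up = list(u)
            up[i] += eps
            grad.append((cost(x0, up, a, b, x_ref, q, r) - base) / eps)
        u = [ui - step * gi for ui, gi in zip(u, grad)]
    return u

# Closed-loop MPC: apply only the first input, then re-measure and re-optimize.
a, b, x_ref, q, r, N = 1.0, 1.0, 2.0, 1.0, 0.1, 5
x = 0.0
for k in range(20):
    u_star = solve_ocp(x, a, b, x_ref, q, r, N)   # prediction + online optimization
    x = a * x + b * u_star[0]                      # receding horizon: apply u*(k) only

print(round(x, 3))
```

After 20 closed-loop steps the state has settled at the reference x^r = 2, even though each optimization only looks 5 steps ahead; the re-optimization at every instant is what provides feedback.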
• MPC Mathematical Formulation
Running (stage) costs: characterize the control objective
    ℓ(x, u) = ||x − x^r||²_Q + ||u − u^r||²_R
where x^r and u^r are the state and input references, and Q and R are positive (semi-)definite weighting matrices.
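The weighted quadratic stage cost above can be written out directly. A plain-Python sketch for small state/input vectors, restricted to diagonal Q and R for brevity (the slides only require them to be positive (semi-)definite); the example numbers are arbitrary:

```python
# Quadratic MPC stage cost l(x,u) = ||x - xr||_Q^2 + ||u - ur||_R^2
# with diagonal weighting matrices Q and R, given as lists of weights.
def stage_cost(x, u, x_ref, u_ref, q_diag, r_diag):
    state_term = sum(qi * (xi - xri) ** 2
                     for qi, xi, xri in zip(q_diag, x, x_ref))
    input_term = sum(ri * (ui - uri) ** 2
                     for ri, ui, uri in zip(r_diag, u, u_ref))
    return state_term + input_term

# Example: robot pose [x, y, theta], inputs [v, omega], reference at the origin.
l = stage_cost([1.0, 2.0, 0.5], [0.6, 0.1],
               [0.0, 0.0, 0.0], [0.0, 0.0],
               [1.0, 5.0, 0.1], [0.5, 0.05])
print(l)
```

Larger diagonal entries of Q penalize deviation of the corresponding state from its reference more heavily; R trades off control effort against tracking.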
• About MPC
• Considered System and Control Problem (Differential drive robots)

System state vector in the inertial frame:

\[ \mathbf{x} = [x \;\; y \;\; \theta]^T \]

• Posture rate as a function of the right and left wheel speeds:

\[ \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \tfrac{r(\omega_r + \omega_l)}{2} \cos\theta \\ \tfrac{r(\omega_r + \omega_l)}{2} \sin\theta \\ \tfrac{r(\omega_r - \omega_l)}{2D} \end{bmatrix} \]

• Linear and angular velocities of the robot:

\[ \begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} \tfrac{r(\omega_r + \omega_l)}{2} \\ \tfrac{r(\omega_r - \omega_l)}{2D} \end{bmatrix} \]

• Pose rate as a function of the robot's linear and angular velocities:

\[ \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v \cos\theta \\ v \sin\theta \\ \omega \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix} \]

Fig. 1. (a) Differential drive robot kinematics. (b) Pioneer 3-AT research platform.
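The wheel-to-body velocity relations above can be checked numerically; a small Python sketch (the function name and the sample wheel radius r and half-axle length D are illustrative):

```python
# Differential-drive relations: v = r*(w_r + w_l)/2, omega = r*(w_r - w_l)/(2*D),
# with wheel radius r and half-axle length D (illustrative values below).
def body_velocities(w_r, w_l, r, D):
    v = r * (w_r + w_l) / 2.0
    omega = r * (w_r - w_l) / (2.0 * D)
    return v, omega

# Equal wheel speeds give straight motion (omega = 0);
# opposite wheel speeds give turning in place (v = 0).
print(body_velocities(2.0, 2.0, r=0.1, D=0.2))   # (0.2, 0.0)
print(body_velocities(2.0, -2.0, r=0.1, D=0.2))  # (0.0, 1.0)
```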
• Considered System and Control Problem (Differential drive robots)

\[ \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v \cos\theta \\ v \sin\theta \\ \omega \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix} \]
• Considered System and Control Problem (Differential drive robots)

• Control objectives

Point stabilization:

\[ \begin{bmatrix} x_r(t) \\ y_r(t) \\ \theta_r(t) \end{bmatrix} = \begin{bmatrix} x_d \\ y_d \\ \theta_d \end{bmatrix}, \quad \forall t \]

• Reference values of the state vector are constant over the control period.

Trajectory tracking:

\[ \begin{bmatrix} x_r(t) \\ y_r(t) \\ \theta_r(t) \end{bmatrix} = \begin{bmatrix} x_d(t) \\ y_d(t) \\ \theta_d(t) \end{bmatrix} \]
• Control objectives

[Figure: closed-loop block diagram. Reference values and estimated robot states are compared to form the error; the MPC controller drives the robot, sensors measure it, and a state-estimation block feeds the states back.]
• Model Predictive Control (Differential drive robots – point stabilization)

System model:

\[ \dot{\mathbf{x}}(t) = f_c(\mathbf{x}(t), \mathbf{u}(t)), \qquad \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v \cos\theta \\ v \sin\theta \\ \omega \end{bmatrix} \]

Euler discretization with sampling time \(\Delta T\) gives \(\mathbf{x}(k+1) = f(\mathbf{x}(k), \mathbf{u}(k))\):

\[ \begin{bmatrix} x(k+1) \\ y(k+1) \\ \theta(k+1) \end{bmatrix} = \begin{bmatrix} x(k) \\ y(k) \\ \theta(k) \end{bmatrix} + \Delta T \begin{bmatrix} v(k) \cos\theta(k) \\ v(k) \sin\theta(k) \\ \omega(k) \end{bmatrix} \]

MPC controller, running (stage) cost:

\[ \ell(\mathbf{x}, \mathbf{u}) = \| \mathbf{x}_u - \mathbf{x}^{\mathrm{ref}} \|_Q^2 + \| \mathbf{u} - \mathbf{u}^{\mathrm{ref}} \|_R^2 \]
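One Euler step of the discretized unicycle model can be sketched in Python (an illustrative helper; dt plays the role of the sampling time ΔT):

```python
import math

# One Euler step of the unicycle model, mirroring the discretization above.
def euler_step(state, control, dt):
    x, y, theta = state
    v, omega = control
    return (x + dt * v * math.cos(theta),
            y + dt * v * math.sin(theta),
            theta + dt * omega)

# Driving straight along the x-axis at v = 0.5 m/s for dt = 0.2 s:
print(euler_step((0.0, 0.0, 0.0), (0.5, 0.0), 0.2))  # (0.1, 0.0, 0.0)
```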
• OCP and NLP

Optimal control problem (OCP), e.g. the NMPC online optimization problem:

\[ \min_{\mathbf{u}} \; J_N(\mathbf{x}_0, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(\mathbf{x}_u(k), \mathbf{u}(k)) \]

\[ \text{s.t.} \quad \mathbf{x}_u(k+1) = f(\mathbf{x}_u(k), \mathbf{u}(k)), \quad \mathbf{x}_u(0) = \mathbf{x}_0, \quad \mathbf{u}(k) \in U \;\; \forall k \in [0, N-1], \quad \mathbf{x}_u(k) \in X \;\; \forall k \in [0, N] \]
• OCP and NLP1

Shooting methods transcribe the OCP into an NLP:
1. Single shooting
2. Multiple shooting
3. …

OCP:

\[ \min_{\mathbf{u}} \; J_N(\mathbf{x}_0, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(\mathbf{x}_u(k), \mathbf{u}(k)) \]
\[ \text{s.t.} \quad \mathbf{x}_u(k+1) = f(\mathbf{x}_u(k), \mathbf{u}(k)), \quad \mathbf{x}_u(0) = \mathbf{x}_0, \quad \mathbf{u}(k) \in U \;\; \forall k \in [0, N-1], \quad \mathbf{x}_u(k) \in X \;\; \forall k \in [0, N] \]

NLP:

\[ \min_{\mathbf{w}} \; \Phi(\mathbf{w}) \quad \text{s.t.} \quad g_1(\mathbf{w}) \le 0, \quad g_2(\mathbf{w}) = 0 \]
CasADi1 https://web.casadi.org/get/
• Single Shooting Implementation using CasADi

The SX data type is used to represent matrices whose elements consist of symbolic expressions.

% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

T = 0.2; % sampling time [s]
N = 3;   % prediction horizon
rob_diam = 0.3;

v_max = 0.6; v_min = -v_max;
omega_max = pi/4; omega_min = -omega_max;

x = SX.sym('x'); y = SX.sym('y'); theta = SX.sym('theta');
states = [x;y;theta]; n_states = length(states);

v = SX.sym('v'); omega = SX.sym('omega');
controls = [v;omega]; n_controls = length(controls);
rhs = [v*cos(theta);v*sin(theta);omega]; % system r.h.s

f = Function('f',{states,controls},{rhs}); % nonlinear mapping function f(x,u)
U = SX.sym('U',n_controls,N);        % decision variables (controls)
P = SX.sym('P',n_states + n_states); % parameters: initial state and reference state
X = SX.sym('X',n_states,(N+1));
% A matrix that represents the states over the optimization problem.

>> f.print_dimensions
Number of inputs: 2
  Input 0 ("i0"): 3x1
  Input 1 ("i1"): 2x1
Number of outputs: 1
  Output 0 ("o0"): 3x1

U collects one control u_k = [v_k; omega_k] per column:
>> U
U =
[[U_0, U_2, U_4],
 [U_1, U_3, U_5]]

>> P
P =
[P_0, P_1, P_2, P_3, P_4, P_5]

X collects one state x_k = [x_k; y_k; theta_k] per column, from x_0 to x_N:
>> X
X =
[[X_0, X_3, X_6, X_9],
 [X_1, X_4, X_7, X_10],
 [X_2, X_5, X_8, X_11]]
% compute solution symbolically
X(:,1) = P(1:3); % initial state
for k = 1:N
    st = X(:,k);  con = U(:,k);
    f_value = f(st,con);
    st_next = st + (T*f_value);
    X(:,k+1) = st_next;
end
% this function gets the optimal state trajectory knowing the optimal solution
ff = Function('ff',{U,P},{X});

With single shooting, every predicted state is a composition of the model with the controls and the initial state:

\[ \mathbf{X} = \left[ \mathbf{x}_0, \; f(\mathbf{x}_0, \mathbf{u}_0), \; \dots, \; f(\mathbf{x}_{N-1}, \mathbf{u}_{N-1}) \right], \qquad f(\mathbf{x}_{N-1}, \mathbf{u}_{N-1}) = f(f(\mathbf{x}_{N-2}, \mathbf{u}_{N-2}), \mathbf{u}_{N-1}) \]

obj = 0; % Objective function
g = [];  % constraints vector

Q = zeros(3,3); Q(1,1) = 1; Q(2,2) = 5; Q(3,3) = 0.1; % weighing matrices (states)
R = zeros(2,2); R(1,1) = 0.5; R(2,2) = 0.05; % weighing matrices (controls)
% compute objective
for k = 1:N
    st = X(:,k);  con = U(:,k);
    obj = obj + (st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
end

This accumulates the running cost into the horizon cost

\[ J_N(\mathbf{x}, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(\mathbf{x}_u(k), \mathbf{u}(k)), \qquad \ell(\mathbf{x}, \mathbf{u}) = \| \mathbf{x}_u - \mathbf{x}^r \|_Q^2 + \| \mathbf{u} - \mathbf{u}^r \|_R^2, \]

where P(4:6) holds the reference state; the reference control is zero here, so the control penalty reduces to con'*R*con.

% compute constraints (box constraints due to the map margins)
for k = 1:N+1
    g = [g ; X(1,k)]; % state x
    g = [g ; X(2,k)]; % state y
end
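The same single-shooting idea, rolling the model forward under a control sequence while accumulating the stage cost, can be sketched numerically in Python (function names, the constant control sequence, and the weights are illustrative, not part of the workshop code):

```python
import math

# Single-shooting sketch: starting from x0, simulate forward under a control
# sequence and accumulate the stage cost, as the symbolic loops above do.
def rollout_cost(x0, u_seq, x_ref, dt, Q_diag, R_diag):
    x = list(x0)
    traj = [tuple(x)]
    cost = 0.0
    for v, omega in u_seq:
        cost += sum(q * (xi - ri) ** 2 for q, xi, ri in zip(Q_diag, x, x_ref))
        cost += R_diag[0] * v ** 2 + R_diag[1] * omega ** 2  # u_ref = 0, as above
        x = [x[0] + dt * v * math.cos(x[2]),
             x[1] + dt * v * math.sin(x[2]),
             x[2] + dt * omega]
        traj.append(tuple(x))
    return cost, traj

cost, traj = rollout_cost(x0=[0.0, 0.0, 0.0],
                          u_seq=[(0.5, 0.0)] * 3,  # N = 3 constant controls
                          x_ref=[1.5, 1.5, 0.0], dt=0.2,
                          Q_diag=[1.0, 5.0, 0.1], R_diag=[0.5, 0.05])
print(len(traj))  # N + 1 = 4 states, like the columns of the CasADi matrix X
```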
% make the decision variables one column vector
OPT_variables = reshape(U,2*N,1);
nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);

opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level = 0; %0,3
opts.print_time = 0;
opts.ipopt.acceptable_tol = 1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;

solver = nlpsol('solver', 'ipopt', nlp_prob, opts);

>> U
U =
[[U_0, U_2, U_4],
 [U_1, U_3, U_5]]

>> OPT_variables
OPT_variables =
[U_0, U_1, U_2, U_3, U_4, U_5]

>> nlp_prob
nlp_prob =
  struct with fields:
    f: [1×1 casadi.SX]
    x: [6×1 casadi.SX]
    g: [8×1 casadi.SX]
    p: [6×1 casadi.SX]

[Figure: square map in the inertial frame (XI, YI) with margins at ±2; the robot position (x, y) must stay inside.]

args = struct;
% inequality constraints (state constraints)
args.lbg = -2; % lower bound of the states x and y
args.ubg = 2;  % upper bound of the states x and y
% input constraints
args.lbx(1:2:2*N-1,1) = v_min;  args.lbx(2:2:2*N,1) = omega_min;
args.ubx(1:2:2*N-1,1) = v_max;  args.ubx(2:2:2*N,1) = omega_max;
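The interleaved bound layout above (velocity bound, then omega bound, repeated N times over the stacked decision vector [v_0, ω_0, v_1, ω_1, …]) can be sketched in Python (an illustrative helper, not part of the workshop code):

```python
import math

# Build interleaved input bounds for the stacked decision vector
# [v_0, w_0, v_1, w_1, ...], matching the 1:2:2N-1 / 2:2:2N indexing
# in the MATLAB snippet above.
def input_bounds(N, v_min, v_max, omega_min, omega_max):
    lbx, ubx = [], []
    for _ in range(N):
        lbx += [v_min, omega_min]
        ubx += [v_max, omega_max]
    return lbx, ubx

lbx, ubx = input_bounds(3, -0.6, 0.6, -math.pi / 4, math.pi / 4)
print(len(lbx))  # 2*N = 6 entries: v bound then omega bound, repeated
```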
% THE SIMULATION LOOP SHOULD START FROM HERE
%-------------------------------------------
t0 = 0;
x0 = [0 ; 0 ; 0.0];     % initial condition
xs = [1.5 ; 1.5 ; 0.0]; % reference posture

xx(:,1) = x0; % xx contains the history of states
t(1) = t0;
u0 = zeros(N,2); % two control inputs
sim_tim = 20; % maximum simulation time

% Start MPC
mpciter = 0;
xx1 = [];
u_cl = [];

The helper shift applies the first optimal control for one sampling period, simulates the model one step, and shifts the control sequence to initialize the next optimization:

function [t0, x0, u0] = shift(T, t0, x0, u, f)
st = x0;
con = u(1,:)';
f_value = f(st,con);
st = st + (T*f_value);
x0 = full(st);

t0 = t0 + T;
u0 = [u(2:size(u,1),:);u(size(u,1),:)];
end
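The receding-horizon shift can be mirrored in Python (a hedged sketch of the same bookkeeping; the trivial dynamics used to exercise it is illustrative):

```python
# Shift step for warm starting: apply the first control for one sampling
# period via Euler integration, then shift the control sequence, repeating
# its last entry (mirrors the shift function above; dynamics f is passed in).
def shift(dt, t0, x0, u_seq, f):
    x_next = [xi + dt * dxi for xi, dxi in zip(x0, f(x0, u_seq[0]))]
    u0 = u_seq[1:] + [u_seq[-1]]  # shifted warm start
    return t0 + dt, x_next, u0

# Trivial dynamics (xdot = [v, omega, 0]) just to exercise the bookkeeping:
f = lambda x, u: [u[0], u[1], 0.0]
t0, x0, u0 = shift(0.2, 0.0, [0.0, 0.0, 0.0],
                   [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)], f)
print(t0, x0, u0)
```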
• We did single shooting to transform the OCP into an NLP1

Single shooting eliminates the state trajectory by forward simulation, F(w, x_0, t_k), so only the control sequence w remains as decision variables. The OCP constraints

\[ \text{s.t.} \quad \mathbf{x}_u(k+1) = f(\mathbf{x}_u(k), \mathbf{u}(k)), \quad \mathbf{x}_u(0) = \mathbf{x}_0, \quad \mathbf{u}(k) \in U \;\; \forall k \in [0, N-1], \quad \mathbf{x}_u(k) \in X \;\; \forall k \in [0, N] \]

collapse into the NLP

\[ \min_{\mathbf{w}} \; \Phi(F(\mathbf{w}, \mathbf{x}_0, t_k), \mathbf{w}) \quad \text{s.t.} \quad g_1(F(\mathbf{w}, \mathbf{x}_0, t_k), \mathbf{w}) \le 0 \]

1 Joel Andersson, Introduction to CasADi, 2015.
• Multiple Shooting to transform the OCP into an NLP1

Key idea: break the system integration into short time intervals, i.e. use the system model as a state constraint at each optimization step.

OCP:

\[ \min_{\mathbf{u}} \; J_N(\mathbf{x}_0, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(\mathbf{x}_u(k), \mathbf{u}(k)) \]
\[ \text{s.t.} \quad \mathbf{x}_u(k+1) = f(\mathbf{x}_u(k), \mathbf{u}(k)), \quad \mathbf{x}_u(0) = \mathbf{x}_0, \quad \mathbf{u}(k) \in U \;\; \forall k \in [0, N-1], \quad \mathbf{x}_u(k) \in X \;\; \forall k \in [0, N] \]

• x will become decision variables in the optimization problem as well as u.
• Apply additional path constraints at each optimization step (shooting step), i.e.

\[ \mathbf{x}(k+1) - f(\mathbf{x}(k), \mathbf{u}(k)) = 0 \]

NLP:

Problem decision variables: \( \mathbf{w} = [\mathbf{u}_0, \dots, \mathbf{u}_{N-1}, \mathbf{x}_0, \dots, \mathbf{x}_N] \)

\[ \min_{\mathbf{w}} \; \Phi(\mathbf{w}) \quad \text{subject to} \quad g_1(\mathbf{w}) \le 0, \quad g_2(\mathbf{w}) = 0, \]

with the path constraints stacked at every shooting node and the model enforced as equality constraints:

\[ g_1(\mathbf{w}) = \begin{bmatrix} g_1(\mathbf{x}_0, \mathbf{u}_0) \\ \vdots \\ g_1(\mathbf{x}_{N-1}, \mathbf{u}_{N-1}) \\ g_1(\mathbf{x}_N) \end{bmatrix} \le 0, \qquad g_2(\mathbf{w}) = \begin{bmatrix} \bar{\mathbf{x}}_0 - \mathbf{x}_0 \\ f(\mathbf{x}_0, \mathbf{u}_0) - \mathbf{x}_1 \\ \vdots \\ f(\mathbf{x}_{N-1}, \mathbf{u}_{N-1}) - \mathbf{x}_N \end{bmatrix} = 0 \]

1 Joel Andersson, Introduction to CasADi, 2015.
• Multiple Shooting to transform the OCP into an NLP1
Key idea is to break down the system integration into short time intervals, i.e. use
the system model as a state constraint at each optimization step.
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 43
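The multiple-shooting construction above can be sketched in a few lines. The deck's code is MATLAB; as a language-neutral illustration, here is a minimal Python version using the same Euler-discretized unicycle model with T = 0.2 and N = 3. The helper names f and shooting_gaps are illustrative, not part of the original code:

```python
import math

def f(x, u, T=0.2):
    # Euler-discretized unicycle model: x = (px, py, theta), u = (v, omega)
    px, py, th = x
    v, w = u
    return (px + T * v * math.cos(th),
            py + T * v * math.sin(th),
            th + T * w)

def shooting_gaps(x0, states, controls):
    # Multiple-shooting equality constraints g2(w): first the initial-condition
    # gap x(0) - x0, then one model gap f(x(k), u(k)) - x(k+1) per interval.
    # All gaps must be zero at a feasible solution.
    gaps = [tuple(a - b for a, b in zip(states[0], x0))]
    for k, u in enumerate(controls):
        pred = f(states[k], u)
        gaps.append(tuple(p - s for p, s in zip(pred, states[k + 1])))
    return gaps

# A trajectory generated by rolling the model forward satisfies all gaps:
x0 = (0.0, 0.0, 0.0)
controls = [(0.6, 0.1)] * 3
states = [x0]
for u in controls:
    states.append(f(states[-1], u))
print(all(abs(c) < 1e-12 for g in shooting_gaps(x0, states, controls) for c in g))  # True
```

In the NLP, the solver adjusts both states and controls; the gaps are only closed at convergence, which is exactly what makes multiple shooting easier to initialize than single shooting.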
MATLAB Code using CasADi package (Multiple Shooting)

% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*

T = 0.2; %[s]
N = 3; % prediction horizon
rob_diam = 0.3;

Stage cost and objective:

    ℓ(x, u) = ‖x_u − x_r‖²_Q + ‖u − u_r‖²_R,
    J_N(x, u) = Σ_{k=0}^{N−1} ℓ(x_u(k), u(k))
st = X(:,1); % initial state
g = [g;st-P(1:3)]; % initial condition constraints
for k = 1:N
st = X(:,k); con = U(:,k);
obj = obj+(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
st_next = X(:,k+1);
f_value = f(st,con);
st_next_euler = st+ (T*f_value);
g = [g;st_next-st_next_euler]; % multiple-shooting equality constraints
end
args = struct;
args.lbg(1:3*(N+1)) = 0; % equality constraints
args.ubg(1:3*(N+1)) = 0; % equality constraints

% Start MPC
mpciter = 0; xx1 = []; u_cl=[];
• Adding an obstacle as a path constraint

[Figure: robot at (x, y) and a circular obstacle centred at (x_o, y_o) in the inertial frame O–X_I–Y_I.]

Keep the robot at least the two safety radii away from the obstacle centre:

    √((x − x_o)² + (y − y_o)²)  ≥  r_r + r_o

Rearranged into the standard "≤ 0" form used by NLP solvers:

    −√((x − x_o)² + (y − y_o)²) + (r_r + r_o)  ≤  0
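The rearranged "≤ 0" form of the obstacle constraint can be checked numerically. A small Python helper (obstacle_constraint is an illustrative name, not from the deck's MATLAB code) evaluates it for a given robot and obstacle position:

```python
import math

def obstacle_constraint(x, y, xo, yo, r_rob, r_obs):
    # Collision-avoidance path constraint in the "<= 0" form used by NLP
    # solvers: negative distance to the obstacle centre plus the two radii.
    # Values <= 0 are safe; values > 0 mean the safety circles overlap.
    return -math.hypot(x - xo, y - yo) + (r_rob + r_obs)

# Robot far from an obstacle at (0.5, 0.5): constraint satisfied (negative).
print(obstacle_constraint(0.0, 0.0, 0.5, 0.5, 0.15, 0.15) <= 0)  # True
# Robot sitting on the obstacle centre: constraint violated (positive).
print(obstacle_constraint(0.5, 0.5, 0.5, 0.5, 0.15, 0.15) > 0)   # True
```

In the MPC below, one such constraint is appended to g for every shooting node, so the whole predicted trajectory stays clear of the obstacle.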
MATLAB Code using CasADi package
% CasADi v3.4.5
addpath('C:\Users\mehre\OneDrive\Desktop\CasADi\casadi-windows-matlabR2016a-v3.4.5')
import casadi.*
T = 0.2; %[s]
N = 14; % prediction horizon
rob_diam = 0.3;
X = SX.sym('X',n_states,(N+1));
% A vector that represents the states over the optimization problem.
st = X(:,1); % initial state
g = [g;st-P(1:3)]; % initial condition constraints
for k = 1:N
st = X(:,k); con = U(:,k);
obj = obj+(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
st_next = X(:,k+1);
f_value = f(st,con);
st_next_euler = st+ (T*f_value);
g = [g;st_next-st_next_euler]; % multiple-shooting equality constraints
end

% Add constraints for collision avoidance:
%   -sqrt((x - xo)^2 + (y - yo)^2) + (r_r + r_o) <= 0
obs_x = 0.5; % meters
obs_y = 0.5; % meters
obs_diam = 0.3; % meters
for k = 1:N+1 % one collision-avoidance constraint per shooting node
g = [g ; -sqrt((X(1,k)-obs_x)^2+(X(2,k)-obs_y)^2) + (rob_diam/2 + obs_diam/2)];
end

% make the decision variable one column vector
OPT_variables = [reshape(X,3*(N+1),1);reshape(U,2*N,1)];
nlp_prob = struct('f', obj, 'x', OPT_variables, 'g', g, 'p', P);
opts = struct;
opts.ipopt.max_iter = 100;
opts.ipopt.print_level = 0; %0,3
opts.print_time = 0;
opts.ipopt.acceptable_tol = 1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;
solver = nlpsol('solver', 'ipopt', nlp_prob, opts);
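The reshape calls above pack the state and control trajectories into a single decision vector, and the solution is later unpacked the same way. A minimal Python sketch of that packing scheme (states first, one node at a time, mirroring MATLAB's column-major reshape); pack and unpack are illustrative names:

```python
def pack(states, controls):
    # Flatten states then controls into one decision vector w, mirroring
    # OPT_variables = [reshape(X,3*(N+1),1); reshape(U,2*N,1)].
    return [v for s in states for v in s] + [v for u in controls for v in u]

def unpack(w, n, m, N):
    # Recover the N+1 states (dimension n) and N controls (dimension m)
    # from the flat decision vector.
    xs = [tuple(w[i*n:(i+1)*n]) for i in range(N + 1)]
    off = n * (N + 1)
    us = [tuple(w[off + i*m : off + (i+1)*m]) for i in range(N)]
    return xs, us

states = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1)]
controls = [(0.6, 0.1)]
w = pack(states, controls)
xs, us = unpack(w, n=3, m=2, N=1)
print(xs == states and us == controls)  # True
```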
args = struct;
args.lbg(1:3*(N+1)) = 0; % equality constraints (multiple shooting)
args.ubg(1:3*(N+1)) = 0; % equality constraints (multiple shooting)
args.lbg(3*(N+1)+1 : 3*(N+1)+(N+1)) = -inf; % obstacle inequality constraints:
args.ubg(3*(N+1)+1 : 3*(N+1)+(N+1)) = 0;    %   -sqrt(...) + (r_r + r_o) <= 0

% Start MPC
mpciter = 0;
xx1 = [];
u_cl=[];
MATLAB DEMO
Agenda
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples.
Part I
Model Predictive Control (MPC)
▪ What is (MPC)?
▪ Mathematical Formulation of MPC.
▪ About MPC and Related Issues.
MPC Implementation to Mobile Robots control
▪ Considered System and Control Problem.
▪ OCP and NLP
▪ Single Shooting Implementation using CasADi.
▪ Multiple Shooting Implementation using CasADi.
▪ Adding obstacle (path constraints) + implementation
Part II
MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots
Conclusions
▪ Concluding remarks about MPC and MHE.
▪ What is NEXT?
• Moving Horizon Estimation (MHE) (aka Receding Horizon Estimation)

Disturbed system model:

    x(k+1) = f(x(k), u(k)) + ν_x
    y(k) = h(x(k)) + ν_y = x(k) + ν_y

(in this introductory setting, h(x) = x, i.e. the full state is measured).

[Figure: timeline of the sensor measurements y over the window from k − N_MHE to k.]

▪ At a certain time step k, acquire the past measurements over a window of size N_MHE, i.e. from k − N_MHE up to k.
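The window-acquisition step described above amounts to simple slicing. A minimal Python sketch (mhe_window is an illustrative name) of collecting the N_MHE + 1 measurements and the N_MHE inputs that define one MHE problem:

```python
def mhe_window(measurements, inputs, k, N_mhe):
    # At time k, collect the last N_mhe+1 measurements y(k-N_mhe .. k) and the
    # N_mhe inputs u(k-N_mhe .. k-1) that connect them; together they
    # parameterize one MHE optimization problem.
    y_win = measurements[k - N_mhe:k + 1]
    u_win = inputs[k - N_mhe:k]
    return y_win, u_win

ys = list(range(10))        # stand-in measurement sequence y(0..9)
us = list(range(100, 109))  # stand-in input sequence u(0..8)
y_win, u_win = mhe_window(ys, us, k=6, N_mhe=4)
print(y_win)  # [2, 3, 4, 5, 6]
print(u_win)  # [102, 103, 104, 105]
```

Note the off-by-one: a window of N_MHE intervals contains N_MHE + 1 state/measurement nodes but only N_MHE inputs, exactly as in the index ranges of the MHE formulation later.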
• Moving Horizon Estimation (MHE)

[Figure: MHE estimate plotted against the noisy sensor measurements (y) and the odometry inputs over the window from k − N_MHE to k.]

▪ Advantages of MHE
• Considered System and state estimation problem (Differential drive robots)

[Figure: closed-loop block diagram. Reference values and the estimated states form the error fed to the MPC controller, which drives the robot; the sensor output feeds the MHE state estimator, whose estimated state vector x̂ = [x̂ ŷ θ̂]ᵀ closes the loop.]
• Motion and Measurement Models (Differential drive robots)

Disturbed Motion Model:

    x(k+1) = f(x(k), ũ(k))

    [ x(k+1) ]   [ x(k) ]       [ ṽ(k) cos θ(k) ]
    [ y(k+1) ] = [ y(k) ] + T · [ ṽ(k) sin θ(k) ]
    [ θ(k+1) ]   [ θ(k) ]       [ ω̃(k)          ]

where each applied input is the nominal input plus noise, ṽ = v + ν_v and ω̃ = ω + ν_ω, with ν_v, ν_ω additive Gaussian noise of standard deviations σ_v and σ_ω.

[Figure: differential drive robot in the inertial frame O–X_I–Y_I; a beacon provides relative measurements.]
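The disturbed motion model can be simulated directly. A minimal Python sketch of one Euler step, using the same noise standard deviations as the deck's shift function (0.005 on v, deg2rad(2) on ω); step_disturbed is an illustrative name:

```python
import math, random

def step_disturbed(x, u, T=0.2, sigma_v=0.005, sigma_w=math.radians(2), rng=random):
    # One Euler step of the unicycle model with additive Gaussian noise on the
    # applied inputs; the nominal u is what the estimator later "knows" from
    # odometry, while the noisy inputs are what actually drove the robot.
    px, py, th = x
    v = u[0] + (rng.gauss(0.0, sigma_v) if sigma_v else 0.0)
    w = u[1] + (rng.gauss(0.0, sigma_w) if sigma_w else 0.0)
    return (px + T * v * math.cos(th),
            py + T * v * math.sin(th),
            th + T * w)

# With the noise switched off, the step is the nominal model:
print(step_disturbed((0.0, 0.0, 0.0), (1.0, 0.0), sigma_v=0.0, sigma_w=0.0))  # (0.2, 0.0, 0.0)
```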
• Motion and Measurement Models (Differential drive robots)

Measurement Model:

    y(k) = h(x(k)) + ν_y

    [ r ]   [ √(x² + y²)  ]   [ ν_r ]
    [ α ] = [ arctan(y/x) ] + [ ν_α ]

r: relative range measurement
α: relative bearing measurement
ν_r, ν_α: additive Gaussian noise with standard deviations σ_r and σ_α

Robot state vector: x = [x y θ]ᵀ

Observable? Yes, but ONLY when the linear speed v(·) is nonzero.

[Figure: robot and beacon (relative measurements sensor) in the inertial frame O–X_I–Y_I.]
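A minimal Python sketch of the range–bearing measurement model above (atan2 is used in place of arctan(y/x) for quadrant correctness; measure is an illustrative name, and the noise is switched off by default):

```python
import math, random

def measure(x, sigma_r=0.0, sigma_a=0.0, rng=random):
    # Range-bearing measurement of the robot relative to a beacon at the
    # origin: r = sqrt(x^2 + y^2), alpha = atan2(y, x), each optionally
    # corrupted by additive Gaussian noise (sigma_r, sigma_a).
    px, py, _ = x
    r = math.hypot(px, py) + (rng.gauss(0.0, sigma_r) if sigma_r else 0.0)
    a = math.atan2(py, px) + (rng.gauss(0.0, sigma_a) if sigma_a else 0.0)
    return r, a

# Robot at (1, 1): range sqrt(2), bearing pi/4.
print(measure((1.0, 1.0, 0.0)))
```

Note that the heading θ does not appear in h(x) at all, which is why observability of θ hinges on the motion model and a nonzero linear speed.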
• Moving Horizon Estimation (MHE) Tuning

    min_{x,u}  J_{N_MHE}(x, u)
    s.t.:  x_u(i+1) − f(x_u(i), u(i)) = 0,
           u(i) ∈ U,  ∀ i ∈ [k − N_MHE, k − 1],
           x_u(i) ∈ X,  ∀ i ∈ [k − N_MHE, k]

The MHE tuning matrices are built from the noise statistics: ν_r, ν_α are additive Gaussian measurement noise with standard deviations σ_r, σ_α, and ν_v, ν_ω are additive Gaussian input noise with standard deviations σ_v, σ_ω,

    V = [ σ_r  0  ]⁻¹        W = [ σ_v  0  ]⁻¹
        [ 0   σ_α ]              [ 0   σ_ω ]

[Figure: timeline of the sensor measurements y and odometry inputs over the window from k − N_MHE to k.]
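The tuning matrices can be built directly from the noise statistics: each channel is weighted by the inverse of its standard deviation, so noisier channels count less in the least-squares objective. A minimal Python sketch (the σ values in the usage line are illustrative, not taken from the deck):

```python
import math

def mhe_weights(sigma_r, sigma_a, sigma_v, sigma_w):
    # Diagonal MHE weighting matrices: V weights the range/bearing residuals,
    # W weights the input residuals, each entry the inverse of the channel's
    # noise standard deviation.
    V = [[1.0 / sigma_r, 0.0], [0.0, 1.0 / sigma_a]]
    W = [[1.0 / sigma_v, 0.0], [0.0, 1.0 / sigma_w]]
    return V, W

V, W = mhe_weights(sigma_r=0.1, sigma_a=math.radians(1),
                   sigma_v=0.005, sigma_w=math.radians(2))
```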
• Simulating the disturbed system and visualizing simulation results

t0 = 0;
x0 = [0.1 ; 0.1 ; 0.0]; % initial condition.
xs = [1.5 ; 1.5 ; 0.0]; % Reference posture.
xx(:,1) = x0; % xx contains the history of states
t(1) = t0;
u0 = zeros(N,2); % two control inputs for each robot
X0 = repmat(x0,1,N+1)'; % initialization of the states decision variables
sim_tim = 20; % total simulation time [s]
% Start MPC
mpciter = 0;
xx1 = [];
u_cl=[];
while(norm((x0-xs),2) > 0.05 && mpciter < sim_tim / T)
args.p = [x0;xs]; % set the values of the parameters vector
% initial value of the optimization variables
args.x0 = [reshape(X0',3*(N+1),1);reshape(u0',2*N,1)];
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
    'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
u = reshape(full(sol.x(3*(N+1)+1:end))',2,N)'; % get controls only from the solution
xx1(:,1:3,mpciter+1) = reshape(full(sol.x(1:3*(N+1)))',3,N+1)'; % get solution TRAJECTORY
u_cl = [u_cl ; u(1,:)];
t(mpciter+1) = t0;
% Apply the control and shift the solution
[t0, x0, u0] = shift(T, t0, x0, u, f);
xx(:,mpciter+2) = x0;
X0 = reshape(full(sol.x(1:3*(N+1)))',3,N+1)'; % get solution TRAJECTORY
% Shift trajectory to initialize the next step
X0 = [X0(2:end,:);X0(end,:)];
mpciter = mpciter + 1;
end

The shift function applies the first control (with added actuation noise) and advances the state:

function [t0, x0, u0] = shift(T, t0, x0, u, f)
% add noise to the control actions before applying them
con_cov = diag([0.005 deg2rad(2)]).^2;
con = u(1,:)' + sqrt(con_cov)*randn(2,1);
st = x0;
f_value = f(st,con);
st = st + (T*f_value);
x0 = full(st);
t0 = t0 + T;
u0 = [u(2:size(u,1),:);u(size(u,1),:)]; % shift the control action
end
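The warm-start shift at the end of the loop (X0 = [X0(2:end,:); X0(end,:)] and the analogous control shift inside the shift function) can be sketched in Python; warm_start is an illustrative name:

```python
def warm_start(X_prev, U_prev):
    # Shift the previous optimal trajectory by one step to initialize the next
    # MPC solve: drop the first entry and repeat the last, mirroring the
    # MATLAB lines X0 = [X0(2:end,:); X0(end,:)] and the control shift.
    X0 = X_prev[1:] + [X_prev[-1]]
    U0 = U_prev[1:] + [U_prev[-1]]
    return X0, U0

X0, U0 = warm_start([(0, 0, 0), (1, 0, 0), (2, 0, 0)], [(0.6, 0.1), (0.6, 0.0)])
print(X0)  # [(1, 0, 0), (2, 0, 0), (2, 0, 0)]
print(U0)  # [(0.6, 0.0), (0.6, 0.0)]
```

This keeps the decision-variable dimensions fixed while giving the solver an initial guess that is already close to feasible, which is what makes the receding-horizon loop fast in practice.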
• Simulating the disturbed system and visualizing simulation results

In this presentation, we perform the state estimation offline, i.e. we simulate the disturbed system first, synthesize the measurements, and finally apply the state estimator (MHE).

[Figures: closed-loop simulation results of the disturbed system.]
• Implementation of MHE using MATLAB + CasADi Package

% The following two matrices contain what we know about the system, i.e.
% the nominal control actions applied (measured) and the range and bearing
% measurements.
%-------------------------------------------------------------------------
u_cl;
y_measurements;

Sample data (columns of u_cl: v, ω; columns of y_measurements: r, α):

u_cl =               y_measurements =
0.6000   0.7854      0.3438   0.7066
0.6000   0.7854      0.4655   0.4368
0.6000   0.7854      0.4598   0.2791
0.6000   0.7854      0.4211   0.3232
0.6000   0.7854      0.6413   0.3020
0.6000   0.7854      0.7621   0.3619
0.6000   0.7854      0.8330   0.4820
0.6000   0.7373      0.9660   0.5572
0.6000   0.0666      1.1363   0.5600
0.6000  -0.1554      1.1598   0.6155
0.6000  -0.3171      1.1015   0.7358
0.6000  -0.4021      1.3396   0.7475
0.6000  -0.4870      1.5397   0.8035
0.6000  -0.5883      1.5984   0.8494
0.6000  -0.6573      1.7741   0.8272
0.5870  -0.6576      1.7323   0.8000
0.3723  -0.5833      2.0366   0.7612
0.2422  -0.4932      1.9872   0.7414
0.1599  -0.3761      2.1376   0.8318
0.1081  -0.3184      2.0642   0.7928
0.0741  -0.2628      1.8335   0.8108
0.0483  -0.2088      1.8759   0.7057
0.0297  -0.1514      2.1112   0.7509
0.0201  -0.1320      2.1490   0.8060
0.0119  -0.1031      2.1969   0.7570
0.0073  -0.0791      2.1573   0.7842
0.0037  -0.0647      2.1962   0.7988

T = 0.2; %[s]
N_MHE = size(y_measurements,1)-1; % Estimation horizon

First, we will estimate the whole closed-loop trajectory in one MHE step.
% Decision variables
U = SX.sym('U',n_controls,N_MHE); %(controls)
X = SX.sym('X',n_states,(N_MHE+1)); %(states) [remember multiple shooting]

opts = struct;
opts.ipopt.max_iter = 2000;
opts.ipopt.print_level = 0; %0,3
opts.print_time = 0;
opts.ipopt.acceptable_tol = 1e-8;
opts.ipopt.acceptable_obj_change_tol = 1e-6;
args = struct;
% MHE Simulation
%------------------------------------------
U0 = zeros(N_MHE,2); % two control inputs for each robot
X0 = zeros(N_MHE+1,3); % initialization of the states decision variables
k=1;
% Get the measurements window and put it as parameters in p
args.p = [y_measurements(k:k+N_MHE,:)',u_cl(k:k+N_MHE-1,:)'];
% initial value of the optimization variables
args.x0 = [reshape(X0',3*(N_MHE+1),1);reshape(U0',2*N_MHE,1)];
sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx,...
    'lbg', args.lbg, 'ubg', args.ubg,'p',args.p);
U_sol = reshape(full(sol.x(3*(N_MHE+1)+1:end))',2,N_MHE)'; % get controls only from the solution
X_sol = reshape(full(sol.x(1:3*(N_MHE+1)))',3,N_MHE+1)'; % get solution TRAJECTORY

figure(1)
subplot(311)
plot(t,X_sol(:,1),'g','linewidth',1.5); hold on
legend('Ground Truth','Measurement','MHE')
subplot(312)
plot(t,X_sol(:,2),'g','linewidth',1.5); hold on
subplot(313)
plot(t,X_sol(:,3),'g','linewidth',1.5); hold on
• Estimation results

[Figure: estimated states compared against the ground truth and the measurements.]
% MHE Simulation loop starts here
%------------------------------------------
X_estimate = []; % X_estimate contains the MHE estimate of the states
U_estimate = []; % U_estimate contains the MHE estimate of the controls
U0 = zeros(N_MHE,2); % two control inputs for each robot
X0 = zeros(N_MHE+1,3); % initialization of the states decision variables
% Start MHE
mheiter = 0;

[Figure: estimation results for window sizes N_MHE = 6 and N_MHE = 8.]
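The sliding-window loop started above can be summarized as follows in Python; solve_window stands in for the NLP solve and is purely illustrative:

```python
def mhe_loop(ys, us, N_mhe, solve_window):
    # Slide the estimation window over the recorded data: one solve per time
    # step k, keeping only the latest state estimate from each window.
    # solve_window(y_win, u_win) is a stand-in for the CasADi/IPOPT solve and
    # returns the estimated state trajectory over the window.
    estimates = []
    for k in range(N_mhe, len(ys)):
        y_win = ys[k - N_mhe:k + 1]
        u_win = us[k - N_mhe:k]
        estimates.append(solve_window(y_win, u_win)[-1])
    return estimates

# With an identity "solver" that just echoes the measurement window, the loop
# returns the last measurement of each window:
est = mhe_loop(list(range(10)), list(range(9)), N_mhe=4,
               solve_window=lambda y, u: y)
print(est)  # [4, 5, 6, 7, 8, 9]
```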
• Estimation results

Observable? Yes, but ONLY when the linear speed v(·) is nonzero.
Agenda
Part 0
Background and Motivation Examples
▪ Background
▪ Motivation Examples.
Part I
Model Predictive Control (MPC)
▪ What is (MPC)?
▪ Mathematical Formulation of MPC.
▪ About MPC and Related Issues.
MPC Implementation to Mobile Robots control
▪ Considered System and Control Problem.
▪ OCP and NLP
▪ Single Shooting Implementation using CasADi.
▪ Multiple Shooting Implementation using CasADi.
▪ Adding obstacle (path constraints) + implementation
Part II
MHE and implementation to state estimation
▪ Mathematical formulation of MHE
▪ Implementation to a state estimation problem in mobile robots
Conclusions
▪ Concluding remarks about MPC and MHE.
▪ What is NEXT?
• Concluding remarks about MPC and MHE
• Terminologies you are familiar with now
Optimization based Solutions for Control & Estimation Waterloo, January, 2019 M. W. Mehrez 75
• Concluding remarks about MPC and MHE
• MPC and MHE are optimization-based methods for control and state estimation.
• Both methods can be applied to nonlinear MIMO systems.
• Their mathematical formulations are similar (i.e. both are OCPs).
• Physical constraints can be easily incorporated in the related OCPs.
• To solve a given OCP numerically, we need to transform it into a nonlinear programming problem (NLP).
• Single shooting and multiple shooting are methods to express an OCP as an NLP.
• Implementation of MPC and MHE can be fairly straightforward using off-the-shelf software packages, e.g. CasADi.
• What is NEXT?
• The given code can be adapted, in a general sense, to any similar MIMO system.
• Implementation can be accelerated using the code-generation feature in CasADi; see the manual for more details.
• For real-time implementation, you may also consider the ACADO Toolkit (currently not as actively supported as CasADi); its successor is known as acados (https://github.com/acados).
• Combining MHE with MPC: a very good exercise! ☺
• Distributed Model Predictive Control (DMPC).
• …
• Publications
• Mohamed W. Mehrez, Optimization Based Solutions for Control and State Estimation in Non-holonomic Mobile Robots: Stability, Distributed Control, and Relative Localization, PhD Thesis, Memorial University of Newfoundland.
• MPC Stability (Single-Robot Control)
• Karl Worthmann, Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Jürgen Pannek, Interaction of Open and Closed Loop Control in MPC, Automatica, vol. 82, pp. 243-250, 2017.
• Mohamed W. Mehrez, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Timm Faulwasser, Predictive Path Following of Mobile Robots without Terminal Stabilizing Constraints, in Proceedings of the IFAC 2017 World Congress, Toulouse, France, 2017.
• Mohamed W. Mehrez, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Jürgen Pannek, Experimental Speedup and Stability Validation for Multi-Step MPC, in Proceedings of the IFAC 2017 World Congress, Toulouse, France, 2017.
• Karl Worthmann, Mohamed W. Mehrez, Mario Zanon, George K. I. Mann, Raymond G. Gosine, Moritz Diehl, Model Predictive Control of Nonholonomic Mobile Robots without Stabilizing Constraints and Costs, IEEE Transactions on Control Systems Technology, vol. 24, no. 4, pp. 1394-1406, 2016.
• Karl Worthmann, Mohamed W. Mehrez, Mario Zanon, George K. I. Mann, Raymond G. Gosine, Moritz Diehl, Regulation of Differential Drive Robots Using Continuous Time MPC Without Stabilizing Constraints or Costs, in Proceedings of the 5th IFAC Conference on Nonlinear Model Predictive Control (NMPC'15), Sevilla, Spain, pp. 129-135, 2015.
• Publications
• MPC and MHE Implementation Studies
• Multi-Robot Control
• Mohamed W. Mehrez, Tobias Sprodowski, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Juliana K. Sagawa, Jürgen Pannek, Occupancy Grid based Distributed MPC for Mobile Robots, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, 2017.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Formation Stabilization of Nonholonomic Robots Using Nonlinear Model Predictive Control, IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-6, Toronto, ON, Canada, 2014.
• Tobias Sprodowski, Mohamed W. Mehrez, Karl Worthmann, George K. I. Mann, Raymond G. Gosine, Juliana K. Sagawa, Jürgen Pannek, Differential Communication with Distributed MPC Based on Occupancy Grid, Information Sciences, vol. 453, 2018.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, An Optimization Based Approach for Relative Localization and Relative Tracking Control in Multi-Robot Systems, Journal of Intelligent and Robotic Systems, vol. 85, no. 2, pp. 385-408, 2017.
• Single-Robot Control and MHE
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Nonlinear Moving Horizon State Estimation for Multi-Robot Relative Localization, IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-5, Toronto, ON, Canada, 2014.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Comparison of Stabilizing NMPC Designs for Wheeled Mobile Robots: An Experimental Study, Moratuwa Engineering Research Conference, pp. 130-135, Moratuwa, Sri Lanka, 2015.
• Mohamed W. Mehrez, George K. I. Mann, Raymond G. Gosine, Stabilizing NMPC of Wheeled Mobile Robots Using Open-Source Real-Time Software, 16th International Conference on Advanced Robotics (ICAR), pp. 1-6, Uruguay, 2013.
Thank you! [email protected]