Flight Control Backstepping

ISBN 91-7219-995-4
ISSN 0280-7971
LiU-TEK-LIC-2001:12

Printed by UniTryck, Linköping, Sweden 2001
To Eva
Abstract
Aircraft flight control design is traditionally based on linear control theory, due
to the existing wealth of tools for linear design and analysis. However, in order
to achieve tactical advantages, modern fighter aircraft strive towards performing
maneuvers outside the region where the dynamics of flight are linear, and the need
for nonlinear tools arises.
In this thesis we investigate backstepping as a new framework for nonlinear flight
control design. Backstepping is a recently developed design tool for constructing
globally stabilizing control laws for a certain class of nonlinear dynamic systems.
Flight control laws for two different control objectives are designed. First, general
purpose maneuvering is considered, where the angle of attack, the sideslip angle,
and the roll rate are the controlled variables. Second, automatic control of the
flight path angle is considered.
The key idea of the backstepping designs is to benefit from the naturally stabilizing aerodynamic forces acting on the aircraft. The resulting state feedback control
laws thereby rely on less knowledge of these forces compared to control laws based
on feedback linearization, which today is the prevailing nonlinear design technique
within aircraft flight control.
The backstepping control laws are shown to be inverse optimal with respect to
meaningful cost functionals. This gives the controllers certain gain margins, which
imply that stability is preserved for a certain amount of control surface saturation.
Also, the problem of handling a model error appearing at the input of a nonlinear dynamic system is treated by considering the model error as an unknown,
additive disturbance. Two schemes, based on adaptive backstepping and nonlinear
observer design, are proposed for estimating and adapting to such a disturbance.
These are used to deal with model errors in the description of the aerodynamic
moments acting on the aircraft.
The designed control laws are evaluated using realistic aircraft simulation models and the results are highly encouraging.
Acknowledgments
First of all, I want to thank Professor Lennart Ljung for drafting me to the Automatic Control group in Linköping, and hereby giving me the opportunity to perform
research within a most professional, ambitious, and inspiring group of people. I
also want to thank my supervisors Professor Torkel Glad and Karin Ståhl Gunnarsson for their guidance and expertise within nonlinear control theory and aircraft
control, respectively.
Besides these key persons in particular and the Automatic Control group in general, a few people deserve an explicit Thank you!: Fredrik Tjärnström, Mikael
Norrlöf, and Jacob Roll proofread the thesis and provided valuable comments, significantly increasing the quality of the result. Anders Helmersson shared his practical flight experience and commented on the computer simulation results. Ingegerd
Skoglund and Mikael Rönnqvist at the Department of Mathematics suggested the
numerical schemes which were used for control allocation in the implementation of
the controllers.
And now to something completely different: A big thank you to family and friends,
and not least to my dearest, who have endured this recent period during which I
have lived in a social vacuum, and who have supported me through thick and thin!
This work was sponsored by the graduate school ECSEL.
Ola Härkegård
Linköping, March 2001
Contents

1 Introduction
1.1 Introductory Example: Sideslip Regulation
1.2 Outline of the Thesis
1.3 Contributions

2 Aircraft Primer
2.1 The Impact of Automatic Control
2.2 Control Objectives
2.3 Control Means
2.4 Aircraft Dynamics
2.4.1 Governing physics
2.4.2 Modeling for control
2.5 Current Approaches to Flight Control Design
2.5.1 Gain-scheduling
2.5.2 Dynamic inversion (feedback linearization)
2.5.3 Other nonlinear approaches

3 Backstepping
3.1 Lyapunov Theory
3.2 Lyapunov Based Control Design
3.3 Backstepping
3.3.1 Main result
3.3.2 Which systems can be handled?
3.3.3 Which design choices are there?
3.4 Related Lyapunov Designs
3.4.1 Forwarding
3.4.2 Adaptive, robust, and observer backstepping
3.5 Applications of Backstepping

7.1.3 FDC
7.2 Controller Implementation
7.2.1 Control allocation
7.3 Simulation
7.3.1 Conditions
7.3.2 Controller parameters
7.3.3 Simulation results

Bibliography
Notation

Symbols

R                      the set of real numbers
x1, . . . , xn         scalar state variables
x = (x1 . . . xn)T     state vector
u                      control input
k(x)                   control law
V(x)                   Lyapunov function
x1ref                  reference value of x1
x1des                  desired value of x1
e                      model error bias
ê                      estimate of e

Operators

‖x‖ = √(x1² + . . . + xn²)                  Euclidean norm
V̇ = dV/dt                                   time derivative of V
V′(x1) = dV(x1)/dx1                          derivative w.r.t. a scalar argument
Vx(x) = (∂V(x)/∂x1 . . . ∂V(x)/∂xn)          gradient of V

Acronyms

clf     control Lyapunov function
GAS     globally asymptotically stable
NDI     nonlinear dynamic inversion
TVC     thrust vectored control
GAM     Generic Aerodata Model
HIRM    High Incidence Research Model

Aircraft nomenclature

State variables

Symbol                  Unit     Definition
α                       rad      angle of attack
β                       rad      sideslip angle
γ                       rad      flight path angle
Φ = (φ, θ, ψ)T                   aircraft orientation (Euler angles)
φ                       rad      roll angle
θ                       rad      pitch angle
ψ                       rad      yaw angle
ω = (p, q, r)T                   body-axes angular velocity
ωs = (ps, qs, rs)T               stability-axes angular velocity
p                       rad/s    roll rate
q                       rad/s    pitch rate
r                       rad/s    yaw rate
p = (pN, pE, h)T                 aircraft position
pN                      m        position north
pE                      m        position east
h                       m        altitude
V = (u, v, w)T                   body-axes velocity vector
u                       m/s      longitudinal velocity
v                       m/s      lateral velocity
w                       m/s      normal velocity
VT                      m/s      total velocity
M                                Mach number
nz                      g        normal acceleration, load factor

Control surfaces

Symbol    Unit    Definition
δ         rad     collective representation of all control surfaces
δes       rad     symmetrical elevon deflection
δed       rad     differential elevon deflection
δcs       rad     symmetrical canard deflection
δcd       rad     differential canard deflection
δr        rad     rudder deflection

Aircraft data

Symbol    Unit      Definition
m         kg        aircraft mass
I         kg m²     inertia matrix,

                        [ Ix    0   −Ixz ]
                    I = [ 0     Iy   0   ]
                        [ −Ixz  0    Iz  ]

S         m²        wing area
b         m         wing span
c̄         m         mean aerodynamic chord
ZTP       m         thrust point offset in the body-axes z-direction

Atmosphere

Symbol    Unit      Definition
ρ         kg/m³     air density
q̄         N/m²      dynamic pressure

Forces and moments

Symbol    Unit    Definition
g         m/s²    acceleration due to gravity
FT        N       engine thrust force
D         N       drag force
L         N       lift force
Y         N       side force
L̄         Nm      rolling moment
M         Nm      pitching moment
N         Nm      yawing moment

Coordinate systems

Symbol          Definition
(xb, yb, zb)    body-axes coordinate system
(xs, ys, zs)    stability-axes coordinate system
(xw, yw, zw)    wind-axes coordinate system
1 Introduction
During the past 15 years, several new design methods for control of nonlinear dynamic systems have been invented. One of these methods is known as backstepping.
Backstepping allows a designer to methodically construct stabilizing control laws
for a certain class of nonlinear systems.
Parallel to this development within nonlinear control theory, one finds a desire
within aircraft technology to push the performance limits of fighter aircraft towards
supermaneuverability. By utilizing high angles of attack, tactical advantages
can be achieved, as demonstrated by Herbst [32] and Well et al. [77], who consider
aircraft reversal maneuvers for performance evaluation. The aim is for the aircraft
to return to a point of departure at the same speed and altitude but with an
opposite heading at minimum time. It is shown that using high angles of attack
during the turn, the aircraft is able to maneuver in less air space and complete the
maneuver in shorter time. These types of maneuvers are performed outside the
region where the dynamics of flight are linear. Thus, linear control design tools,
traditionally used for flight control design, are no longer sufficient.
In this thesis, we investigate how backstepping can be used for flight control
design to achieve stability over the entire flight envelope. Control laws for a number
of flight control objectives are derived and their properties are investigated, to see
what the possible benefits of using backstepping are. Let us begin by illustrating
the key ideas of the design methodology with a concrete example.
Figure 1.1 The sideslip, β, is in general desired to be kept zero. The aerodynamic side force, Y(β), naturally acts to fulfill this objective.
1.1 Introductory Example: Sideslip Regulation

β̇ = −r + (1/(mVT)) Y(β)    (1.1a)
ṙ = c N(β, δr)             (1.1b)
[Figure 1.2: pilot inputs enter the backstepping control laws k(x) (Ch. 5), which output udes; control allocation (Ch. 7) converts udes into the control input u of the aircraft (Ch. 2); a bias estimation block (Ch. 6) supplies ê; the aircraft state x is fed back.]

Figure 1.2 Controller configuration.
The important implication of these discoveries is that since the nonlinear terms
have a stabilizing effect, they need not be cancelled by the controller, and hence complete
knowledge of them is not necessary. This idea of recognizing useful nonlinearities
and benefiting from them rather than cancelling them is the main idea of this
thesis.
In Chapter 5 we use backstepping to design state feedback laws for various flight
control objectives. A common feature of the derived control laws is that they are
linear in the variables used for feedback, considering the angular acceleration of
the aircraft as the input. In our example this corresponds to designing a control
law

u = k(β, r) = −k1 β − k2 r

where

u = ṙ = c N(β, δr)    (1.2)

To realize such a control law in terms of the true control input, δr, requires
perfect knowledge of the yawing moment, N. Since typically this is not the case,
we can remodel (1.2) as

u = c N̂(β, δr) + e

where N̂ is our model of the yawing moment and e is the bias to the actual yawing
moment. An intuitively appealing idea is to compute an estimate, ê, of the bias,
e, on-line and realize the modified control law

c N̂(β, δr) = k(β, r) − ê    (1.3)
which, if the estimate is perfect, cancels the effect of the bias and achieves u =
k(β, r) as desired. In Chapter 6, two such estimation schemes are proposed and
shown to give closed loop stability.
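To illustrate the idea before the formal treatment in Chapter 6, the following sketch simulates the sideslip example with a constant input bias. The linear side force model, the gains, and the reduced-order disturbance observer used to produce ê are all hypothetical choices made here for illustration; they are not the schemes of Chapter 6.

```python
# Sketch of bias adaptation in the sideslip example; all numbers are
# hypothetical and chosen only for illustration.
k1, k2 = 4.0, 4.0       # feedback gains of k(beta, r)
gamma = 10.0            # observer gain
e_true = 0.8            # unknown constant input bias
dt, T = 0.001, 8.0

def Y_over_mVT(beta):
    # naturally stabilizing side force contribution (assumed linear here)
    return -2.0 * beta

beta, r, xi = 0.2, 0.0, 0.0   # xi: observer state; e_hat = xi + gamma * r
for _ in range(int(T / dt)):
    e_hat = xi + gamma * r
    u_cmd = -k1 * beta - k2 * r - e_hat    # modified control law (1.3)
    beta_dot = -r + Y_over_mVT(beta)
    r_dot = u_cmd + e_true                 # the bias enters at the input
    xi += dt * (-gamma * (u_cmd + e_hat))  # reduced-order disturbance observer
    beta += dt * beta_dot
    r += dt * r_dot

print(abs(beta) < 0.05, abs(e_hat - e_true) < 1e-3)  # True True
```

With this observer, ê obeys ê̇ = γ(e − ê), so a constant bias is estimated exactly in the limit and the sideslip is regulated to zero.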
The remaining problem is that of control allocation, i.e., how to determine δr
such that (1.3) is satisfied. This will be discussed in Chapter 7 where two numerical
solvers are proposed.
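To give a flavor of what control allocation involves, the sketch below solves Bu = vdes for a hypothetical linear control effectiveness matrix B using the minimum-norm pseudo-inverse solution; the matrix entries are invented, and the numerical solvers of Chapter 7 are more elaborate.

```python
import numpy as np

# Hypothetical linear control effectiveness: rows are (rolling, pitching,
# yawing) moment derivatives; columns are (d_es, d_ed, d_cs, d_cd, d_r).
B = np.array([[ 0.0, -4.2, 0.0,  1.1,  0.6],
              [-6.0,  0.0, 2.3,  0.0,  0.0],
              [ 0.0,  0.8, 0.0, -0.4, -3.5]])

v_des = np.array([0.5, -1.0, 0.2])   # commanded moment vector

# Minimum-norm solution of B u = v_des via the Moore-Penrose pseudo-inverse
u = np.linalg.pinv(B) @ v_des
print(np.allclose(B @ u, v_des))  # True (the demand is attainable)
```

With more surfaces than moments the system is underdetermined, and the pseudo-inverse picks the deflection vector of smallest norm; practical allocators must additionally respect position and rate limits.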
The overall control configuration, along with chapter references, is shown in
Figure 1.2.
1.2 Outline of the Thesis
Chapter 2 Contains basic facts about modern fighter aircraft such as their dynamics, the control inputs available and what the control objectives are.
Chapter 3 Introduces the backstepping methodology for control design for a class
of nonlinear systems. Discusses design choices and contains examples showing
how some nonlinearities can actually be useful and how to benefit from them.
Chapter 4 A short chapter on inverse optimal control, i.e., how one can decide
whether a given control law is optimal w.r.t. a meaningful cost functional.
Chapter 5 Contains the main contributions of the thesis. Backstepping is used
to design state feedback control laws for various flight control objectives.
Chapter 6 Proposes two different methods for adapting to model errors appearing
at the input and investigates closed loop stability in each case.
Chapter 7 Proposes numerical schemes for solving the control allocation problem.
Also presents computer simulations of the designed aircraft control laws in
action.
Chapter 8 Concludes the thesis by evaluating the ability to handle important
issues like stability, tuning, robustness, input saturation, and disturbance
attenuation within the proposed backstepping framework.
1.3 Contributions
- The discovery in Section 5.1.3 that the angle of attack control law used by
  Snell et al. [71], based on feedback linearization and time-scale separation,
  can also be constructed using backstepping and is in fact optimal w.r.t. a
  meaningful cost functional.
- The adaptive schemes in Chapter 6 for handling model errors appearing at
  the control input.
- The computer simulations in Section 7.3 showing the proposed control laws
  to work satisfactorily using realistic aircraft simulation models.
Parts of this thesis have been published previously. The backstepping designs
in Chapter 5 originate from
Ola Härkegård and S. Torkel Glad. A backstepping design for flight
path angle control. In Proceedings of the 39th Conference on Decision
and Control, pages 3570-3575, Sydney, Australia, December 2000.
and
Ola Härkegård and S. Torkel Glad. Flight control design using backstepping. Technical Report LiTH-ISY-R-2323, Department of Electrical
Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden, December 2000. To be presented at the 5th IFAC Symposium Nonlinear
Control Systems (NOLCOS'01), St. Petersburg, Russia.
The results in Chapter 6 on how to adapt to a model error at the input can be
found in
Ola Härkegård and S. Torkel Glad. Control of systems with input nonlinearities and uncertainties: an adaptive approach. Technical Report
LiTH-ISY-R-2302, Department of Electrical Engineering, Linköpings
universitet, SE-581 83 Linköping, Sweden, October 2000. Submitted to
the European Control Conference, ECC 2001, Porto, Portugal.
2 Aircraft Primer

2.1 The Impact of Automatic Control
The interplay between automatic control and manned flight goes back a long time,
see Stevens and Lewis [74] for a historic overview. On many occasions their paths
have crossed, and progress in one field has provided stimuli to the other.
During the early years of flight technology, the pilot was in direct control of the
aircraft control surfaces. These were mechanically connected to the pilot's manual
inputs. In modern aircraft, the pilot inputs are instead fed to a control system.
Based on the pilot inputs and available sensor information, the control system
computes the control surface deflections to be produced. This information is sent
through electrical wires to the actuators located at the control surfaces, which in
turn realize the desired deflections. Figure 2.1 shows the situation at hand. This
is known as fly-by-wire technology. What are the benefits of this approach?
Stability Due to the spectacular 1993 mishap, when a fighter aircraft crashed
over central Stockholm during a flight show, it is a widely known fact, even to people
outside the automatic control community, that many modern aircraft are designed
to be unstable in certain modes. A small disturbance would cause the uncontrolled
aircraft to quickly diverge from its original position. Such a design is motivated by
the fact that it enables faster maneuvering and enhanced performance. However,
it also emphasizes the need for reliable control systems, stabilizing the aircraft for
the pilot.
Varying dynamics The aircraft dynamics vary with altitude and speed. Thus
without a control system assisting him, the pilot himself would have to adjust his
joystick inputs to get the same aircraft response at different altitudes and speeds.
By hiding the true aircraft dynamics inside a control loop, as in Figure 2.1, the
varying dynamics can be made transparent to the pilot by designing the control
system to make the closed loop dynamics independent of altitude and speed.
Aircraft response Using a control system, the aircraft response to the manual
inputs can be selected to fulfill the requirements of the pilots. By adjusting the
control law, the aircraft behavior can be tuned much more easily than having to
adjust the aircraft design itself to achieve, e.g., a certain rise time or overshoot.
Interpretation of pilot inputs By passing the pilot inputs to a control system,
the meaning of the inputs can be altered. In one mode, moving the joystick sideways
may control the roll rate; in another mode, it may control the roll angle. This
paves the way for various autopilot capabilities, e.g., altitude hold, relieving the
pilot workload.

Figure 2.2 Pitch control objectives.
2.2 Control Objectives
Given the possibilities using a flight control system, what does the pilot want to
control?
In a classical dogfight, whose importance is still recognized, maneuverability is
the prime objective. Here, the normal acceleration, nz , or the pitch rate, q, make
up suitable controlled variables in the longitudinal direction, see Figure 2.2. nz ,
also known as the load factor, is the acceleration experienced by the pilot, directed
along the negative body z-axis.

Figure 2.3 Lateral control objectives and coordinate systems definitions.
In the figure, α and β are both positive.
namely the body x-axis, xb. Considering a 90 degree roll, we realize that the initial
angle of attack will turn into pure sideslip at the end of the roll and vice versa.
At high angles of attack this is not tolerable, since the largest acceptable amount
of sideslip during a roll is in the order of 3-5 degrees [14]. To remove this effect,
we could instead roll about the wind x-axis, xw. Then α and β remain unchanged
during a roll. This is known as a velocity-vector roll. With the usual assumption
that a roll is performed at zero sideslip, this is equivalent to a stability-axis roll,
performed about the stability x-axis, xs . In this case, the angular velocity ps is the
variable to control.
There also exist situations where other control objectives are of interest. Autopilot functions like altitude, heading, and speed hold are vital to assist the pilot
during long distance flight. For firing on-board weapons, the orientation of the
aircraft is crucial. To benefit from the drag reduction that can be accomplished
during close formation flight, the position of the wingman relative to the leader
must be controlled precisely, preferably automatically to relieve the workload of
the wingman pilot [26]. Also, to automate landing the aircraft it may be of interest
to control its descent through the flight path angle, γ, see Figure 2.7.
2.3 Control Means
To accomplish the control tasks of the previous section, the aircraft must be
equipped with actuators providing ways to control the different motions. Figure
2.4 shows a modern fighter aircraft configuration.
Pitch control, i.e., control of the longitudinal motion, is provided by deflecting
the elevons and the canard wings symmetrically (right and left control surfaces
deflect in the same direction). Conversely, roll control is provided by deflecting the
elevons, and possibly also the canard wings, differentially (right and left control
surfaces deflect in the opposite directions). Therefore, it is natural to introduce
the control inputs
δes = (δe,left + δe,right)/2
δed = (δe,left − δe,right)/2
δcs = (δc,left + δc,right)/2
δcd = (δc,left − δc,right)/2
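In code, this change of variables and its inverse are straightforward; the small helper below is an illustration added here, not part of the thesis.

```python
def mix(de_left, de_right, dc_left, dc_right):
    """Left/right elevon and canard deflections -> symmetric/differential."""
    des = (de_left + de_right) / 2
    ded = (de_left - de_right) / 2
    dcs = (dc_left + dc_right) / 2
    dcd = (dc_left - dc_right) / 2
    return des, ded, dcs, dcd

def unmix(des, ded, dcs, dcd):
    """Inverse map: symmetric/differential commands -> individual surfaces."""
    return des + ded, des - ded, dcs + dcd, dcs - dcd

# The transformation is invertible: unmix recovers the original deflections
print(unmix(*mix(0.1, 0.3, -0.2, 0.0)))
```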
Yaw control, i.e., control of the rotation about the body z-axis, is provided by the
rudder. The leading-edge flaps can be used, e.g., to minimize the drag.
Recently, the interest in high angle of attack flight has led to the invention of
thrust vectored control (TVC). Deflectable vanes are then mounted at the engine
exhaust so that the engine thrust can be directed to produce a force in some desired
direction. The NASA High Angle-of-Attack Research Vehicle (HARV) [30] uses this
technology.
When convenient, we will let δ represent all the above control surface deflections.
Finally, the aircraft speed, or rather the engine thrust force, is governed by the
engine throttle setting.
2.4 Aircraft Dynamics
We now turn to the aircraft dynamics, and present the governing equations that tie
the variables to be controlled to the control inputs available to us. The presentation,
based on the books by Stevens and Lewis [74] and Boiffier [7], is focused on arriving
at a model suitable for control design, consisting of a set of first order differential
equations. For a deeper insight into the mechanics and aerodynamics behind the
model, the reader is referred to the aforementioned books or, e.g., Etkin and Reid
[16], McLean [54], or Nelson [57].
2.4.1 Governing physics
We will use the assumptions that Earth is flat and fixed, and that the aircraft body
is rigid (as opposed to flexible). This yields a six-degree-of-freedom model (rotation
and translation in 3 dimensions). The dynamics can be described by a state space
model with 12 states consisting of
p = pN p E
nate system;
V= u v
system;
T
T
T
= , the Euler angles describing the orientation of the aircraft
relative to the Earth-fixed coordinate system;
T
= p q r , the angular velocity of the aircraft expressed in the bodyaxes coordinate system.
The task of controlling the aircraft position p is typically left entirely to the pilot,
formation flight being a possible exception. The only coupling from p to the other
state variables is through the altitude dependence of the dynamic pressure
(2.4). Since the altitude varies slower than the rest of the variables, it can be
regarded as a constant during the control design. Therefore the position dynamics
will be left out here.
The equations governing the remaining three state vectors can be compactly
written as

F = m(V̇ + ω × V)        force equation      (2.1)
M = I ω̇ + ω × Iω        moment equation     (2.2)
Φ̇ = E(Φ) ω              attitude equation   (2.3)

where

       [ 1   sin φ tan θ    cos φ tan θ   ]
E(Φ) = [ 0   cos φ         −sin φ         ]
       [ 0   sin φ / cos θ  cos φ / cos θ ]

m is the aircraft mass and I is the aircraft inertia matrix. The force and moment
equations follow from applying Newton's second law, and the attitude equation
stems from the relation between the Earth-fixed and the body-fixed coordinate
systems.
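For illustration, the attitude equation (2.3) is easy to evaluate numerically. The helper below, a sketch that uses no data from the thesis, builds E(Φ) and maps a body-axes angular velocity to Euler angle rates.

```python
import numpy as np

def E(phi, theta):
    """Attitude kinematics matrix of (2.3): Phi_dot = E(Phi) omega.
    Singular at theta = +-90 deg, where cos(theta) = 0."""
    sp, cp = np.sin(phi), np.cos(phi)
    tt, ct = np.tan(theta), np.cos(theta)
    return np.array([[1.0, sp * tt, cp * tt],
                     [0.0, cp, -sp],
                     [0.0, sp / ct, cp / ct]])

# Pure roll rate with wings level: only the roll angle phi changes
phi_dot, theta_dot, psi_dot = E(0.0, 0.0) @ np.array([0.5, 0.0, 0.0])
print(phi_dot, theta_dot, psi_dot)  # 0.5 0.0 0.0
```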
F and M represent the sum of the forces and moments, respectively, acting on
the aircraft at the center of gravity. These forces and moments spring from three
major sources, namely

- gravity,
- engine thrust, and
- aerodynamic efforts.

Introducing

F = FG + FE + FA
M = ME + MA

we will now investigate each of these components and express them in the body-fixed coordinate system.
Gravity

Gravity only gives a force contribution since it acts at the aircraft center of gravity. The gravitational force, mg, directed along the normal of the Earth plane, is
considered constant over the altitude envelope. This yields

        [ −sin θ      ]
FG = mg [ sin φ cos θ ]
        [ cos φ cos θ ]
Engine thrust

The thrust force produced by the engine is denoted by FT. Assuming the engine
to be positioned so that the thrust acts parallel to the aircraft body x-axis (not
using TVC) yields

     [ FT ]
FE = [ 0  ]
     [ 0  ]

Also assuming the engine to be mounted so that the thrust point lies in the body-axes xz-plane, offset from the center of gravity by ZTP in the body-axes z-direction,
results in

     [ 0      ]
ME = [ FT ZTP ]
     [ 0      ]
Aerodynamic efforts
The aerodynamic forces and moments, or the aerodynamic efforts for short, are
due to the interaction between the aircraft body and the incoming airflow. The
size and direction of the aerodynamic efforts are determined by the amount of air
diverted by the aircraft in different directions (see [3] for an enlightening discussion
on various explanations to aerodynamic lift). The amount of air diverted by the
aircraft is mainly decided by the speed and density of the incoming airflow and by
the orientation of the aircraft, including its control surfaces, relative to the airflow.
The aerodynamic forces and moments can be written as

F = q̄ S CF,    M = q̄ S l CM

where the dynamic pressure

q̄ = (1/2) ρ(h) VT²    (2.4)

captures the density dependence and most of the speed dependence, S is the aircraft
wing area, and l refers to the length of the lever arm connected to the moment.
CF and CM are known as aerodynamic coefficients. These are difficult to model
analytically but can be estimated empirically through wind tunnel experiments
and actual flight tests. Typically, each coefficient is written as the sum of several
components, each capturing the dependence of one or more of the variables above.
These components can be represented in several ways. A common approach is to
store them in look-up tables and use interpolation to compute intermediate values.
In other approaches one tries to fit the data to some parameterized function.
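A minimal sketch of the look-up table approach, using a hypothetical coarse CL(α, δes) table (invented numbers) and hand-rolled bilinear interpolation:

```python
import numpy as np

# Hypothetical tabulated lift coefficient CL(alpha, d_es) on a coarse grid
alpha_grid = np.radians([-10.0, 0.0, 10.0, 20.0, 30.0])
des_grid = np.radians([-30.0, 0.0, 30.0])
CL_table = np.array([[-0.6, -0.5, -0.4],
                     [-0.1,  0.0,  0.1],
                     [ 0.4,  0.5,  0.6],
                     [ 0.9,  1.0,  1.1],
                     [ 1.1,  1.2,  1.3]])   # rows: alpha, cols: d_es

def bilinear(x, y, xs, ys, table):
    """Bilinear interpolation in a rectangular look-up table."""
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])

CL = bilinear(np.radians(5.0), 0.0, alpha_grid, des_grid, CL_table)
print(round(float(CL), 3))  # 0.25, halfway between the 0 and 10 deg entries
```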
In the body-axes coordinate system, we have the expressions

     [ X ]           X = q̄ S Cx
FA = [ Y ]   where   Y = q̄ S Cy
     [ Z ]           Z = q̄ S Cz

     [ L̄ ]           L̄ = q̄ S b Cl    rolling moment
MA = [ M ]   where   M = q̄ S c̄ Cm    pitching moment
     [ N ]           N = q̄ S b Cn    yawing moment

These are illustrated in Figure 2.5. The aerodynamic forces are also commonly
expressed in the wind-axes coordinate system (related to the body-fixed coordinate
system as indicated in Figure 2.3) where we have that

       [ −D ]           D = q̄ S CD    drag force
FA,w = [ Y  ]   where   Y = q̄ S CY    side force    (2.5)
       [ −L ]           L = q̄ S CL    lift force

where the lift and side force coefficients, CL and CY, mainly depend on α and β,
respectively.
Essentially, only the aerodynamic moments are affected when a control surface
is deflected. This is a key feature without which some nonlinear control design
methods, including backstepping and dynamic inversion, would not be applicable.
Figure 2.6 shows the lift force and pitching moment coefficients, CL and Cm , as
functions of angle of attack and symmetrical elevon deflection. The aerodata comes
from the HIRM model [56].
In Section 2.2, the normal acceleration nz was introduced. We now have the
setup to define nz more precisely and find its relationship to α. We have that

nz = −Z/(mg) = −(1/(mg)) q̄ S Cz(α, β, δ, . . . )

Given an nz command, the above equation can be used to solve for a corresponding
α command.
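Since Cz is typically tabulated rather than invertible in closed form, the α command can be computed numerically. The sketch below uses a hypothetical linear Cz model and invented flight condition data, and solves for α by bisection:

```python
# Hypothetical data for inverting n_z = -qbar * S * Cz(alpha) / (m * g)
qbar, S, m, g = 12000.0, 45.0, 9000.0, 9.81   # N/m^2, m^2, kg, m/s^2

def Cz(alpha):
    # assumed aerodynamic model: Cz ~ -CL_alpha * alpha at moderate alpha
    return -3.5 * alpha

def nz_of_alpha(alpha):
    return -qbar * S * Cz(alpha) / (m * g)

def alpha_cmd(nz_des, lo=-0.1, hi=0.6, tol=1e-8):
    """Bisection solve of nz_of_alpha(alpha) = nz_des (monotone in alpha)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if nz_of_alpha(mid) < nz_des:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

a = alpha_cmd(3.0)   # alpha command for a 3 g pull
print(abs(nz_of_alpha(a) - 3.0) < 1e-4)  # True
```

Bisection is robust here because nz is monotone in α over the bracketed interval; with a real Cz table, the same loop would simply call the table interpolation.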
2.4.2 Modeling for control
We will now collect the equations from the previous section and write the result in a
form suitable for control design, namely as a system of first order scalar differential
equations.

[Figure 2.6: the lift force coefficient CL and the pitching moment coefficient Cm as functions of α (deg) and δes (deg).]
Force equations (body-axes)

u̇ = rv − qw − g sin θ + (1/m)(X + FT)         (2.6a)
v̇ = pw − ru + g sin φ cos θ + (1/m) Y
ẇ = qu − pv + g cos φ cos θ + (1/m) Z

Moment equations (body-axes)

ṗ = (c1 r + c2 p) q + c3 L̄ + c4 N
q̇ = c5 pr − c6 (p² − r²) + c7 (M + FT ZTP)    (2.6b)
ṙ = (c8 p − c2 r) q + c4 L̄ + c9 N             (2.6c)

where

c1 = ((Iy − Iz) Iz − Ixz²)/Δ    c4 = Ixz/Δ           c7 = 1/Iy
c2 = (Ix − Iy + Iz) Ixz/Δ       c5 = (Iz − Ix)/Iy    c8 = (Ix (Ix − Iy) + Ixz²)/Δ
c3 = Iz/Δ                       c6 = Ixz/Iy          c9 = Ix/Δ

and

    [ Ix    0   −Ixz ]
I = [ 0     Iy   0   ],       Δ = Ix Iz − Ixz²
    [ −Ixz  0    Iz  ]
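With these definitions the constants are easy to compute and cross-check numerically; the inertia data below is hypothetical. The check verifies that c3, c4, and c9 are the entries of the inverse of the coupled (p, r) part of the inertia matrix.

```python
import numpy as np

# Hypothetical inertia data (kg m^2) for a fighter-sized aircraft
Ix, Iy, Iz, Ixz = 21000.0, 81000.0, 95000.0, 2500.0

Delta = Ix * Iz - Ixz**2
c1 = ((Iy - Iz) * Iz - Ixz**2) / Delta
c2 = (Ix - Iy + Iz) * Ixz / Delta
c3 = Iz / Delta
c4 = Ixz / Delta
c5 = (Iz - Ix) / Iy
c6 = Ixz / Iy
c7 = 1.0 / Iy
c8 = (Ix * (Ix - Iy) + Ixz**2) / Delta
c9 = Ix / Delta

# Sanity check: inverting the coupled (p, r) block of the inertia matrix
# must reproduce the coefficients multiplying the rolling/yawing moments
Ipr = np.array([[Ix, -Ixz], [-Ixz, Iz]])
print(np.allclose(np.linalg.inv(Ipr), [[c3, c4], [c4, c9]]))  # True
```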
Expressing the force equation in the wind-axes coordinate system instead yields
first order differential equations, (2.7a)-(2.7c), for VT, α, and β, in which the
gravity components g1, g2, and g3, given by (2.8), depend on the aircraft orientation.
See Appendix 2.A for a complete derivation. A pleasant effect of this reformulation
is that the nonlinear aerodynamic forces L and Y mainly depend on α and β,
respectively. This fact will be used for control design using backstepping.
A very common approach to flight control design is to control longitudinal
motions (motions in the body xz-plane) and lateral motions (all other motions)
separately. With no lateral motions, the longitudinal equations of motion become
Longitudinal equations

V̇T = (1/m)(−D + FT cos α − mg sin γ)          (2.9a)
α̇ = q + (1/(mVT))(−L − FT sin α + mg cos γ)   (2.9b)
θ̇ = q                                          (2.9c)
q̇ = (1/Iy)(M + FT ZTP)                         (2.9d)

Here, γ = θ − α is the flight path angle determining the direction of the velocity
vector, as depicted in Figure 2.7.
2.5 Current Approaches to Flight Control Design
In this section we survey some of the proposed design schemes with the emphasis
on nonlinear control designs. Flight control design surveys can also be found in
Magni et al. [53] and Huang and Knowles [35].
2.5.1 Gain-scheduling
+ Using several linear models to describe the aircraft dynamics allows the control designer to utilize all the classical tools for control design and robustness
and disturbance analysis.

+ The methodology has proven to work well in practice. The Swedish fighter
JAS 39 Gripen [63] is a flying proof of its success.

− The outlined divide-and-conquer approach is rather tedious since for each
region, a controller must be designed. The number of regions may be over
50.

− Only the nonlinear system behavior in speed and altitude is considered. Stability is therefore guaranteed only for low angles of attack and low angular
rates.
2.5.2 Dynamic inversion (feedback linearization)
The idea behind gain-scheduling was to provide the pilot with a similar aircraft
response irrespective of the aircraft speed and altitude. This philosophy is even
more pronounced in nonlinear dynamic inversion (NDI), which is the term used
in the aircraft community for what is known as feedback linearization in control
theory. In this thesis, we only deal with feedback linearization through examples
and intuitive explanations. For an introduction to feedback linearization theory,
the reader is referred to, e.g., Slotine and Li [70] or Isidori [36].
Using dynamic inversion, as the name implies, the natural aircraft dynamics are
inverted and replaced by the desired linear ones through the wonders of feedback.
This includes the nonlinear behavior in speed and altitude as well as the nonlinear
effects at high angles of attack and high angular rates.
To make things more concrete, consider the simplified angle of attack dynamics
(cf. (2.9b), (2.9d))

α̇ = q − (1/(mVT)) L(α)     (2.10a)
q̇ = (1/Iy) M(α, δ, q)      (2.10b)

The speed is assumed to vary much slower than α and q so that V̇T ≈ 0 is a good
approximation. Now, introduce

z = q − (1/(mVT)) L(α)

and compute ż:

ż = (1/Iy) M(α, δ, q) − (1/(mVT)) L′(α) z    (2.11)
Figure 2.8 Illustration of a dynamic inversion control law. The inner feedback loop cancels the nonlinear dynamics, making the dashed
box a linear system, which is controlled by the outer feedback
loop.
Introducing

u = (1/Iy) M(α, δ, q) − (1/(mVT)) L′(α) z    (2.12)
Robustness analysis has often been pointed out as the Achilles heel of dynamic
inversion. Dynamic inversion relies on the complete knowledge of the nonlinear
plant dynamics. This includes knowledge of the aerodynamic efforts, which in
practice comes with an uncertainty in the order of 10%. What happens if the true
nonlinearities are not completely cancelled by the controller? [78] contains some
results regarding this issue.
One way of enhancing the robustness is to reduce the control law dependence on
the aerodynamic coefficients. Note that computing δ from (2.12) requires knowledge of the aerodynamic coefficients Cm and CL as well as dCL/dα (recall from
(2.5) that L = q̄SCL). Snell et al. [71] propose a dynamic inversion design which
does not involve dCL/dα, thus making the design more robust. The idea is to use
time-scale separation and design separate controllers for the - and q-subsystems
of (2.10). Inspired by singular perturbation theory [42] and cascaded control design, the system is considered to consist of slow dynamics (2.10a) and fast dynamics
(2.10b). First, the slow dynamics are controlled. Assume the desired slow dynamics
to be
α̇ = −k1 (α − αref),    k1 > 0

Then the angle of attack command αref can be mapped onto a pitch rate command

qref = −k1 (α − αref) + (1/(mVT)) L(α)    (2.14)

We now turn to the fast dynamics and determine a control law rendering the fast
dynamics

q̇ = −k2 (q − qref),    k2 > 0    (2.15)
[Figure: The cascaded control structure. The outer controller (2.14) converts α_ref into q_ref, which the inner controller (2.15) realizes through the fast dynamics (2.10b), driving the slow dynamics (2.10a).]
2.5.3 Other Approaches
In addition to dynamic inversion, many other nonlinear approaches have been applied to flight control design. Garrard et al. [23] formulate the angle of attack control problem as a linear quadratic optimization problem. As their aircraft model
is nonlinear, in order to capture the behavior at high angles of attack, the arising
Hamilton-Jacobi-Bellman equation is difficult to solve exactly. The authors settle
for a truncated solution to the HJB equation.
Mudge and Patton [55] consider the problem of pitch pointing, where the objective is to command the pitch angle while the flight path angle remains
unchanged. Eigenstructure assignment is used to achieve the desired decoupling
and sliding mode behavior is added for enhanced robustness.
Other approaches deal with the problem of tracking a reference signal, whose
future values are also known. Levine [50] shows that an aircraft is differentially flat
if the outputs are properly chosen. This is used to design an autopilot for making
the aircraft follow a given trajectory.
Hauser and Jadbabaie [31] design receding horizon control laws for unmanned
combat aerial vehicles performing aggressive maneuvers. Over a receding horizon,
the aircraft trajectory following properties are optimized on-line. The control laws
are implemented and evaluated using the ducted fan at Caltech.
Appendix 2.A Force Equation Conversion
This appendix contains the details of the conversion of the aircraft force equation
from the body-axes to the wind-axes coordinate system. The result is a standard
one, but the derivation, which establishes the relationship between the forces used
in the two different representations, is rarely found in textbooks on flight control.
The body-axes force equations are

u̇ = rv − qw − g sinθ + (1/m)(X + F_T)
v̇ = pw − ru + g sinφ cosθ + (1/m) Y
ẇ = qu − pv + g cosφ cosθ + (1/m) Z
The relation between the variables used in the two coordinate systems is given by

V_T = √(u² + v² + w²)      u = V_T cosα cosβ
α = arctan(w/u)            v = V_T sinβ
β = arcsin(v/V_T)          w = V_T sinα cosβ
Differentiating these relations and inserting the body-axes force equations, we get

V̇_T = (u u̇ + v v̇ + w ẇ)/V_T
    = (1/V_T)[ u(rv − qw − g sinθ + (1/m)(X + F_T))
               + v(pw − ru + g sinφ cosθ + (1/m)Y)
               + w(qu − pv + g cosφ cosθ + (1/m)Z) ]
    = g(−cosα cosβ sinθ + sinβ sinφ cosθ + sinα cosβ cosφ cosθ)   [≜ g₁]
      + (1/m)(F_T cosα cosβ + X cosα cosβ + Y sinβ + Z sinα cosβ)   [aerodynamic part = −D]
    = (1/m)(−D + F_T cosα cosβ + m g₁)

For the angle of attack,

α̇ = (u ẇ − w u̇)/(u² + w²) = (ẇ cosα − u̇ sinα)/(V_T cosβ)
  = (1/(V_T cosβ))[ (qu − pv + g cosφ cosθ + (1/m)Z) cosα
                    − (rv − qw − g sinθ + (1/m)(X + F_T)) sinα ]
  = q − tanβ (p cosα + r sinα) + (1/(mV_T cosβ))(−L − F_T sinα + m g₂)

where g₂ ≜ g(cosα cosφ cosθ + sinα sinθ). For the sideslip angle,

β̇ = (v̇ − V̇_T sinβ)/(V_T cosβ)
  = (1/V_T)[ (pw − ru + g sinφ cosθ + (1/m)Y) cosβ
             − (rv − qw − g sinθ + (1/m)(X + F_T)) cosα sinβ
             − (qu − pv + g cosφ cosθ + (1/m)Z) sinα sinβ ]
  = p sinα − r cosα + (1/(mV_T))(Y − F_T cosα sinβ + m g₃)

where g₃ ≜ g(cosβ sinφ cosθ + cosα sinβ sinθ − sinα sinβ cosφ cosθ), and where Y in the last expression denotes the wind-axes side force.
The relationship between the aerodynamic forces expressed in the two coordinate
systems is given by

D = −X cosα cosβ − Y sinβ − Z sinα cosβ
L = X sinα − Z cosα
Y = −X cosα sinβ + Y cosβ − Z sinα sinβ

where D, L, and Y on the left-hand sides are the wind-axes forces. These equations are relevant since the available aerodata often relates to the body-axes system, while it is the wind-axes forces that appear in the control laws.
3
Backstepping
Lyapunov theory has for a long time been an important tool in linear as well as
nonlinear control. However, its use within nonlinear control has been hampered by
the difficulty of finding a Lyapunov function for a given system. If one can be found,
the system is known to be stable, but the task of finding such a function has often
been left to the imagination and experience of the designer.
The invention of constructive tools for nonlinear control design based on Lyapunov theory, like backstepping and forwarding, has therefore been received with
open arms by the control community. Here, a control law stabilizing the system is
derived along with a Lyapunov function to prove the stability.
In this chapter, backstepping is presented with the focus on designing state
feedback laws. Sections 3.1 and 3.2 contain mathematical preliminaries. Section
3.3 is the core of the chapter where the main backstepping result is presented along
with a discussion on the class of systems to which it applies and which choices
are left to the designer. In Section 3.4, some related design methods based on
Lyapunov theory are outlined, and in Section 3.5 we survey applications of backstepping.
3.1 Lyapunov Theory

Consider the autonomous system

ẋ = f(x)   (3.1)
Definition 3.2
A scalar function V(x) is said to be

- positive definite if V(0) = 0 and V(x) > 0, x ≠ 0
- radially unbounded if V(x) → ∞ as ‖x‖ → ∞

Theorem 3.1
Let x = 0 be an equilibrium point of (3.1). Let V(x) be a scalar, continuously
differentiable function of the state x such that

- V(x) is positive definite
- V(x) is radially unbounded
- V̇(x) is negative definite

Then x = 0 is a globally asymptotically stable (GAS) equilibrium.
For showing stability when V̇(x) is only negative semidefinite, the following
corollary due to LaSalle is useful.

Corollary 3.1
Let x = 0 be the only equilibrium point of (3.1). Let V(x) be a scalar, continuously
differentiable function of the state x such that

- V(x) is positive definite
- V(x) is radially unbounded
- V̇(x) is negative semidefinite
- the only solution of (3.1) satisfying V̇(x(t)) ≡ 0 is x(t) ≡ 0

Then x = 0 is GAS.
Note that these results are non-constructive, in the sense that they give no clue
about how to find a V satisfying the conditions necessary to conclude GAS. We
will refer to a function V (x) satisfying the itemized conditions in Theorem 3.1 as
a Lyapunov function for the system.
3.2 Control Lyapunov Functions

Consider the system

ẋ = f(x, u)   (3.2)
Given the stability results from the previous section, it would be nice if we could
find a control law
u = k(x)
so that the desired state of the closed loop system
x = f (x, k(x))
becomes a globally asymptotically stable equilibrium point. For simplicity, we will
assume the origin to be the goal state. This can always be achieved through a
suitable change of coordinates.
A straightforward approach to finding k(x) is to construct a positive definite,
radially unbounded function V(x) and then choose k(x) such that

V̇ = V_x(x) f(x, k(x)) = −W(x)   (3.3)

where W(x) is positive definite. Then closed loop stability follows from Theorem
3.1. For this approach to succeed, V and W must have been carefully selected, or
(3.3) will not be solvable. This motivates the following definition:
Definition 3.3 (Control Lyapunov function (clf))
A smooth, positive definite, radially unbounded function V(x) is called a control
Lyapunov function (clf) for (3.2) if for all x ≠ 0,

V̇ = V_x(x) f(x, u) < 0 for some u
Given a clf for the system, we can thus find a globally stabilizing control law. In
fact, the existence of a globally stabilizing control law is equivalent to the existence
of a clf. This means that for each globally stabilizing control law, a corresponding
clf can be found, and vice versa. This is known as Artstein's theorem [4].
If a clf is known, a particular choice of k(x) is given by Sontag's formula [72],
reproduced in (3.5). For a system which is affine in the control input,

ẋ = f(x) + g(x)u   (3.4)

we can select

u = k(x) = −(a + √(a² + b⁴))/b,  b ≠ 0  (and k(x) = 0 when b = 0)   (3.5)

where

a = V_x(x) f(x)
b = V_x(x) g(x)

This yields

V̇ = V_x(x)(f(x) + g(x)u) = a + b·k(x) = −√(a² + b⁴)   (3.6)
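Sontag's formula (3.5) is mechanical enough to sketch in a few lines. The example system x' = x + u with the clf V = x²/2 is an illustrative assumption, not taken from the thesis.

```python
import math

def sontag(a, b):
    # Sontag's formula (3.5) for a scalar input: a = Vx f, b = Vx g
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a * a + b ** 4)) / b

def vdot(x):
    # closed-loop V' for the illustrative system x' = x + u with V = x^2/2,
    # so that a = Vx f = x^2 and b = Vx g = x
    a, b = x * x, x
    return a + b * sontag(a, b)   # equals -sqrt(a^2 + b^4), cf. (3.6)
```

By (3.6), vdot(x) should be strictly negative for every x ≠ 0.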
3.3 Backstepping
The control designs of the previous section rely on the knowledge of a control
Lyapunov function for the system. But how do we find such a function?
Backstepping answers this question in a recursive manner for a certain class
of nonlinear systems which show a lower triangular structure. We will first state
the main result and then deal with user related issues like which systems can be
handled using backstepping, which design choices there are and how they affect the
resulting control law.
3.3.1 Main result
This result is standard today and can be found in, e.g, Sepulchre et al. [66] or
Krstic et al. [46].
Proposition 3.1 (Backstepping)
Consider the system

ẋ = f(x, ξ)   (3.7a)
ξ̇ = u   (3.7b)

Assume that a virtual control law

ξ = ξdes(x)   (3.8)

is known such that x = 0 is a GAS equilibrium of (3.7a) when ξ = ξdes(x). Let W(x) be a clf for (3.7a)
such that

Ẇ|ξ=ξdes = W_x(x) f(x, ξdes(x)) < 0,  x ≠ 0

Then, a clf for the augmented system (3.7) is given by

V(x, ξ) = W(x) + (1/2)(ξ − ξdes(x))²   (3.9)

and a globally stabilizing control law is given by

u = (∂ξdes(x)/∂x) f(x, ξ) − W_x(x) (f(x, ξ) − f(x, ξdes(x)))/(ξ − ξdes(x)) − (ξ − ξdes(x))   (3.10)
Before presenting the proof, it is worth pointing out that (3.10) is neither the
only nor necessarily the best globally stabilizing control law for (3.7). The value
of the proposition is that it shows the existence of at least one globally stabilizing
control law for this type of augmented systems.
Proof  We will conduct the proof in a constructive manner to clarify which design choices
can be made during the control law construction.
The key idea is to use the fact that we know how to stabilize the subsystem (3.7a) if we
were able to control ξ directly, namely by using (3.8). Therefore, introduce the residual

ξ̄ = ξ − ξdes(x)

By forcing ξ̄ to zero, ξ will tend to the desired value ξdes and the entire system will be
stabilized.
In the new variables, the system dynamics become

ẋ = f(x, ξdes(x)) + ψ(x, ξ̄) ξ̄   (3.11a)
ξ̄̇ = u − (∂ξdes(x)/∂x) f(x, ξ̄ + ξdes(x))   (3.11b)

where

ψ(x, ξ̄) = (f(x, ξ̄ + ξdes(x)) − f(x, ξdes(x)))/ξ̄

In (3.11a) we have separated the desired dynamics from the dynamics due to ξ̄ ≠ 0.
To find a clf for the augmented system it is natural to take the clf for the subsystem, W,
and add a term penalizing the residual ξ̄. Let us select

V(x, ξ̄) = W(x) + (1/2) ξ̄²

and find a globally stabilizing control law by making V̇ negative definite.

V̇ = W_x(x)[ f(x, ξdes(x)) + ψ(x, ξ̄) ξ̄ ] + ξ̄ [ u − (∂ξdes(x)/∂x) f(x, ξ̄ + ξdes(x)) ]
  = W_x(x) f(x, ξdes(x)) + ξ̄ [ W_x(x) ψ(x, ξ̄) + u − (∂ξdes(x)/∂x) f(x, ξ̄ + ξdes(x)) ]   (3.12)

The first term is negative definite according to the assumptions. The second part, and
thereby V̇, can be made negative definite by choosing

u = −W_x(x) ψ(x, ξ̄) + (∂ξdes(x)/∂x) f(x, ξ̄ + ξdes(x)) − ξ̄

This yields

V̇ = W_x(x) f(x, ξdes(x)) − ξ̄²

which proves the sought global asymptotic stability.
The key idea in backstepping is to let certain states act as virtual controls
of others. The same idea can be found in cascaded control design and singular
perturbation theory [42].
The origin of backstepping is not quite clear due to its simultaneous and often
implicit appearance in several papers in the late 1980s. However, it is fair to say
that backstepping has been brought into the spotlight to a great extent thanks to
the work of Professor Petar V. Kokotovic and his coworkers.
The 1991 Bode lecture at the IEEE CDC, held by Kokotovic [43], was devoted
to the evolving subject and in 1992, Kanellakopoulos et al. [39] presented a mathematical toolkit for designing control laws for various nonlinear systems using
backstepping. During the following years, the textbooks by Krstic et al. [46], Freeman and Kokotovic [21], and Sepulchre et al. [65] were published. The progress of
backstepping and other nonlinear control tools during the 1990s were surveyed by
Kokotovic [41] at the 1999 IFAC World Congress in Beijing.
Let us now deal with some issues related to practical control design using backstepping.
3.3.2 Input nonlinearities
An immediate extension of Proposition 3.1 is to allow for an input mapping to be
present in (3.7b):

ξ̇ = g(x, ξ, v) ≜ u

u can now be selected according to (3.10), whereafter v can be found by solving

g(x, ξ, v) = u   (3.13)

provided that g is invertible w.r.t. v. Then (3.10) becomes a virtual control law, which along with the clf (3.9) can be
used to find a globally stabilizing control law for (3.7) augmented by (3.13).
Now, either v is yet another state variable, in which case the backstepping
procedure is repeated once again, or v is indeed the control input, in which case
we are done.
Thus, by recursively applying backstepping, globally stabilizing control laws
can be constructed for systems of the following lower triangular form, known as
pure-feedback form systems:
ẋ = f(x, ξ₁)
ξ̇₁ = g₁(x, ξ₁, ξ₂)
⋮
ξ̇ᵢ = gᵢ(x, ξ₁, …, ξᵢ, ξᵢ₊₁)   (3.14)
⋮
ξ̇ₘ = gₘ(x, ξ₁, …, ξₘ, u)

For the design to succeed, a globally stabilizing virtual control law, ξ₁ = ξ₁des(x),
along with a clf, must be known for the x subsystem. In addition, gᵢ, i = 1, …, m − 1,
must be invertible w.r.t. ξᵢ₊₁, and gₘ must be invertible w.r.t. u.
Systems for which the new variables enter in an affine way are known as
strict-feedback form systems:

ẋ = a(x) + b(x)ξ₁
ξ̇₁ = a₁(x, ξ₁) + b₁(x, ξ₁)ξ₂
⋮
ξ̇ᵢ = aᵢ(x, ξ₁, …, ξᵢ) + bᵢ(x, ξ₁, …, ξᵢ)ξᵢ₊₁
⋮
ξ̇ₘ = aₘ(x, ξ₁, …, ξₘ) + bₘ(x, ξ₁, …, ξₘ)u

Strict-feedback form systems are nice to deal with and often used for deriving
results related to backstepping. Firstly, the invertibility condition imposed above
is satisfied given that bᵢ ≠ 0. Secondly, if (3.7a) is affine in ξ, the control law (3.10)
reduces to

u = (∂ξdes(x)/∂x)(a(x) + b(x)ξ) − W_x(x) b(x) − (ξ − ξdes(x))

Dynamic backstepping
Dynamic backstepping
Even for certain systems which do not fit into a lower triangular feedback form,
there exist backstepping designs. Fontaine and Kokotovic [18] consider a two-dimensional system where both states are affected by the control input:

ẋ₁ = φ(x₁) + x₂ + ψ(u)
ẋ₂ = u

Their approach is to first design a globally stabilizing virtual control law for the
x₁-subsystem, considering ξ = x₂ + ψ(u) as the input. Then backstepping is used
to convert this virtual control law into a realizable one in terms of u. Their design
results in a dynamic control law, and hence the term dynamic backstepping is used.
3.3.3 Design choices

The derivation of the backstepping control law (3.10) leaves a lot of room for
variations. Let us now explore some of these.
Dealing with nonlinearities
A trademark of backstepping is that it allows us to benefit from useful nonlinearities which naturally stabilize the system. This can be done by choosing the virtual
control laws properly. The following example demonstrates this fundamental difference to feedback linearization.
Example 3.1
Consider the system

ẋ = −x³ + x + u   (3.15)

[Figure 3.1: The dynamics of the uncontrolled system ẋ = −x³ + x. The linear term acts destabilizing around the origin.]

A clf is given by

W = (1/2) x²   (3.16)
Choosing the control law u = −x yields

Ẇ = x(−x³ + x + u) = −x⁴

proving the origin to be GAS according to Theorem 3.1.
Applying feedback linearization would have rendered the control law

u = x³ − kx,  k > 1   (3.17)

Obviously, this control law does not recognize the beneficial cubic nonlinearity
but counteracts it, thus wasting control effort. Also, the feedback linearizing
design is dangerous from a robustness perspective: what if the true system
dynamics are ẋ = −0.9x³ + x + u and (3.17) is applied?
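The robustness point above can be checked numerically. The following is a minimal sketch: both control laws are applied to the perturbed plant ẋ = −0.9x³ + x + u (a 10% error in the cubic term); the gain k = 2, the initial state, and the Euler integration are illustrative assumptions.

```python
def simulate(control, x0=5.0, dt=1e-4, T=2.0, blow_up=1e6):
    # perturbed plant: x' = -0.9 x^3 + x + u (10% error in the cubic term)
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-0.9 * x ** 3 + x + control(x))
        if abs(x) > blow_up:
            return float('inf')   # finite escape detected
    return abs(x)

backstep = simulate(lambda x: -x)             # keeps the stabilizing -x^3 term
feedlin = simulate(lambda x: x ** 3 - 2 * x)  # (3.17) with k = 2: cancels -x^3
```

With u = −x the residual −0.9x³ still works for the controller and the state decays; the feedback linearizing law leaves a destabilizing 0.1x³ term that dominates for large initial states.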
Weighting the clf
When constructing the combined clf (3.9), we can choose any weighted sum of the
two terms,

V = cW + (1/2)(ξ − ξdes)²,  c > 0

In our designs, we will use the weight c to cancel certain terms in Equation (3.12).
A technical hint is to put the weight on W since it yields nicer expressions.
Non-quadratic clf

Although quadratic clf:s are frequently used in backstepping, they do not always
make up the best choice, as the following example demonstrates.

Example 3.2
Consider the system from Example 3.1, augmented by an integrator:

ẋ₁ = −x₁³ + x₁ + x₂
ẋ₂ = u

In Example 3.1, the virtual control law x₂des(x₁) = −x₁ was shown to stabilize the
x₁-subsystem using the clf W(x₁) = (1/2)x₁².
Introducing

x̄₂ = x₂ − x₂des(x₁) = x₂ + x₁

we can rewrite the system as

ẋ₁ = −x₁³ + x̄₂
x̄̇₂ = u − x₁³ + x̄₂
Following the proof of Proposition 3.1, we would add a quadratic term to W to
penalize the deviation from the suggested virtual control law:

V(x₁, x₂) = W(x₁) + (1/2)(x₂ − x₂des(x₁))² = (1/2)x₁² + (1/2)x̄₂²

Instead, let us keep W free as a design variable,

V(x₁, x₂) = W(x₁) + (1/2)x̄₂²
and compute its time derivative:

V̇ = W′(x₁)(−x₁³ + x̄₂) + x̄₂(u − x₁³ + x̄₂)
  = −W′(x₁)x₁³ + x̄₂(W′(x₁) + u − x₁³ + x̄₂)

We now use our extended design freedom and select a W so that the indefinite
mixed terms cancel each other. This is satisfied by

W′(x₁) = x₁³,  i.e.,  W(x₁) = (1/4)x₁⁴

This leaves V̇ = −x₁⁶ + x̄₂(u + x̄₂), and selecting u = −3x̄₂ = −3(x₁ + x₂) yields
V̇ = −x₁⁶ − 2x̄₂², proving the origin to be GAS.
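The non-quadratic clf V = (1/4)x₁⁴ + (1/2)(x₁+x₂)², with the control law u = −3(x₁+x₂) that Chapter 4 attributes to this example, can be spot-checked numerically; this is a minimal verification sketch, not part of the original derivation.

```python
def vdot(x1, x2):
    # V' along trajectories of x1' = -x1^3 + x1 + x2, x2' = u = -3(x1+x2),
    # with V = x1^4/4 + (x1+x2)^2/2
    u = -3.0 * (x1 + x2)
    dx1 = -x1 ** 3 + x1 + x2
    dx2 = u
    return (x1 ** 3 + x1 + x2) * dx1 + (x1 + x2) * dx2

def predicted(x1, x2):
    # the closed-form expression V' = -x1^6 - 2(x1+x2)^2
    return -x1 ** 6 - 2.0 * (x1 + x2) ** 2
```

At any sample point, vdot and predicted should agree and be negative away from the origin.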
More generally, consider ẋ = g(x, ξ), ξ̇ = u, and let the residual be penalized
through a function ρ:

V(x, ξ) = W(x) + (1/2)(ρ(ξ) − ρ(ξdes(x)))²

The counterpart of (3.10) then becomes

u = (1/ρ′(ξ)) [ (∂ρ(ξdes(x))/∂x) g(x, ξ)
    − W_x(x) (g(x, ξ) − g(x, ξdes(x)))/(ρ(ξ) − ρ(ξdes(x)))
    − (ρ(ξ) − ρ(ξdes(x))) ]   (3.18)

which requires ρ′(ξ) > 0.
Example 3.3
Consider the system

ẋ₁ = −x₁³ + x₂⁵ + x₂
ẋ₂ = u

Here ρ(x₂) = x₂⁵ + x₂ can be treated as the virtual control. Selecting
ρdes(x₁) = −2x₁³ and W(x₁) = (1/2)x₁², the control law (3.18) becomes

u = (1/(5x₂⁴ + 1)) [ −6x₁²(−x₁³ + x₂⁵ + x₂) − x₁ − (2x₁³ + x₂⁵ + x₂) ]
In [2], this technique was used for speed control of a switched reluctance motor,
where it was convenient to formulate the virtual control law in terms of the squared
current i².
Optimal backstepping
In linear control, one often seeks control laws that are optimal in some sense, due
to their ability to suppress external disturbances and to function despite model
errors, as in the case of H∞ and linear quadratic control [79].
It is therefore natural that efforts have been made to extend these designs to
nonlinear control. The difficulty lies in the Hamilton-Jacobi-Bellman equation that
needs to be solved in order to find the control law.
A way to circumvent this problem is to require the desired optimality to hold only
locally around the origin, where the system can be approximated by its linearization. In a global perspective, one settles for optimality according to some cost
functional that the designer cannot specify precisely. This is known as inverse
optimality, which is the topic of Chapter 4.
Contributions along this line can be found for strict-feedback form systems.
Ezal et al. [17] use backstepping to construct controllers which are locally
H∞-optimal. Löfberg [51] designs backstepping controllers which are locally optimal
according to a linear quadratic performance index.
One advantage of using an optimality based approach is that the designer then
specifies an optimality criterion rather than virtual control laws and the clf:s themselves. This enhances the resemblance with linear control.
3.4 Related Designs

3.4.1 Forwarding
The backstepping philosophy applies to systems of the form (3.7). Another class
of nonlinear systems for which one can also construct globally stabilizing control
laws are those that can be written
ẋ = f(x, u)   (3.19a)
ζ̇ = g(x, u)   (3.19b)
A clf and a globally stabilizing control law for the x-subsystem (3.19a) are assumed
to be known. The question is how to augment this control law to also stabilize the
integrator state ζ in (3.19b). This problem, which can be seen as a dual to the one
in backstepping, can be solved using so-called forwarding [67].
By combining feedback (3.7) and feedforward (3.19) systems, interlaced systems
can be constructed. Using backstepping in combination with forwarding, such
systems can also be systematically stabilized [66].
3.4.2 Adaptive, robust, and observer backstepping
So far we have only considered the case where all the state variables are available
for feedback and where the model is completely known. For the non-ideal cases
where this is not true, there are other flavors of backstepping to resort to.
For systems with parametric uncertainties, there exists adaptive backstepping
[46]. Here, a parameter estimate update law is designed such that closed loop
stability is guaranteed when the parameter estimate is used by the controller. In
Section 6.3 we will see how this technique can be used to estimate and cancel
unknown additive disturbances on the control input.
Robust backstepping [21] designs exist for systems with imperfect model information. Here, the idea is to select a control law such that a Lyapunov function
decreases for all systems comprised by the given model uncertainty.
In cases where not all the state variables can be measured, the need for observers
arises. The separation principle valid for linear systems does not hold for nonlinear
systems in general. Therefore, care must be taken when designing the feedback law
based on the state estimates. This is the topic of observer backstepping [39, 46].
3.5 Applications of Backstepping
Although backstepping theory has a rather short history, numerous practical applications can be found in the literature. This fact indicates that the need for
a nonlinear design methodology handling a number of practical problems, as discussed in the previous section, has existed for a long time. We now survey some
publications regarding applied backstepping. This survey is by no means complete, but is intended to show the broad spectrum of engineering disciplines in
which backstepping has been used.
Backstepping designs can be found for a wide variety of electrical motors [2, 10,
11, 33, 34]. Turbocharged diesel engines are considered in [20, 37] while jet engines
are the subject of [45].
In [25, 75], backstepping is used for automatic ship positioning. In [75], the
controller is made locally H -optimal based on results in [17].
Robotics is another field where backstepping designs can be found. Tracking
control is considered in [38] and [9], where the latter is a survey of approaches valid
for various assumptions regarding the knowledge of the model.
There also exist a few papers, combining flight control and backstepping. [68]
treats formation flight control of unmanned aerial vehicles. [69] and [73] use backstepping to design flight control laws which are adaptive to changes in the aerodynamic forces and moments due to, e.g., actuator failures. Also, the Lyapunov
functions used contain a term penalizing the integral of the tracking error, enhancing the robustness.
4
Inverse Optimal Control
This chapter is preparatory for the upcoming control designs. The tools we develop
in this chapter will be used to show that the control laws derived in Chapter 5 each
minimize a certain cost functional. This route of first deriving a control law and
then determining which cost it minimizes, and thus in which sense it is optimal, is
known as inverse optimal control.
The material in this chapter will be presented in a rather intuitive manner.
A mathematically strict treatment of the subject can be found in, e.g., Sepulchre
et al. [65]. In Section 4.1 the general infinite horizon optimal control problem
is introduced. In Section 4.2, systems which are affine in the control input are
considered, and some standard inverse results are derived for cost functionals which
are quadratic in the input. Finally, the well known gain margin result of optimal
control is shown in Section 4.3.
4.1 Optimal Control

A general idea within control design is to select a control law which is optimal in
some sense. Given a dynamic system

ẋ = f(x, u)

where x ∈ Rⁿ is the state vector and u ∈ Rᵐ is the control input, we seek the
control law u(x) that minimizes the cost functional

J = ∫₀^∞ L(x, u) dt

Let V(x) denote the optimal cost-to-go from the state x. Clearly, when the optimal
control law is used, J and V coincide. This motivates the Hamilton-Jacobi-Bellman
(HJB) equation

0 = min_u [ L(x, u) + V_x(x) f(x, u) ]   (4.1)

for finding the optimal control law u along with a Lyapunov function V(x) for the
controlled system.
4.2 Inverse Optimal Control for Input-Affine Systems

It is well known that, in general, it is not feasible to solve the HJB equation (4.1).
We therefore restrict our discussion to dynamic systems of the form

ẋ = f(x) + g(x)u   (4.2)

For these systems, the HJB equation is greatly simplified if L is chosen quadratic
in u according to

L(x, u) = q(x) + uᵀR(x)u

where q(x) > 0, x ≠ 0, and R(x) is a symmetric matrix, positive definite for all x.
Inserting this into (4.1) yields

0 = min_u [ q(x) + uᵀR(x)u + V_x(x)(f(x) + g(x)u) ]   (4.3)

The equation is solved in two steps. First we find the minimizing u, and then we
solve for equality to zero. The minimization can be done by completion of squares:

q + uᵀRu + V_x f + V_x g u
= q + V_x f + (u + (1/2)R⁻¹(V_x g)ᵀ)ᵀ R (u + (1/2)R⁻¹(V_x g)ᵀ) − (1/4) V_x g R⁻¹ (V_x g)ᵀ
The control input u only appears in the square, positive definite term. The
minimum therefore occurs when this term is set to zero, which is achieved by

u = k(x) = −(1/2) R⁻¹(x) (V_x(x) g(x))ᵀ   (4.4)

What remains is to insert this control law into (4.3). This gives us

0 = q + V_x f − (1/4) V_x g R⁻¹ (V_x g)ᵀ   (4.5)

Equations (4.4) and (4.5) provide the connection between the cost functional,
given by q(x) and R(x), and the optimal control strategy, in terms of k(x) and
V(x). As for the general problem in the previous section, it is in general not
feasible to solve for k(x) and V(x) given q(x) and R(x) of the designer's choice.
However, the reverse task is simpler. Given a control law k(x) and a clf
V(x) (corresponding to a Lyapunov function for the controlled system), the q(x) and
R(x) determining the cost functional that is minimized can be found by solving

k(x) = −(1/2) R⁻¹(x) (V_x(x) g(x))ᵀ   (4.6)
q(x) = −V_x(x) f(x) − (1/2) V_x(x) g(x) k(x)   (4.7)

In the scalar input case, (4.6) can be solved explicitly for

R(x) = −V_x(x) g(x) / (2k(x))   (4.8)
Example 4.1
In Example 3.2,

u = k(x) = −3(x₁ + x₂)

was shown to be globally stabilizing using the clf

V(x) = (1/4)x₁⁴ + (1/2)(x₁ + x₂)²

The system is of the form (4.2) with

f(x) = (−x₁³ + x₁ + x₂, 0)ᵀ,  g(x) = (0, 1)ᵀ

and V satisfies

V_x(x) = (x₁³ + x₁ + x₂,  x₁ + x₂)

Using (4.8),

R = −(x₁ + x₂) / (2 · (−3)(x₁ + x₂)) = 1/6

and (4.7) gives

q(x) = −(x₁³ + x₁ + x₂)(−x₁³ + x₁ + x₂) + (3/2)(x₁ + x₂)²
     = x₁⁶ + (1/2)(x₁ + x₂)²

Thus, the suggested control law minimizes the cost functional

J = ∫₀^∞ ( x₁⁶ + (1/2)(x₁ + x₂)² + (1/6)u² ) dt
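The inverse optimality computation above can be spot-checked against the HJB identity (4.5); the following verification sketch is an addition for illustration, not part of the original example.

```python
def hjb_residual(x1, x2):
    # checks (4.5): 0 = q + Vx f - (1/4) Vx g R^{-1} (Vx g)^T for Example 4.1
    vx1 = x1 ** 3 + x1 + x2            # dV/dx1, V = x1^4/4 + (x1+x2)^2/2
    vx2 = x1 + x2                      # dV/dx2
    vxf = vx1 * (-x1 ** 3 + x1 + x2)   # f(x) = (-x1^3 + x1 + x2, 0)
    vxg = vx2                          # g(x) = (0, 1)
    q = x1 ** 6 + 0.5 * (x1 + x2) ** 2
    return q + vxf - 0.25 * vxg * 6.0 * vxg   # R = 1/6, so R^{-1} = 6
```

The residual should vanish (up to floating-point error) at every state, confirming that q, R, V, and k solve (4.4)–(4.5) together.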
4.3 Gain Margins

Besides being an intuitively appealing approach, optimal control inherently yields control laws that possess certain robustness properties [24]. One important property regards the gain margin.
Assume that the prescribed optimal control input (4.4) cannot be produced
exactly, but that the actual control input is

ū = κ(x) u   (4.9)

where κ(x) > 0 is a scalar; see Figure 4.1. Actuator saturation, for example, can
be modeled as a gain reduction, κ(x) < 1. Are optimal controllers robust to such
changes in the gain? The control law (4.9) is globally stabilizing provided that

V̇ = V_x f + V_x g ū = V_x f + κ V_x g u

is negative definite. From the assumptions and (4.7) we know that

−q = V_x f + (1/2) V_x g u

is negative definite. Combining these two equations yields

V̇ = −q + (κ − 1/2) V_x g u
  = −q(x) − (κ(x) − 1/2) · (1/2) V_x(x) g(x) R⁻¹(x) (V_x(x) g(x))ᵀ

where the last factor is positive (semi-)definite. Apparently, V̇ is negative definite
(at least) for all κ(x) ≥ 1/2. Thus, all state feedback control laws which solve an
optimal control problem of the type considered in Section 4.2 have a gain margin
of [1/2, ∞].
Note that the actual tolerable gain reduction may be more than 50%. In Example 3.2, any control law u = −k̄x̄₂ where k̄ > 1 makes V̇ negative definite and
hence is globally stabilizing. The selected control law u = −3x̄₂ thus has a gain
margin of ]1/3, ∞].

[Figure 4.1: The optimal control law u = k(x), subject to an input gain κ(x), driving the plant ẋ = f(x) + g(x)u. The control law remains globally stabilizing for any gain perturbation κ(x) ≥ 1/2.]
5 Backstepping Designs for Flight Control
In the previous chapters we have introduced the aircraft dynamics, the backstepping design procedure and inverse optimality tools for evaluating state feedback
laws. We now have the toolbox we need to live up to the title of the thesis, and do
flight control design using backstepping. This chapter is the core of the thesis.
We will design flight control laws for two different objectives: for general maneuvering (control of α, β, and p_s) and for flight path angle control (control of γ).
The two presentations in Sections 5.1 and 5.2 follow the same outline. First, the
relevant dynamics from Chapter 2 are reproduced and the assumptions needed for
making the control design feasible are stated. The flight control problem of interest is then viewed as a more general nonlinear control problem and backstepping is
used to derive globally stabilizing control laws, whose properties are investigated.
We finally return to the flight control context and investigate which practical consequences applying the derived control laws leads to.
5.1 General Maneuvering
5.1.1 Aircraft dynamics

The relevant dynamics are

α̇ = q − tanβ (p cosα + r sinα) + (1/(mV_T cosβ))(−L − F_T sinα + m g₂)   (5.1a)
β̇ = p sinα − r cosα + (1/(mV_T))(Y − F_T cosα sinβ + m g₃)   (5.1b)
M = I ω̇ + ω × Iω   (5.1c)

where

g₂ = g(cosα cosθ cosφ + sinα sinθ)
g₃ = g(cosβ cosθ sinφ + sinβ cosα sinθ − sinα sinβ cosθ cosφ)
For backstepping to be applicable, the system dynamics must comply with the
pure-feedback form (3.14). We therefore need the following assumption:
A1. Control surface deflections only produce aerodynamic moments, and not
forces. Also neglecting the dependence on the angular rates, the aerodynamic force coefficients can be written as a lift force coefficient C_L(α) and a
side force coefficient C_Y(β), whose characteristics are shown in Figure 5.1.
[Figure 5.1: Typical lift force coefficient vs. angle of attack and side force coefficient vs. sideslip characteristics.]
The stability-axes angular velocity components are given by

ω_s = (p_s, q_s, r_s)ᵀ = R_sb ω,   R_sb = [ cosα  0  sinα ;  0  1  0 ;  −sinα  0  cosα ]   (5.2)

Note that the transformation matrix R_sb satisfies R_sb⁻¹ = R_sbᵀ.
Inspecting the dynamics (5.1), we see that it is more convenient to work with
ω_s rather than ω. Introducing

u = (u₁, u₂, u₃)ᵀ = ω̇_s   (5.3a)

the dynamics become

α̇ = q_s − p_s tanβ + (1/(mV_T cosβ))(−L(α) − F_T sinα + m g₂)   (5.3b)
q̇_s = u₂   (5.3c)
β̇ = −r_s + (1/(mV_T))(Y(β) − F_T cosα sinβ + m g₃)   (5.3d)
ṙ_s = u₃   (5.3e)

These are the dynamics we will use for the control design.
The relationship between u, which will be considered as the control input during
the control design, and the actual control input, δ, can be found by combining (5.1c)
and (5.2). Under assumption A3 above, α, and thereby also R_sb, are considered
constant while realizing u₁ and u₃, which relate to lateral control. This yields

u = ω̇_s = R_sb ω̇ = R_sb I⁻¹ (M − ω × Iω)   (5.4)

Solving for the net moment, M, which depends on the control surface deflections,
δ, we get

M(δ) = I R_sbᵀ u + ω × Iω   (5.5)

We will postpone the discussion on how to practically solve for δ given u until
Chapter 7.
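The moment mapping (5.4)–(5.5) amounts to a short linear-algebra computation. The sketch below verifies the round trip u → M(δ) → u; the diagonal inertia values are illustrative placeholders, not aircraft data.

```python
import math

def rsb(alpha):
    # stability-axes rotation matrix (5.2)
    c, s = math.cos(alpha), math.sin(alpha)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

I_diag = [8000.0, 25000.0, 30000.0]   # illustrative diagonal inertia, not aircraft data

def moment_from_u(u, omega, alpha):
    # (5.5): M = I Rsb^T u + omega x (I omega)
    Iw = [I_diag[i] * omega[i] for i in range(3)]
    IRt_u = [I_diag[i] * v for i, v in enumerate(matvec(transpose(rsb(alpha)), u))]
    gyro = cross(omega, Iw)
    return [IRt_u[i] + gyro[i] for i in range(3)]

def u_from_moment(M, omega, alpha):
    # (5.4): u = Rsb I^{-1} (M - omega x (I omega))
    Iw = [I_diag[i] * omega[i] for i in range(3)]
    gyro = cross(omega, Iw)
    wdot = [(M[i] - gyro[i]) / I_diag[i] for i in range(3)]
    return matvec(rsb(alpha), wdot)
```

Because R_sb is orthogonal, inserting (5.5) into (5.4) recovers u exactly.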
5.1.2 Backstepping design

The angle of attack and sideslip dynamics in (5.3) both fit the general nonlinear
system description

ẏ = w₂ + f(w₁, y)
ẇ₂ = u   (5.6)

where w₁ collects variables assumed to vary slowly compared to y and w₂. The
correspondences are collected in Table 5.1.

General system (5.6)   α dynamics (5.3b)–(5.3c)   β dynamics (5.3d)–(5.3e)
y                      α                          β
y_ref                  α_ref                      0
w₂                     q_s                        −r_s
u                      u₂                         −u₃
w₁                     p_s, β, V_T, h, θ, φ       α, V_T, h, θ, φ
f(w₁, y)               f_α(α, y_α)                f_β(β, y_β)

Table 5.1  The relationships between the general nonlinear system (5.6)
and the angle of attack and sideslip dynamics in (5.3).
where

f_α(α, y_α) = −p_s tanβ + (1/(mV_T cosβ))(−L(α) − F_T sinα + m g₂)   (5.7)
f_β(β, y_β) = (1/(mV_T))(Y(β) − F_T cosα sinβ + m g₃)   (5.8)

To cast the control objective as a stabilization problem, translate the variables
according to

x₁ = y − y_ref,  x₂ = w₂ + f(w₁, y_ref)   (5.9)

which turns (5.6) into

ẋ₁ = φ(x₁) + x₂   (5.10a)
ẋ₂ = u   (5.10b)

where φ(x₁) = f(w₁, y) − f(w₁, y_ref).
The relationships between (5.10) and the original aircraft dynamics (5.3) can be
found in Table 5.2. If the aircraft is assumed to be built symmetrically such that
the side force, Y, is zero for zero sideslip, we get

f_β(0, y_β) = (1/V_T) g cosθ sinφ   (5.11)

using (2.8).
56
General system
(5.10)
x1
x2
u
(x1 )
dynamics
(5.3b)(5.3c)
ref
qs + f (ref , y )
u2
f (, y ) f (ref , y )
dynamics
(5.3d)(5.3e)
rs + f (0, y )
u3
f (, y ) f (0, y )
Table 5.2 The relationships between the translated general nonlinear system (5.10) and the angle of attack and sideslip dynamics in
(5.3).
For the x₁-subsystem of (5.10) we use a virtual control law of the form

x₂des(x₁) = −ζ(x₁)   (5.12)

The idea is to let the necessary demands on ζ be revealed by the design below.
Temporarily pick the clf

W(x₁) = (1/2) x₁²

which requires

x₁(φ(x₁) − ζ(x₁)) < 0,  x₁ ≠ 0   (5.13)

for Ẇ to be negative definite along the desired dynamics.
Introducing x̄₂ = x₂ − x₂des(x₁) = x₂ + ζ(x₁), the system (5.10) can be rewritten as

ẋ₁ = φ(x₁) − ζ(x₁) + x̄₂
x̄̇₂ = u + ζ′(x₁)(φ(x₁) − ζ(x₁) + x̄₂)   (5.14)

Remembering the benefits of using a non-quadratic clf (cf. Example 3.2), we select

V(x₁, x̄₂) = F(x₁) + (1/2) x̄₂²

as the clf for the total system (5.14), where F is any valid clf for the x₁-subsystem.
Specifically this means that

Ḟ(x₁)|x₂=x₂des = F′(x₁)(φ(x₁) − ζ(x₁)) = −U(x₁)   (5.15)
where U (x1 ) is positive definite. Differentiating V w.r.t. time we get
V̇ = F′(x₁)(φ(x₁) − ζ(x₁) + x̄₂) + x̄₂(u + ζ′(x₁)(φ(x₁) − ζ(x₁) + x̄₂))
  = −U(x₁) + x̄₂(F′(x₁) + u + ζ′(x₁)(φ(x₁) − ζ(x₁)) + ζ′(x₁)x̄₂)

We can reduce the complexity of the second term by selecting F such that the x₁
terms inside the brackets cancel each other. This is achieved by

F′(x₁) = −ζ′(x₁)(φ(x₁) − ζ(x₁)),  F(0) = 0

Inserting this into (5.15) yields

U(x₁) = ζ′(x₁)(φ(x₁) − ζ(x₁))²

For U(x₁) to be positive definite, ζ must satisfy

ζ′(x₁) > 0,  x₁ ≠ 0   (5.16)
This leaves V̇ = −U(x₁) + x̄₂(u + ζ′(x₁)x̄₂). The control law

u = −k(x₂ + ζ(x₁)) = −k x̄₂   (5.17)

[Figure 5.2: The nonlinear system (5.10) can be globally stabilized through a cascaded control structure: an outer loop producing x₂des = −ζ(x₁) and an inner loop realizing u.]
where

k > max over x₁ of ζ′(x₁)   (5.18)

renders

V̇ = −U(x₁) − (k − ζ′(x₁)) x̄₂²

negative definite, thus making the origin of (5.10) GAS. Note the cascaded structure
of the control law, as illustrated in Figure 5.2. This way of viewing the control law
motivates the condition (5.18): the inner control loop must have a higher feedback
gain than the outer loop.
Before we conclude, let us investigate which system nonlinearities φ can be
handled using the control law (5.17). If ζ′(x₁) is upper bounded by k as in (5.18),
then the growth rate of ζ must also be bounded in the sense that

ζ(x₁)/x₁ ≤ max over x₁ of ζ′(x₁) < k

and hence ζ must be confined to the sectors shown in Figure 5.3(b). Dividing
(5.13) by x₁² and inserting this inequality yields

φ(x₁)/x₁ < ζ(x₁)/x₁ < k

Thus, for the control law (5.17) to be applicable, the growth of the system nonlinearity φ must be linearly upper bounded, as depicted in Figure 5.3(a). Conversely, given an upper bound on φ(x₁)/x₁, we can always find a ζ and a k such
that (5.17) is globally stabilizing.
Let us summarize our findings as a proposition.
Proposition 5.1
Consider the system

    ẋ1 = φ(x1) + x2        (5.19a)
    ẋ2 = u        (5.19b)
[Figure 5.3: (a) the sector to which the system nonlinearity φ must be confined; (b) the corresponding sector for ψ.]
Let ψ ∈ C¹ with ψ(0) = 0, and let V be the clf

    V(x) = F(x1) + ½ (x2 + ψ(x1))²        (5.20)

with F given by F′(x1) = −ψ′(x1)(φ(x1) − ψ(x1)), F(0) = 0. Then the control law

    u = −k (x2 + ψ(x1))        (5.21)

renders the origin globally asymptotically stable, provided that

    (φ(x1) − ψ(x1)) x1 < 0,   x1 ≠ 0        (5.22)

and

    0 < ψ′(x1) < k        (5.23)
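A minimal numerical sanity check of Proposition 5.1, with the illustrative choices φ(x1) = sin(x1) and ψ(x1) = 2x1 (so (φ(x1) − ψ(x1))x1 < 0 and ψ′ = 2 < k = 3); none of these numbers come from the thesis:

```python
import math

# Closed loop of Proposition 5.1:
#   x1' = phi(x1) + x2,  x2' = u,  u = -k*(x2 + psi(x1))

def phi(x1):
    return math.sin(x1)          # growth rate bounded: phi(x1)/x1 <= 1

def psi(x1):
    return 2.0 * x1              # psi'(x1) = 2, so (phi(x1) - psi(x1))*x1 < 0

def simulate(x1, x2, k=3.0, dt=1e-3, t_end=20.0):
    """Forward-Euler simulation of the closed loop."""
    for _ in range(int(t_end / dt)):
        u = -k * (x2 + psi(x1))
        x1, x2 = x1 + dt * (phi(x1) + x2), x2 + dt * u
    return x1, x2

x1, x2 = simulate(2.0, -1.0)
print(abs(x1) < 1e-2 and abs(x2) < 1e-2)
```

The simulated trajectory converges to the origin, in line with the proposition.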
Inverse optimality
Proposition 5.1 gives us a family of control laws which all globally stabilize the
system of interest. Before discussing which choices of ψ might be of interest, let
us further examine the properties of the derived control law. Using the tools of
Chapter 4 we will show that (5.21) is actually optimal with respect to a meaningful
cost functional, provided that k is chosen properly.
The control input enters the system (5.10) affinely and thus the tools of Chapter
4 can be used. However, we will use the transformed system (5.14) to compute the
cost functional that is minimized since the expressions become simpler. Comparing
(5.14) with (4.2), we have that
    f(x) = ( φ(x1) − ψ(x1) + x̃2,  ψ′(x1)(φ(x1) − ψ(x1) + x̃2) )ᵀ,   g(x) = ( 0, 1 )ᵀ

We also have that

    Vx = ( −ψ′(x1)(φ(x1) − ψ(x1)),  x̃2 ),   R(x) = 1/(2k)
Apparently, to make q(x) positive definite, which is required for the cost functional
to be meaningful, k should be chosen such that
    k > 2 max_{x1} ψ′(x1)
Note that this lower limit for inverse optimality is twice the limit in (5.23) regarding
global stability. This is natural considering the 50% gain reduction robustness of
all optimal controllers, cf. Chapter 4.
Proposition 5.2
The control law (5.21) is optimal w.r.t. a meaningful cost functional for
    k > 2 max_{x1} ψ′(x1)
Corollary 5.1 (Linear control)
Consider the system (5.19). A globally stabilizing control law is given by

    u = −k2 (x2 + k1 x1)

where

    k1 > max{0, γ},   γ = max_{x1 ≠ 0} φ(x1)/x1

and k2 > k1, provided that γ exists. In addition, for k2 > 2k1 the control law minimizes a meaningful cost functional.
Note how little information about the system nonlinearity φ this control law depends on. Only an upper bound, γ, on its growth rate is needed. In particular, if φ is known to lie in the second and fourth quadrants only, thus intuitively being useful for stabilizing x1, we do not need any further information, since then γ < 0 and the parameter restriction k1 > 0 becomes active.
Corollary 5.2 (Linearizing control)
Consider the system (5.19). A globally stabilizing control, partially linearizing the
dynamics, is given by
    u = −k2 (x2 + k1 x1 + φ(x1))        (5.24)

where

    k1 > max{0, −min_{x1} φ′(x1)}

and

    k2 > k1 + max_{x1} φ′(x1)
provided that such upper and lower bounds on φ′ exist. In addition, for k2 > 2(k1 + max_{x1} φ′(x1)) the control law minimizes a meaningful cost functional given by

    ∫₀^∞ ( (k1 + φ′(x1)) k1² x1² + (k2/2 − k1 − φ′(x1)) (x2 + k1 x1 + φ(x1))² + (1/(2k2)) u² ) dt
Proof  Selecting ψ(x1) = k1 x1 + φ(x1) and k = k2 yields u = −k2 (x2 + k1 x1 + φ(x1)) and the conditions

    (5.22):  (φ(x1) − k1 x1 − φ(x1)) x1 = −k1 x1² < 0,   x1 ≠ 0   ⟺   k1 > 0
    (5.23):  0 < k1 + φ′(x1) < k2,   x1 ≠ 0

both of which are satisfied under the assumptions of the corollary.
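The linearizing law of Corollary 5.2 can likewise be checked numerically. With the illustrative choice φ(x1) = sin(x1), φ′ ranges over [−1, 1], so the corollary asks for k1 > 1 and k2 > k1 + 1 (all numbers below are demonstration values, not from the thesis):

```python
import math

# Closed loop of Corollary 5.2: x1' = phi(x1) + x2, x2' = u,
# u = -k2*(x2 + k1*x1 + phi(x1)).
# Note x1' = -k1*x1 + (x2 + k1*x1 + phi(x1)): the x1 dynamics become
# linear once the bracketed residual converges to zero.

def phi(x1):
    return math.sin(x1)

def simulate(x1, x2, k1=2.0, k2=4.0, dt=1e-3, t_end=20.0):
    for _ in range(int(t_end / dt)):
        u = -k2 * (x2 + k1 * x1 + phi(x1))
        x1, x2 = x1 + dt * (phi(x1) + x2), x2 + dt * u
    return x1, x2

x1, x2 = simulate(1.5, 0.5)
print(abs(x1) < 1e-3 and abs(x2) < 1e-3)
```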
5.1.3 Application to flight control
Let us now return to the flight control context and apply the control laws derived
in the previous section to the aircraft dynamics in (5.3). The boxed control laws
below are the ones that will be implemented and evaluated in Chapter 7.
Angle of attack control
Let us first apply the linear control law in Corollary 5.1. Using Tables 5.1 and 5.2
for translation from x to the proper aircraft variables yields
    u2 = −kα,2 (qs + kα,1 (α − αref) + fα(αref, yα))        (5.25)
(5.25)
[Figure 5.4 block diagram: αref passes through a prefilter computing fα(αref, yα); the feedback loop with gains (kα,1, kα,2) acts on α and qs of the aircraft dynamics and outputs u2.]

Figure 5.4 The backstepping control law (5.25) moves the dependence on CL outside the control loop, thereby enhancing the robustness.
with fα from (5.7). The control law is illustrated in Figure 5.4. Although implementing this control law requires knowledge of the lift force, and thereby the lift force coefficient, CL, we note that the lift force dependent computation is performed in the prefilter, outside the feedback loop. Therefore, imperfect knowledge of CL does not jeopardize closed loop stability but only shifts the equilibrium.
The parameters should satisfy
    kα,1 > γ,   kα,2 > 2 kα,1
for the control law to be globally stabilizing and also minimize a meaningful cost
functional. Here,

    γ = max_{x1 ≠ 0} φ(x1)/x1 = max_{α, αref, yα} (fα(α, yα) − fα(αref, yα))/(α − αref)

The maximum occurs when αref is chosen to be the point where fα has the highest positive slope and α is selected infinitely close to αref. Then, the fraction above turns into the derivative w.r.t. α, i.e.,

    γ = max_{α, yα} ∂fα(α, yα)/∂α = max_{α, β, VT, h} −(1/(mVT cos β)) dL(α)/dα
      = max_{α, β, VT, h} −(ρ(h) VT S)/(2m cos β) · (dCL/dα)(α)

where

    L(α) = ½ ρ(h) VT² S CL(α)
Since dCL/dα is negative in the post-stall region, see Figure 5.1, γ will be positive. This means that the higher the speed VT, and the larger the sideslip β, that one wants the control law to handle, the higher γ becomes, and the higher the control law parameters kα,1 and kα,2 must be chosen.
To solve this, we impose that yα ∈ Yα, where Yα ⊂ R⁶ is selected to represent the flight envelope of interest. For instance, a practically valid assumption may be that the sideslip is always less than 10 degrees. The final expression for γ then becomes

    γ = max_{α, yα ∈ Yα} ∂fα(α, yα)/∂α
Let us now instead apply the linearizing control law in Corollary 5.2. Using
Table 5.2 we get
    u2 = −kα,2 (qs + kα,1 (α − αref) + fα(α, yα))        (5.26)
(5.26)
The only difference to (5.25) is that fα now takes α as its first argument rather than αref. This causes the feedback loop to depend on CL, and robustness against model errors in CL becomes more difficult to analyze.
Somewhat surprisingly, the control law in (5.26) is identical¹ to the one in (2.15),
which was derived using dynamic inversion and time-scale separation arguments.
Using our Lyapunov based backstepping approach, we have thus shown this control
law to be not only globally stabilizing, but also inverse optimal w.r.t. a meaningful
cost functional, according to Corollary 5.2.
Sideslip regulation
Applying the linear control law in Corollary 5.1, using Table 5.2 along with (5.8)
and (5.11) yields
    u3 = −kβ,2 (rs + kβ,1 β + (1/VT) g cos θ sin φ)        (5.27)

The parameters should satisfy

    kβ,1 > γ,   kβ,2 > 2 kβ,1

to ensure the control law to be globally stabilizing and to be optimal w.r.t. a meaningful cost functional. Here,

    γ = max_{x1 ≠ 0} φ(x1)/x1 = max_{β, yβ} (fβ(β, yβ) − fβ(0, yβ))/β

The first two terms of fβ in (5.8) both give negative contributions to γ. Since these typically are superior to the gravity contribution, γ < 0 holds and the parameter restrictions above simply reduce to

    kβ,2 > 2 kβ,1 > 0
Thus, precise knowledge of the side force is not necessary to implement the globally
stabilizing control law (5.27).
¹The slight difference is due to the fact that (2.15) was derived using somewhat simplified dynamics.

Roll rate control

Since the roll rate dynamics (5.3a) are linear, proportional feedback suffices:

    u1 = kps (psref − ps)        (5.28)
5.1.4 Practical issues
Let us now turn to some practically relevant issues regarding the application of the
derived flight control laws.
Tuning
How should the control law parameters kps, kα,1, kα,2, kβ,1, and kβ,2 be selected?
For roll control, kps can be chosen to satisfy a given requirement on the roll time
constant, which becomes 1/kps .
For α control, the closed loop system will not be linear since the nonlinear lift force L is not cancelled by the control law (5.25). However, since the control law is linear in α and qs, it is tempting to still use linear techniques. A natural procedure is to linearize the angle of attack dynamics (5.3b)-(5.3c) around a suitable operating point and then select kα,1 and kα,2 to achieve some desired linear closed loop behavior locally around the operating point.

For β regulation, the situation is the same. Here, kβ,1 and kβ,2, determining the control law (5.27), can be selected by choosing some desired closed loop behavior using a linearization of the sideslip dynamics (5.3d)-(5.3e).
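The gain selection procedure can be sketched on the generic second-order system ẋ1 = φ(x1) + x2, ẋ2 = u with the linear law u = −k2(x2 + k1 x1). The operating-point slope a and the target damping and bandwidth below are illustrative numbers, not thesis values:

```python
import cmath

# Linearizing around the origin with phi'(0) = a gives the local closed loop
#   [x1'; x2'] = [[a, 1], [-k2*k1, -k2]] [x1; x2]
# with characteristic polynomial s^2 + (k2 - a)*s + k2*(k1 - a).
# Match it to a desired s^2 + 2*zeta*wn*s + wn^2:

def select_gains(a, zeta, wn):
    k2 = a + 2.0 * zeta * wn
    k1 = a + wn**2 / k2
    return k1, k2

a = 0.8                        # slope of the nonlinearity at the operating point
k1, k2 = select_gains(a, zeta=0.7, wn=2.0)

# verify that the resulting local poles are stable
b, c = k2 - a, k2 * (k1 - a)
disc = cmath.sqrt(b * b - 4 * c)
poles = [(-b + disc) / 2, (-b - disc) / 2]
print(all(p.real < 0 for p in poles))
```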
Saturation
The control laws (5.25), (5.27), and (5.28) all can handle a certain amount of
gain reduction and still remain stabilizing. Thus, even in cases where actuator
saturation makes the moment equation (5.5) infeasible, the control laws remain
stabilizing within certain bounds. The maximum moment that can be produced
depends on the aircraft state, see Figure 2.6. This makes it difficult to determine the exact part of the state space (in terms of α, β, etc.) where stability is guaranteed, and we will not pursue this issue further.
5.2 Flight Path Angle Control
We now turn to flight path angle control. This is not a standard autopilot function, but may still be of interest, e.g., for controlling the ascent or descent of an
unmanned aerial vehicle.
5.2.1 Control objective
The objective is for the flight path angle, γ, depicted in Figure 2.7, to follow a given command, γref. In other words, we want to make

    γ = γref

a globally asymptotically stable equilibrium.
We consider only the longitudinal motion of the aircraft, assuming that the
roll and sideslip angles are zero. Again, speed control is assumed to be handled
separately. The relevant dynamics are then given by (2.9b)-(2.9d). Using the definition of the flight path angle, γ = θ − α, yields

    γ̇ = (1/(mVT)) (L + FT sin α − mg cos γ)
    θ̇ = q        (5.29)
    q̇ = (1/Iy) (M + FT ZTP)

Defining the virtual control input

    u = (1/Iy) (M + FT ZTP)        (5.30)

the dynamics become

    γ̇ = (1/(mVT)) (L + FT sin α − mg cos γ)        (5.31a)
    θ̇ = q        (5.31b)
    q̇ = u        (5.31c)

The control law will be derived considering u as the input. The relationship to the true control input, δ, affecting the pitching moment, M, is given by

    M(δ) = Iy u − FT ZTP        (5.32)

In Chapter 7 we discuss how to solve for δ given u.
5.2.2 Control design

Introduce the state variables

    x1 = γ − γref,   x2 = θ − γref − α0,   x3 = q        (5.33)

and the function

    φ(ξ) = (1/(mVT)) (L(α0 + ξ) + FT sin(α0 + ξ) − mg cos γref)

where α0 is the angle of attack at steady state, solving γ̇ = φ(0) = 0. Since x2 − x1 = θ − γ − α0 = α − α0, this gives us the dynamics

    ẋ1 = φ(x2 − x1)        (5.34a)
    ẋ2 = x3        (5.34b)
    ẋ3 = u        (5.34c)

The only assumption made on φ is that

    x φ(x) > 0,   x ≠ 0        (5.35)
In the forthcoming design, we will show that this scarce information suffices to
construct a globally stabilizing control law.
The backstepping design
The key idea of the forthcoming design is the following. For x2 = 0, we get ẋ1 = φ(−x1), which acts stabilizing since φ(−x1) is φ(x1) mirrored about the y-axis, and thus lies in the second and fourth quadrants. Using backstepping, we will show how to utilize this inherent stability property.
Below, ki are constants which parameterize the control law, while ci are dummy
constants whose values will be assigned during the derivation to simplify various
expressions.
Step 1: As usual, start by considering only the x1 -subsystem (5.34a). To find a
globally stabilizing virtual control law, we use the clf

    V1 = ½ x1²        (5.36)

Its time derivative along (5.34a), with x2 as the control, is V̇1 = x1 φ(x2 − x1). The virtual control law

    x2des = −k1 x1

gives V̇1 = x1 φ(−(1 + k1) x1), which by (5.35) is negative definite for

    k1 > −1        (5.37)
The fact that k1 = 0 is a valid choice means that x1 feedback is not necessary
for the sake of stabilization. However, it provides an extra degree of freedom for
tuning the closed loop performance.
Step 2: Since we cannot control x2 directly, we continue by introducing the deviation from the virtual control law,

    x̃2 = x2 − x2des = x2 + k1 x1

Including the x2 dynamics (5.34b) we get

    ẋ1 = φ(ξ)
    ẋ̃2 = x3 + k1 φ(ξ)

where

    ξ = −(1 + k1) x1 + x̃2        (5.38)

As the clf we use

    V2 = (c1/2) x1² + ½ x̃2² + F(ξ),   c1 > 0

We compute its time derivative to find a new virtual control law, x3des.

    V̇2 = c1 x1 φ(ξ) + x̃2 (x3 + k1 φ(ξ)) + F′(ξ)(−φ(ξ) + x3)
       = (c1 x1 + k1 x̃2 − F′(ξ)) φ(ξ) + (x̃2 + F′(ξ)) x3des + (x̃2 + F′(ξ))(x3 − x3des)

Although it may not be transparent, we can again find a stabilizing function independent of φ. Choosing

    x3des = −k2 x̃2,   k2 > 0        (5.39)
    F′(ξ) = c2 φ(ξ),   F(0) = 0,   c2 > 0        (5.40)

yields

    V̇2 = (c1 x1 + (k1 − k2 c2) x̃2) φ(ξ) − c2 φ²(ξ) − k2 x̃2² + (x̃2 + c2 φ(ξ))(x3 − x3des)

To make the first term negative definite using (5.35), we select c1 to make the factor in front of φ(ξ) proportional to ξ, see (5.38). This is achieved by

    c1 = −(1 + k1)(k1 − k2 c2),   k2 c2 > k1        (5.41)

which gives

    V̇2 = (k1 − k2 c2) ξ φ(ξ) − c2 φ²(ξ) − k2 x̃2² + (x̃2 + c2 φ(ξ))(x3 − x3des)        (5.42)
Introducing x̃3 = x3 − x3des = x3 + k2 x̃2, we furthermore have

    ξ̇ = −φ(ξ) + x3 = −φ(ξ) + x̃3 − k2 x̃2
Step 3: V3 is constructed by adding a term penalizing x̃3 to V2:

    V3 = c3 V2 + ½ x̃3²,   c3 > 0
We get

    V̇3 = c3 ((k1 − k2 c2) ξ φ(ξ) − c2 φ²(ξ) − k2 x̃2²)  [negative definite by (5.35) and (5.41)]
          + x̃3 (c3 (x̃2 + c2 φ(ξ)) + u + k2 (x̃3 − k2 x̃2 + k1 φ(ξ)))
       ≤ −c2 c3 φ²(ξ) − k2 c3 x̃2² + x̃3 (u + k2 x̃3 + (c3 − k2²) x̃2 + (k1 k2 + c2 c3) φ(ξ))

using (5.35) once again. Select c3 = k2² to cancel the x̃2 x̃3 cross-term and try yet another linear control law,

    u = −k3 x̃3,   k3 > k2        (5.43)
is a natural candidate and with this we investigate the resulting clf time derivative.

    V̇3 ≤ −k2² c2 φ²(ξ) − k2³ x̃2² − (k3 − k2) x̃3² + (k1 k2 + k2² c2) x̃3 φ(ξ)

In order to investigate the impact of the last cross-term, we complete the squares.

    V̇3 ≤ −k2³ x̃2² − (k3 − k2) ( x̃3 − (k1 k2 + k2² c2)/(2(k3 − k2)) φ(ξ) )²
          − ( k2² c2 − (k1 k2 + k2² c2)²/(4(k3 − k2)) ) φ²(ξ)
V̇3 is negative definite provided that the φ²(ξ) coefficient is negative, which is true for

    k3 > k2 (1 + (k1 + k2 c2)²/(4 k2 c2))        (5.44)

We now pick c2 to minimize this lower limit under the constraints c2 > 0 and k2 c2 > k1.

For k1 ≤ 0, we can make (k1 + k2 c2)² arbitrarily small, whereby Equation (5.44) reduces to

    k3 > k2

i.e., the same restriction as in (5.43). For k1 > 0 the optimal strategy can be shown to be selecting c2 arbitrarily close to the bound k1/k2. This yields

    k3 > k2 (1 + k1)        (5.45)
Let us summarize this lengthy control law derivation, which resulted in a globally stabilizing control law (5.43) for the system (5.34), under the parameter restrictions in (5.37), (5.39), (5.43), and (5.45).
Proposition 5.3
Consider the system

    ẋ1 = φ(x2 − x1)
    ẋ2 = x3        (5.46)
    ẋ3 = u

where x φ(x) > 0, x ≠ 0. The control law

    u = −k3 (x3 + k2 (x2 + k1 x1))        (5.47)

renders the origin globally asymptotically stable, provided that

    k1 > −1,   k2 > 0,   k3 > k2 for k1 ≤ 0,   k3 > k2 (1 + k1) for k1 > 0        (5.48)
The cascaded structure of the control law is illustrated in Figure 5.5.
[Figure 5.5 block diagram: cascaded loops with gains k1, k2, and k3 producing x2des, x3des, and u around the chain ẋ3 = u, ẋ2 = x3, ẋ1 = φ(x2 − x1).]

Figure 5.5 The nonlinear system (5.46) can be globally stabilized through a cascaded control structure.
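As a numerical illustration of Proposition 5.3, the closed loop below uses an illustrative nonlinearity φ(x) = x + 0.5x³ (so x φ(x) > 0) and demonstration gains satisfying (5.48); none of these values are taken from the thesis:

```python
# Flight-path backstepping law u = -k3*(x3 + k2*(x2 + k1*x1)) applied to
#   x1' = phi(x2 - x1),  x2' = x3,  x3' = u

def phi(x):
    return x + 0.5 * x**3       # lies in the first and third quadrants

k1, k2 = 0.5, 1.0
k3 = 3.0                        # k3 > k2*(1 + k1) = 1.5 since k1 > 0

def simulate(x1, x2, x3, dt=1e-3, t_end=30.0):
    for _ in range(int(t_end / dt)):
        u = -k3 * (x3 + k2 * (x2 + k1 * x1))
        x1, x2, x3 = (x1 + dt * phi(x2 - x1),
                      x2 + dt * x3,
                      x3 + dt * u)
    return x1, x2, x3

state = simulate(0.5, -0.5, 0.0)
print(all(abs(v) < 1e-2 for v in state))
```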
Robustness
For this design we refrain from investigating inverse optimality due to the complicated expressions involved. Regardless of this, it is clear that the control law
(5.47) gives a certain gain margin. E.g., in the case of k1 > 0, we can afford a gain
reduction of
    κ > k2 (1 + k1)/k3        (5.49)
at the input (see Figure 4.1) without violating k3 > k2 (1 + k1 ) from (5.48).
Backstepping vs. feedback linearization
It is rewarding to compare the preceding backstepping design with a control design
based on feedback linearization. Such a design makes the open loop system a chain
of integrators by defining new coordinates according to

    ẋ1 = φ(x2 − x1) = x̄2
    ẋ̄2 = φ′(x2 − x1)(x3 − x̄2) = x̄3
    ẋ̄3 = φ″(x2 − x1)(x3 − x̄2)² + φ′(x2 − x1)(u − x̄3) = ū

We can now select

    ū = −k1 x1 − k2 x̄2 − k3 x̄3

corresponding to the actual control input

    u = x̄3 + (ū − φ″(x2 − x1)(x3 − x̄2)²)/φ′(x2 − x1)        (5.50)
Two things are worth noting about this expression. Firstly, it depends not only on φ (through x̄2), but also on its first and second derivatives, which therefore must be known. In the aircraft control case, this corresponds to very accurate knowledge of the lift force, see (5.33). Since L(α) in practice comes with a certain model error, especially at high angles of attack, the estimates of L′(α) and L″(α) may be poor. This means that the nonlinear system behavior cannot be cancelled
completely. Unfortunately, it is difficult to analyze the robustness of (5.50), i.e.,
how incomplete cancellation of the nonlinearities affects the controlled system.
Secondly, φ′ appears in the denominator of (5.50), implying that the control law has a singularity where φ′ = 0. In the aircraft case, this occurs around the stall angle, where the lift force no longer increases with α, see Figure 5.1. Thus, global stability cannot be achieved using feedback linearization.

Our backstepping design did not suffer from any of the problems above, since all we required from φ was for x φ(x) to be positive definite, see (5.35).
5.2.3 Application to flight control

We now return to the flight control context. Expressing (5.47) in the original coordinates using (5.33) gives us

    u = −k3 (q + k2 (α + k1 (γ − γref) + γ − γref − α0))        (5.51)

This control law is globally stabilizing provided that k1, k2, and k3 satisfy (5.48). Recall that α0 is the angle of attack at steady state, solving γ̇ = 0 in (5.31a).
5.2.4 Practical issues
We now investigate some practical issues regarding the application of this control
law.
Tuning
For selecting the controller parameters, k1 , k2 , and k3 , we can use the same strategy
as for tuning the general maneuvering control laws, cf. Section 5.1.4. This means
linearizing the dynamics (5.31) around a suitable operating point and selecting the
controller parameters to achieve some desired linear closed loop behavior locally
around this operating point.
Saturation
As shown in Section 5.2.2, the control law (5.51) remains stabilizing in the presence of a certain gain reduction, given by (5.49) in the case k1 > 0. Thus, even
for a certain amount of control surface saturation, such that the desired pitching
moment in (5.32) cannot be produced, the closed loop system is stable. As noted
in Section 5.1.4, the maximum moment, M, depends not only on the control surface deflections, δ, but also on the angle of attack, see Figure 2.6. Again, this makes it difficult to determine the part of the state space within which stability is guaranteed.
6 Adapting to Input Nonlinearities and Uncertainties
The flight control laws developed in Chapter 5 consider the angular accelerations
as the control input, u. To find the corresponding control surface deflections, δ, the mapping from δ to u must be completely known. Typically this is not the case in
practice. For example, the aerodynamic moment coefficients suffer from inevitable
model errors, and the moment contributions from, e.g., the engine thrust, may
not be measurable. Hence, it is necessary to add integral action in some form to
the control laws to reach the desired equilibrium despite such model imperfections.
This is the topic of this chapter.
In Section 6.1, we further illustrate the problem, and in Section 6.2, we present
its mathematical formulation. Two solutions, based on adaptive backstepping and
nonlinear observer techniques, respectively, are proposed in Sections 6.3 and 6.4.
The two solutions are evaluated in Section 6.5 using a water tank example, and in
Section 6.6 the adaptive schemes to be used for flight control are explicitly stated.
6.1 Background
Many of today's constructive nonlinear control design methods assume the control
input to enter the system dynamics affinely, i.e., for the model to be of the form
    ẋ = f(x) + g(x)u
In many practical cases this is not true. A common solution, see, e.g., [37, 47], is
to find some other entity, a virtual control input v, that does enter the dynamics
linearly, and that depends on the true control input u through a static mapping.
Using, e.g., backstepping or feedback linearization, a globally stabilizing control
law v = k(x) can then be derived. These virtual control inputs are often physical
entities like forces, torques, or flows, while the true input might be the deflection of
a control surface in a flight control case or the throttle setting in an engine control
case.
The remaining problem, how to find which actual control input u to apply, is
often very briefly discussed, typically assuming that the mapping from u to v is
completely known and invertible. In this chapter we investigate the case where the
mapping is only partially known. It might be that the true mapping is too complex
to identify, or that other sources than u contribute to v. Friction might for example
reduce the net torque in a robot control case. Here, we will pragmatically model
the discrepancy between the model and the true mapping as a constant bias. We
propose two different ways of adapting to the bias, and for each case, the issue of
closed loop stability is investigated.
6.2 Problem Formulation
Assume that the system

    ẋ = f(x) + Bv        (6.1)

with

    B = ( 0 · · · 0 1 )ᵀ        (6.2)

is such that only the last state, xn, is directly affected by the control input, through

    ẋn = fn(x) + g(x, u)

Assume that the mapping, g(x, u), from the true control input, u, to the virtual control input, v, is not completely known but only a model, ĝ(x, u), such that

    g(x, u) = ĝ(x, u) + e

The model error, e = g(x, u) − ĝ(x, u), is modeled as a constant. This pragmatic assumption may be more or less realistic but allows us to correct for biases and reach the correct equilibrium at steady state. With this we can rewrite (6.1) as

    ẋ = f(x) + B(w + e)        (6.3a)
    w = ĝ(x, u)        (6.3b)

w is the part of the virtual control input, v, that we are truly in control of.
[Figure 6.1 block diagram: the control law k(x) minus the estimate ê gives w; the unknown bias e is added at the input of ẋ = f(x) + Bv; an estimator produces ê.]
Assume that a globally stabilizing control law

    v = k(x)        (6.4)

and a clf V(x) satisfying

    V̇ = Vx(x)(f(x) + Bk(x)) = −W(x)        (6.5)

with W(x) positive definite, have been found for the nominal system. To realize v = k(x) for the true system, we would need to solve

    ĝ(x, u) = k(x) − e        (6.6)

for u. How do we deal with the fact that e is not available? A straightforward solution is to rely on one of the corner stones of adaptive control and use the certainty equivalence [46] of (6.6). This means that we replace the unknown parameter e by an estimate ê and form

    w = ĝ(x, u) = k(x) − ê        (6.7)
Figure 6.1 illustrates the approach. The strategy is intuitively appealing but leads
to two important questions:
How do we estimate e?
Can we retain global stability using ê for feedback?
Two approaches to the problem will be pursued. In Section 6.3, we will use
standard adaptive backstepping techniques to find an estimator that will guarantee
closed loop stability without having to adjust the control law (6.7). In Section 6.4,
the starting point is that a converging estimator is given. The question then is
how to adjust the control law to retain stability. This approach is due to the
author, but was inspired by the observer backstepping techniques introduced by
Kanellakopoulos et al. [39].
6.3 Adaptive Backstepping
Adaptive backstepping [46] deals with the unknown parameter e by extending the Lyapunov function V(x) with a term penalizing the estimation error ẽ = e − ê:

    Va(x, ẽ) = V(x) + (1/(2γ)) ẽ²,   γ > 0

Differentiating w.r.t. time, using the control law (6.7) and an update law of the form ê̇ = γ τ(x, ê), we get

    V̇a = Vx(x)(f(x) + B(k(x) + ẽ)) − (1/γ) ẽ ê̇ = −W(x) + ẽ (∂V(x)/∂xn − τ(x, ê))        (6.8)

The first term is negative definite according to the assumptions, while the second, mixed term is indefinite. Since ẽ is not available, the best we can do is to cancel the second term by selecting

    τ(x, ê) = τ(x) = ∂V(x)/∂xn        (6.9)

The resulting closed loop system becomes

    ẋ = f(x) + B(k(x) + ẽ)        (6.10)
    ê̇ = γ ∂V(x)/∂xn

which satisfies

    V̇a(x, ẽ) = −W(x)
Integrating the update law (6.9) gives

    ê(t) = γ ∫₀ᵗ (∂V/∂xn)(x(s)) ds

If V is quadratic in xn, the integrand is proportional to xn. In this case, estimating e and using the estimate for feedback corresponds to adding integral action from xn.
6.4 Observer Based Adaption

6.4.1 Estimator design

Since e enters the dynamics (6.3a) at the same point as w, a natural alternative is to estimate e using a nonlinear observer,

    x̂̇n = fn(x) + w + ê + k1 (xn − x̂n)
    ê̇ = k2 (xn − x̂n)        (6.11)

Introducing the estimation error ε = (xn − x̂n, e − ê)ᵀ, the error dynamics become

    ε̇ = A ε,   A = ( −k1  1
                     −k2  0 )        (6.12)

so that ε evolves independently of u and x. If A is Hurwitz¹, ε → 0 and there exists a positive definite matrix P satisfying

    AᵀP + PA = −Q,   Q = qI,   q > 0

according to basic linear systems theory, see Rugh [62]. A is Hurwitz if and only if

    k1 > 0,   k2 > 0
To investigate the closed loop stability, we combine the original Lyapunov function V(x) with εᵀPε and form

    Vo(x, ε) = V(x) + εᵀPε

We also augment the control law (6.7) with an extra term, l, to be decided, to compensate for using ê for feedback.

    w = k(x) + l(x, ê) − ê        (6.13)

This yields

    V̇o = Vx(x)(f(x) + B(k(x) + l(x, ê) + ẽ)) − εᵀQε
       ≤ −W(x) + (∂V(x)/∂xn)(l(x, ê) + ẽ) − q ẽ²

By choosing

    l(x, ê) = l(x) = −μ ∂V(x)/∂xn,   μ > 0        (6.14)

and completing the squares, we get

    V̇o ≤ −W(x) − μ (∂V(x)/∂xn − ẽ/(2μ))² + ẽ²/(4μ) − q ẽ²

To achieve GAS, we must satisfy q − 1/(4μ) > 0, which can always be done once μ in (6.14) has been selected, since q is at our disposal.
Let us summarize our discussion.

Proposition 6.1
Consider the system

    ẋi = fi(x),   i = 1, . . . , n − 1
    ẋn = fn(x) + w + e        (6.15)

where e is an unknown constant, and let k(x) and V(x) satisfy (6.4)-(6.5). Then the control law

    w = k(x) − μ ∂V(x)/∂xn − ê,   μ > 0

combined with the observer (6.11), where

    k1 > 0,   k2 > 0

renders x = 0, ê = e globally asymptotically stable.

¹A matrix is said to be Hurwitz if all its eigenvalues are in the open left half plane.
6.4.2 Optimal control laws

Let us consider the case where the original, unattainable control law (6.4) solves an optimal control problem of the form

    V(x) = min_v ∫₀^∞ (q(x) + r(x) v²) dt        (6.16)

in which case

    k(x) = −(1/(2r(x))) ∂V(x)/∂xn

We recall from Section 4.3 that a fundamental property of control laws minimizing a criterion like (6.16) is that they have a gain margin of [½, ∞). This inherent robustness means that we do not need to modify the certainty equivalence control law (6.7) to retain stability, since l(x) in Equation (6.14) is proportional to k(x). To show this we make the split

    k(x) = −(1/(2r(x))) ∂V(x)/∂xn = −(1/(2r(x)) − μ) ∂V(x)/∂xn + (−μ ∂V(x)/∂xn) = k̄(x) + l(x)
[Figure 6.2: schematic of the two tanks, with levels x1 (lower) and x2 (upper).]

Figure 6.2 Two tanks connected in series.
Now,

    k̄(x) = −(1/(2r(x)) − μ) ∂V(x)/∂xn = (1 − 2r(x)μ) k(x)

i.e., using k̄(x) instead of k(x) corresponds to a gain reduction of 1 − 2r(x)μ at the input. Due to the gain margin, stability is retained as long as

    1 − 2r(x)μ ≥ ½

Thus, if r(x) ≤ r0, the condition

    μ ≤ 1/(4 r0)

must hold, which does not contradict the only previous requirement from (6.14), that μ > 0.
An intuitive interpretation of this result is that some of the optimal control effort can be sacrificed in order to compensate for using the estimate ê for feedback.
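This gain margin argument can be checked on a scalar LQR problem (illustrative numbers throughout):

```python
import math

# Scalar plant x' = x + v, cost integral of (x^2 + v^2) dt.
# Riccati equation: 2P - P^2 + 1 = 0  =>  P = 1 + sqrt(2); optimal v = -P*x.
P = 1 + math.sqrt(2)

def closed_loop_pole(kappa):
    """Pole of x' = (1 - kappa*P) x when the optimal gain is scaled by kappa."""
    return 1.0 - kappa * P

# any gain reduction kappa >= 1/2 keeps the pole strictly negative
print(all(closed_loop_pole(k) < 0 for k in (0.5, 0.75, 1.0, 2.0)))
```

Below the margin (e.g. κ = 0.3) the pole becomes positive, so the ½ bound is tight for this example.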
6.5 A Water Tank Example
Let us apply the two strategies to a practical example to investigate their pros
and cons. Consider the two tanks in Figure 6.2. The control goal is to achieve a
certain water level r in the bottom tank. Using Bernoulli's equation and setting all constants to unity, the system dynamics become

    ẋ1 = −√x1 + √x2
    ẋ2 = −√x2 + v
where x1 = water level of the lower tank, x2 = water level of the upper tank, and
v = incoming water flow. v is produced by changing the aperture of the valve of
the input pipe.
We assume the dynamics of the valve to be very fast compared to the dynamics
of the tanks, so that the relationship between the commanded aperture radius, u,
and the water flow, v, can be regarded as static. Assuming some external water
supply to keep a constant pressure, v will be proportional to the aperture opening
area, which in turn depends on u². Again setting all constants to unity we would
have v = u2 . In order to be able to account for a possible model error in this static
relationship and for other sources contributing to the net inflow, e.g., leakage, we
assign the model
    v = u² + e
in accordance with (6.3a).
The first step is to find a globally stabilizing control law v = k(x). We do this
using an ad hoc Lyapunov approach. At the desired steady state, x1 = x2 = r.
Therefore consider the control Lyapunov function

    V(x) = ½ (x1 − r)² + (a/2) (x2 − r)²,   a > 0

which leads to the globally stabilizing control law

    v = k(x) = √r + (a(r − x1) + b(r − x2))/(√x2 + √r),   a > 0, b ≥ 0        (6.17)

Applying the adaptive backstepping scheme of Section 6.3, the estimator update law (6.9) becomes

    ê̇ = −γ (r − x2),   γ > 0        (6.18)
Table 6.1 Parameter values for the two adaptive controllers. In both cases, the control law k(x) uses a = 1 and b = 0.5.

    Adaptive backstepping:     γ = 0.3
    Observer based adaption:   k1 = 1,  k2 = 0.5,  μ = 0

With adaptive backstepping, the resulting control law reads

    w = u² = k(x) + γ ∫₀ᵗ (r − x2(s)) ds
Using the observer based approach, the estimator can be designed according to (6.11). For the actual implementation, we can rewrite this as

    d/dt ( x̂2 )   ( −k1  1 ) ( x̂2 )   ( 1 )                 ( k1 )
         ( ê  ) = ( −k2  0 ) ( ê  ) + ( 0 ) (−√x2 + w)  +   ( k2 ) x2

which can be implemented using, e.g., Simulink. The implicit control law (6.13) becomes

    w = u² = k(x) + μ (r − x2) − ê,   μ > 0
If b > 0 was selected in the control law (6.17), we do not have to add the term μ(r − x2) for the sake of stability, since it can be seen as a part of k(x) already. As in the optimal control case treated in Section 6.4.2, closed loop stability is then guaranteed using the original certainty equivalence control law (6.7) without any modification.
In the simulations, the parameter values for the two adaptive controllers were selected according to Table 6.1. The initial water level, which is also fed to the observer, is 1 in both tanks. The control goal is for x1 to reach the reference level r = 4 and to maintain this despite the leakage e = −3 starting at t = 25 s. Figure 6.3 shows the actual control input and the water level of the lower tank when no adaption is used. Figures 6.4 and 6.5 show the results of applying adaptive backstepping and observer based adaption, respectively.
There is a striking difference between the initial behaviors of the two leakage estimates. As pointed out in Section 6.4.1, the observer estimation error evolves independently of u and x. Since ε(0) = 0, the estimation error remains zero until the leakage starts. The adaptive backstepping estimate on the other hand depends on the integral of r − x2 over time, causing an oscillatory behavior due to the initial error of the upper tank water level.
Also, in the presence of actuator saturation, adaptive backstepping will suffer
from the windup problems that generally occur when using integral action in the
feedback loop. This is avoided with the observer based approach if the observer is
fed with the true, saturated value of the control input.
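The observer based scheme is easy to reproduce numerically. The sketch below uses the Table 6.1 observer gains; the feedback k(x) is a simple stabilizing choice made for illustration and is not claimed to be the control law (6.17):

```python
import math

# Two tanks: x1' = -sqrt(x1) + sqrt(x2), x2' = -sqrt(x2) + w + e,
# with an unknown leakage e starting at t = 25 s, estimated by observer (6.11).

r = 4.0                         # reference level for the lower tank
a, b = 1.0, 0.5                 # illustrative feedback gains
k1, k2 = 1.0, 0.5               # observer gains (Table 6.1)

def k_ctrl(x1, x2):
    return math.sqrt(x2) + a * (r - x1) + b * (r - x2)

x1 = x2 = 1.0                   # true water levels
x2_hat, e_hat = 1.0, 0.0        # observer state (fed the true initial level)
dt, t_leak, e_leak = 1e-3, 25.0, -3.0

t = 0.0
while t < 50.0:
    e = e_leak if t >= t_leak else 0.0
    w = max(0.0, k_ctrl(x1, x2) - e_hat)   # w = u^2 must be nonnegative
    x1 += dt * (-math.sqrt(x1) + math.sqrt(x2))
    x2 = max(0.0, x2 + dt * (-math.sqrt(x2) + w + e))
    # observer driven by the measured level x2 and the applied w
    x2_hat += dt * (-math.sqrt(x2) + w + e_hat + k1 * (x2 - x2_hat))
    e_hat += dt * k2 * (x2 - x2_hat)
    t += dt

print(abs(x1 - r) < 0.5 and abs(e_hat - e_leak) < 0.5)
```

Despite the leakage, x1 settles back at the reference and ê converges to the leakage, and feeding the saturated w to the observer avoids windup.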
[Figures 6.3-6.5: time histories over 0-50 s of the control input u, the lower tank level x1, and the leakage estimate ê; without adaption (Figure 6.3), with adaptive backstepping (Figure 6.4), and with observer based adaption (Figure 6.5).]
6.6 Adaptive Flight Control Laws
Considering the appealing properties of the observer based adaption, as demonstrated in the previous section, this will be our choice of adaptive scheme for flight
control. All of the state feedback laws to be used along with the observers, (5.25),
(5.27), (5.28), and (5.51), can afford a certain amount of gain reduction and still be
globally stabilizing, as shown in Chapter 5. As shown in Section 6.4.2, this means
that the certainty equivalence control law (6.7) is globally stabilizing.
Let us now state the resulting observers when Proposition 6.1 is applied to the
flight dynamics used for control design in Chapter 5.
6.6.1 General maneuvering
The relevant dynamics are given by (5.3). To handle unmodeled nonlinearities and uncertainties in the mapping (5.4) from δ to u, we redefine (5.3a), (5.3c), and (5.3e) as

    ṗs = u1 + e1
    q̇s = u2 + e2
    ṙs = u3 + e3        (6.19)

L1, L2, and L3 are 2 × 1 vectors whose entries must be positive for the estimates to converge.
6.6.2 Flight path angle control
7 Implementation and Simulation
The control designs of the preceding chapters were based on a number of pragmatic,
simplifying assumptions regarding the aircraft dynamics and the types of model
errors and uncertainties to be handled. Using computer simulations we will now
evaluate the control laws experimentally.
The aircraft models used for simulation are presented in Section 7.1. Some
implementation details are covered in Section 7.2, while Section 7.3 is devoted to
the actual computer simulations.
7.1 Aircraft Simulation Models
Due to the complexity of the dynamics and the military nature of the field, there
exist only a few available aircraft simulation models. Three of these are listed in
Table 7.1.
7.1.1 GAM/ADMIRE
The Generic Aerodata Model (GAM) contains aerodynamic data for a small fighter aircraft, not unlike JAS 39 Gripen [63]. The model was produced and made available by Saab AB, Linköping, Sweden. A model description is given by Backström [5], and the package can be downloaded from [64]. A disadvantage of the GAM is
Table 7.1 Available aircraft simulation models.

    Simulation model   Publicly available   Fighter aircraft dynamics
    GAM/ADMIRE         Yes                  Yes
    HIRM               Not in general       Yes
    FDC                Yes                  No
7.1.2 HIRM
The High Incidence Research Model (HIRM) was developed by DERA¹ of the United Kingdom. The model is based on aerodynamic data from wind tunnel tests and drop tests of a small-scale model. The HIRM was then derived by scaling up these data to create an aircraft of F-18 proportions, see Appendix 7.A. Aerodynamic data exist for a wide range of angles of attack and sideslip angles, −50° ≤ α ≤ 120° and −50° ≤ β ≤ 50°.

The HIRM was used as a benchmark fighter aircraft model in the robust flight control design challenge [53] initiated by GARTEUR². A technical description of the HIRM, which is implemented as a Simulink model, can be found in [56].

We will use the HIRM to evaluate the flight path angle control law developed in Section 5.2. DERA is gratefully acknowledged for granting permission to use the model for this purpose.
7.1.3 FDC
The Flight Dynamics and Control toolbox (FDC) by Rauw [59] is a Simulink
toolbox for general flight dynamics and control analysis. The toolbox comes with
aerodata from a small, non-military aircraft, but the modular structure of the
toolbox allows the user to plug in external aerodata of his or her choice. Although
promising, FDC has not been used for simulation since for the GAM and HIRM
aerodata, well functioning interfaces already exist.
¹Defence Evaluation and Research Agency.
²Group for Aeronautical Research and Technology in Europe.
[Figure 7.1: controller configuration. The references and the state enter the backstepping control laws k(x); the desired acceleration udes goes to the control allocation block, which outputs δ; the achieved acceleration u is fed to the bias estimator producing ê.]
7.2 Controller Implementation
The controller configuration is shown in Figure 7.1. The backstepping block contains the state feedback control laws derived in Chapter 5, while the bias estimator
block contains the nonlinear observers from Chapter 6, which are used to estimate
and adapt to unmodeled moments acting on the aircraft. The control allocation
block will be discussed in Section 7.2.1. Its function is to translate the desired
angular acceleration,

    udes = k(x) − ê

into actual control surface deflections, δ. The signal u, which is fed to the estimator,
is the actual angular acceleration, which differs from udes when udes is not feasible.
By feeding the actual value u to the estimator we avoid wind-up problems.
The observers used are compactly stated in Section 6.6. Let us for convenience
also gather the control laws from Chapter 5. For general maneuvering the control
laws are made up by (5.25), (5.27), and (5.28). Using k(x) to denote these control
laws, we get
    k(x) = ( k1(x)  k2(x)  k3(x) )ᵀ

where

    k1(x) = kps (psref − ps)
    k2(x) = −kα,2 (qs + kα,1 (α − αref) + fα(αref, yα))
    k3(x) = −kβ,2 (rs + kβ,1 β + (1/VT) g cos θ sin φ)

with fα from (5.7). The flight path angle control law (5.51) reads

    k(x) = −k3 (q + k2 (α + k1 (γ − γref) + γ − γref − α0))
7.2.1 Control allocation
The control designs in Chapters 5 and 6 considered the angular accelerations of the
aircraft as the control inputs, see (5.4) and (5.30). In (5.5) and (5.32) we solved
for the actual moments to be produced. In most nonlinear aircraft designs it is
assumed that the control surface deflections, δ, affect these moments linearly. In
practice this is not true, since the aerodynamic moments produced by the control
surfaces also suffer from stall effects similar to the ones that are well known for the lift force. This can be seen in Figure 2.6 where the pitching coefficient, Cm, tends to saturate for low and high values of the symmetrical elevon deflection, δes.
Here, we will take into account the nonlinear mapping from to the aerodynamic moments and propose numerical algorithms for finding the proper given
the moments to be produced. The implementations used are due to Press et al.
[58].
General maneuvering
The equation to be solved for δ is given by (5.5):

    M(δ) = I Rsb^T udes + ω × Iω                              (7.1)

Using the definition of M from Section 2.4, M(δ) can be translated into the desired aerodynamic moment coefficients,

    C^des = (Cl^des  Cm^des  Cn^des)^T

Introducing

    C(δ) = (Cl(δ)  Cm(δ)  Cn(δ))^T

the control allocation problem becomes that of solving

    C(δ) = C^des                                              (7.2)
For simplicity we will only make use of the elevons and the rudder and ignore the canard wings, see Figure 2.4. Thus, δ = (δes, δed, δr). To handle cases where these control surfaces saturate, and (7.2) cannot be satisfied, we reformulate the control allocation problem as an optimization problem:
    δ = arg min J(δ)

Flight path angle control

Here, the desired pitching moment coefficient becomes

    Cm^des = (1/(q̄ S c̄)) (Iy udes − FT zTP)
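A minimal sketch of one way to solve such a bounded allocation problem, using SciPy's `minimize` with box constraints on the deflections; the quadratic cost and the linear moment model in the check below are assumptions for illustration, not the thesis aerodata:

```python
import numpy as np
from scipy.optimize import minimize

def allocate(C_model, C_des, bounds, delta0):
    """Minimize ||C(delta) - C_des||^2 over the deflection vector delta,
    subject to the actuator limits in `bounds`.

    C_model maps a deflection vector to the moment coefficients
    (Cl, Cm, Cn); it stands in for the aerodata tables here.
    """
    cost = lambda d: float(np.sum((np.asarray(C_model(d)) - C_des) ** 2))
    # With bounds given, SciPy selects a bound-constrained method
    # (L-BFGS-B) automatically.
    result = minimize(cost, delta0, bounds=bounds)
    return result.x
```

When the demanded coefficients are achievable within the limits, the minimizer returns a δ with C(δ) close to C^des; when they are not, it returns the best feasible compromise, which is exactly the saturation handling motivated above.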
Again we ignore the canard wings for simplicity, and only make use of the symmetrical elevon deflection, δes. The control allocation problem then becomes solving

    Cm(δes) − Cm^des = 0

for δes. Unlike above, we will use this formulation directly for finding δes and not use an optimization framework. In this scalar case, actuator saturation can be handled separately.
The pitching coefficient, Cm, is usually a complicated function of the arguments involved. However, given measurements of α, q, etc., Cm becomes a close to monotone function of δes, see Figure 2.6. The only exception is when the aforementioned stall effects occur and the moment produced saturates. Given the aerodata, these regions can be manually removed by virtually saturating δes when Cm saturates.
Many numerical solvers are well adapted to finding the zero of a monotone scalar
function. For the implementation, the Van Wijngaarden-Dekker-Brent method
[8, 19] was chosen. This method is a happy marriage between bisection, which ensures convergence, and inverse quadratic interpolation, which provides superlinear
convergence in the best-case scenario.
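SciPy exposes the same Van Wijngaarden–Dekker–Brent method as `brentq`, so the scalar allocation can be sketched as follows; the `cm` argument stands in for the aerodata lookup:

```python
from scipy.optimize import brentq

def allocate_es(cm, cm_des, es_min, es_max):
    """Solve Cm(delta_es) = Cm_des for the symmetric elevon deflection.
    cm must be monotone on [es_min, es_max] (stall regions removed)."""
    f_lo, f_hi = cm(es_min) - cm_des, cm(es_max) - cm_des
    if f_lo * f_hi > 0:
        # Demand not achievable: saturate at the limit whose moment
        # comes closest to the demand (the separate handling mentioned
        # in the text).
        return es_min if abs(f_lo) < abs(f_hi) else es_max
    # Brent's method: bisection for guaranteed convergence, inverse
    # quadratic interpolation for superlinear speed when possible.
    return brentq(lambda es: cm(es) - cm_des, es_min, es_max)
```

The sign check on the bracket is what makes the scalar case easy: saturation is detected before the solver is even called.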
7.3 Simulation
We now turn to the simulations performed. Before showing the plots in Section 7.3.3, Section 7.3.1 gives an account of the conditions surrounding the simulations, and the controller parameters used are listed in Section 7.3.2.
7.3.1 Conditions
7.3.2 Controller parameters
The controller parameters used for the simulations are given in Tables 7.2 and 7.3. The state feedback parameters kps, kα,1, kα,2, kβ,1, and kβ,2 have been chosen according to the guidelines in Sections 5.1.4 and 5.2.4, to achieve suitable linear dynamics around the initial states of the flight cases presented below. The observer gains L1, L2, and L3 place the poles of the error dynamics (6.12) in −8 ± i, while Lγ gives the observer poles −2 ± i.
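The relation between the tabulated gains and the quoted pole locations can be checked directly, assuming the error dynamics take a companion form with characteristic polynomial s² + L(1)s + L(2) (an assumption about the structure of (6.12)):

```python
import numpy as np

def poles_from_gains(l1, l2):
    """Roots of s^2 + l1*s + l2, the characteristic polynomial of a
    second-order observer error dynamics in companion form."""
    return np.roots([1.0, l1, l2])

# L1 = (16.0, 65.0):      s^2 + 16 s + 65 = (s + 8)^2 + 1  ->  poles -8 +/- i
# L_gamma = (4.0, 5.0):   s^2 + 4 s + 5  = (s + 2)^2 + 1  ->  poles -2 +/- i
```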
Table 7.2  Controller parameters for general maneuvering.

    ps control:   kps = 2.0                      L1 = (16.0  65.0)^T
    α control:    kα,1 = 2.0,  kα,2 = 5.0        L2 = (16.0  65.0)^T
    β control:    kβ,1 = 2.0,  kβ,2 = 5.0        L3 = (16.0  65.0)^T

Table 7.3  Controller parameters for flight path angle control.

    γ control:    k1 = 0.4,  k2 = 1.2,  k3 = 2.1   Lγ = (4.0  5.0)^T
7.3.3 Simulation results
General maneuvering
To evaluate the general maneuvering control laws from Section 5.1 we use the GAM/ADMIRE environment. The simulations are performed at an initial speed of 0.5 Mach and at an altitude of 1000 m. The assessment maneuvers are the following:

M1. Roll rate demand, ps^ref = 150 deg/s. See Figure 7.2.

M2. Angle of attack demand, αref = 15 deg. See Figure 7.3.

M3. M1 and M2 performed simultaneously. See Figure 7.4. During this maneuver, both α and ps vary rapidly, which means that assumption A3 in Section 5.1.1 is violated.
Let us make some comments regarding the simulation results.

- During M1 and M2, the controlled variables, ps, α, and β, all follow their reference trajectories (given by the dashed curves) well.

- The increase in the aircraft speed, VT, during M1 is due to our strategy of rolling about the stability x-axis, rather than the body x-axis, as discussed in Section 2.2. Since β is small, the stability x-axis coincides with the velocity vector. Initially, α = 2.8 deg is necessary to have the lift force make up for gravity. But after rolling 180 deg without changing α, the same amount of lift force is directed towards Earth, which causes the aircraft to dive and the speed to increase.

- In M2, we see that ps and β are not completely unchanged despite the maneuver being constrained to the body xz-plane. The small perturbations are caused by the feedback from the bias estimators. This is also the reason for the small perturbations of the δed and δr input signals.

- Shifting our attention to M3, we see that the resulting aircraft trajectory is not quite the superposition of the M1 and M2 trajectories, in terms of the controlled variables. The roll rate response is still satisfactory, but α and β oscillate more. The reason is (at least) twofold:

  1. According to assumption A3 in Section 5.1.1, the controller assumes α to be constant when controlling β, and vice versa. Since this is not the case here, the roll axis will not coincide exactly with the stability axis as desired, see Figure 2.3. As outlined in Section 2.2, this means α and β are no longer decoupled, but during the roll, part of the angle of attack turns into sideslip and vice versa.

  2. During the initial phase of the step, the parameter estimates, ê1, ê2, and ê3, oscillate. The 0.2 peak in ê2 causes the controller to believe some external source is contributing to the pitching moment. Consequently, the pitching moment produced by δes is reduced, which leads to a reduction in q and temporarily stops the increase in α. Efforts have been made to tune the observers to avoid these oscillations, but the ultimate solution is yet to be found.
Flight path angle control
The flight path angle control law from Section 5.2 is evaluated using the HIRM. Here, the initial aircraft state is level flight at Mach 0.3 at an altitude of 1524 m (5000 ft). A model error in the pitch coefficient, Cm, of 0.03 is introduced on purpose to examine the robustness of the controller. This is the same error that was used for evaluating the controllers in the GARTEUR robust flight control design challenge [53]. The following assessment maneuver is used:

M4. Two consecutive flight path angle demands, γref = 25 deg followed by γref = −15 deg. See Figure 7.5.
Let us comment on the simulation results.

- Overall, the flight path angle, γ, follows its reference trajectory well.

- The small initial dip of γ is caused by the model error introduced, which makes the true pitching moment less than the controller expects. Soon, however, the model error is estimated by the observer, as seen in the ê plot, and the controller compensates for the error and brings the error back to zero after 4 seconds.

- The maximum angle of attack is 38 deg, which is greater than the HIRM stall angle, see Figure 5.1. In accordance with the GAS property of the control law shown in Section 5.2.2, this does not cause any stability problems.
- During the steep ascent just after 5 s, the elevons saturate at 40 deg, which means that the desired angular acceleration, udes, cannot be produced. The actual u can be computed by numerically differentiating q from sensor data. Doing so, the maximum gain reduction during the saturation period is given by

      min_t (u / udes) = 0.55

  In Section 5.2.2, the backstepping control law was shown to have a certain amount of gain margin. Using the parameter values of Table 7.3, the bound (5.49) becomes

      k2 (1 + k1) / k3 = 0.8

  Thus, despite violating (5.49), the system converges to the desired state.³ This indicates that the parameter restrictions (5.48) are sufficient but not always necessary for closed loop stability.
³ One should also note that the robustness results in Chapter 5 only regard the state feedback laws, not including the observer feedback.
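The figures above can be verified with a couple of lines (k1, k2, k3 from Table 7.3; 0.55 is the measured minimum of u/udes quoted above):

```python
# Gain-margin bound (5.49) with the Table 7.3 gains, versus the
# smallest achieved fraction of the commanded angular acceleration.
k1, k2, k3 = 0.4, 1.2, 2.1
bound = k2 * (1 + k1) / k3   # = 1.2 * 1.4 / 2.1 = 0.8
min_gain = 0.55              # observed min_t u / udes during saturation

assert abs(bound - 0.8) < 1e-12
assert min_gain < bound      # (5.49) is violated, yet the loop converges
```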
[Figure 7.2 Assessment maneuver M1: roll rate demand, ps^ref = 150 deg/s. Time histories of ps, α, β, VT, q, nz, the control surface deflections δes, δed, δr, and the bias estimates ê1, ê2, ê3.]
[Figure 7.3 Assessment maneuver M2: angle of attack demand, αref = 15 deg. Same quantities plotted as in Figure 7.2.]
[Figure 7.4 Assessment maneuver M3: maneuvers M1 and M2 performed simultaneously. Same quantities plotted as in Figure 7.2.]
[Figure 7.5 Assessment maneuver M4: flight path angle demands, γref = 25 deg followed by γref = −15 deg. Time histories of γ, α, θ, q, VT, nz, δes, and the bias estimate ê. The dashed line in the α plot represents α0.]
Appendix

7.A Aircraft Data
Aircraft model properties for the GAM [5] and for the HIRM [56].

    Entity                        Unit     GAM        HIRM
    mass, m                       kg       9100.0     15296.0
    moment of inertia, Ix         kg m²    21000.0
    moment of inertia, Iy         kg m²    81000.0    163280.0
    moment of inertia, Iz         kg m²    101000.0
    moment of inertia, Ixz        kg m²    2500.0
    wing area, S                  m²       45.0       37.16
    wing span, b                  m        10.0       11.4
    mean aerodynamic chord, c̄     m        5.20       3.511
8 Conclusions
Bibliography
[1] Richard J. Adams, James M. Buffington, and Siva S. Banda. Design of nonlinear control laws for high-angle-of-attack flight. Journal of Guidance, Control, and Dynamics, 17(4):737–746, 1994.

[2] Muthana T. Alrifai, Joe H. Chow, and David A. Torrey. A backstepping nonlinear control approach to switched reluctance motors. In Proceedings of the 37th IEEE Conference on Decision and Control, pages 4652–4657, December 1998.

[3] David Anderson and Scott Eberhardt. How airplanes fly: A physical description of lift. Sport Aviation, February 1999. http://www.allstar.fiu.edu/aero/airflylvL3.htm.

[4] Zvi Artstein. Stabilization with relaxed controls. Nonlinear Analysis, Theory, Methods & Applications, 7(11):1163–1173, 1983.

[5] Hans Backström. Report on the usage of the Generic Aerodata Model. Technical report, Saab Aircraft AB, May 1997.

[6] Dimitri P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.

[7] Jean-Luc Boiffier. The Dynamics of Flight: The Equations. John Wiley & Sons, 1998.
[21] Randy A. Freeman and Petar V. Kokotovic. Robust Nonlinear Control Design: State-Space and Lyapunov Techniques. Birkhäuser, 1996.

[22] Randy A. Freeman and James A. Primbs. Control Lyapunov functions: New ideas from an old source. In Proceedings of the 35th Conference on Decision and Control, pages 3926–3931, December 1996.

[23] William L. Garrard, Dale F. Enns, and S. Anthony Snell. Nonlinear feedback control of highly manoeuvrable aircraft. International Journal of Control, 56(4):799–812, 1992.

[24] S. Torkel Glad. Robustness of nonlinear state feedback - a survey. Automatica, 23(4):425–435, 1987.

[25] Åslaug Grøvlen and Thor I. Fossen. Nonlinear control of dynamic positioned ships using only position feedback: An observer backstepping approach. In Proceedings of the 35th Conference on Decision and Control, pages 3388–3393, December 1996.

[26] James K. Hall and Meir Pachter. Formation maneuvers in three dimensions. In Proceedings of the 39th Conference on Decision and Control, December 2000.

[27] Ola Härkegård and S. Torkel Glad. A backstepping design for flight path angle control. In Proceedings of the 39th Conference on Decision and Control, pages 3570–3575, Sydney, Australia, December 2000.

[28] Ola Härkegård and S. Torkel Glad. Control of systems with input nonlinearities and uncertainties: an adaptive approach. Technical Report LiTH-ISY-R-2302, Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden, October 2000.

[29] Ola Härkegård and S. Torkel Glad. Flight control design using backstepping. Technical Report LiTH-ISY-R-2323, Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden, December 2000.

[30] HARV. The NASA High Angle-of-Attack Research Vehicle homepage. http://www.dfrc.nasa.gov/Projects/HARV.

[31] John Hauser and Ali Jadbabaie. Aggressive maneuvering of a thrust vectored flying wing: A receding horizon approach. In Proceedings of the 39th Conference on Decision and Control, December 2000.

[32] W. B. Herbst. Future flight technologies. Journal of Aircraft, 17(8):561–566, August 1980.

[33] J. Hu, D. M. Dawson, and K. Anderson. Position control of a brushless DC motor without velocity measurements. IEE Proceedings of Electric Power Applications, 142(2):113–122, June 1995.
[34] Jun Hu, Darren M. Dawson, and Yi Qian. Position tracking control for robot manipulators driven by induction motors without flux measurements. IEEE Transactions on Robotics and Automation, 12(3):419–438, June 1996.

[35] Chien H. Huang and Gareth J. Knowles. Application of nonlinear control strategies to aircraft at high angles of attack. In Proceedings of the 29th Conference on Decision and Control, pages 188–193, December 1990.

[36] Alberto Isidori. Nonlinear Control Systems. Springer, third edition, 1995.

[37] Mrdjan Jankovic, Miroslava Jankovic, and Ilya Kolmanovsky. Constructive Lyapunov control design for turbocharged diesel engines. IEEE Transactions on Control Systems Technology, 8(2):288–299, March 2000.

[38] Zhong-Ping Jiang and Henk Nijmeijer. Tracking control of mobile robots: A case study in backstepping. Automatica, 33(7):1393–1399, 1997.

[39] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse. A toolkit for nonlinear feedback design. Systems & Control Letters, 18(2):83–92, February 1992.

[40] Hassan K. Khalil. Nonlinear Systems. Prentice-Hall, second edition, 1996.

[41] Petar Kokotovic. Constructive nonlinear control: Progress in the 90s. In IFAC 1999 Proceedings, pages 49–77, 1999.

[42] Petar Kokotovic, Hassan K. Khalil, and John O'Reilly. Singular Perturbation Methods in Control: Analysis and Design. Academic Press, 1986.

[43] Petar V. Kokotovic. The joy of feedback: Nonlinear and adaptive. IEEE Control Systems Magazine, 12(3):7–17, June 1992.

[44] Arthur J. Krener and Alberto Isidori. Linearization by output injection and nonlinear observers. Systems & Control Letters, 3:47–52, June 1983.

[45] Miroslav Krstic, Dan Fontaine, Petar V. Kokotovic, and James D. Paduano. Useful nonlinearities and global stabilization of bifurcations in a model of jet engine surge and stall. IEEE Transactions on Automatic Control, 43(12):1739–1745, December 1998.

[46] Miroslav Krstic, Ioannis Kanellakopoulos, and Petar Kokotovic. Nonlinear and Adaptive Control Design. John Wiley & Sons, 1995.

[47] Miroslav Krstic and Petar V. Kokotovic. Lean backstepping design for a jet engine compressor model. In Proceedings of the 4th IEEE Conference on Control Applications, pages 1047–1052, 1995.

[48] Stephen H. Lane and Robert F. Stengel. Flight control design using non-linear inverse dynamics. Automatica, 24(4):471–483, 1988.

[49] Tomas Larsson. Linear quadratic design and μ-analysis of a fighter stability augmentation system. Master's thesis, Linköpings universitet, 1997.
[50] J. Levine. Are there new industrial perspectives in the control of mechanical systems? In Paul M. Frank, editor, Advances in Control: Highlights of ECC'99, chapter 7, pages 197–226. Springer, 1999.

[51] Johan Löfberg. Backstepping with local LQ performance and global approximation of quadratic performance. In Proceedings of the American Control Conference, pages 3898–3902, June 2000.

[52] A. M. Lyapunov. The General Problem of the Stability of Motion. Taylor & Francis, 1992. English translation of the original publication in Russian from 1892.

[53] Jean-François Magni, Samir Bennani, and Jan Terlouw, editors. Robust Flight Control: A Design Challenge. Springer, 1997.

[54] Donald McLean. Automatic Flight Control Systems. Prentice Hall, 1990.

[55] S. K. Mudge and R. J. Patton. Variable structure control laws for aircraft manoeuvres. In International Conference on Control, 1988, pages 564–568, 1988.

[56] E. A. M. Muir et al. Robust flight control design challenge problem formulation and manual: The high incidence research model (HIRM). Technical Report TP-088-4, Group for Aeronautical Research and Technology in EURope, GARTEUR-FM(AG08), 1997.

[57] Robert C. Nelson. Flight Stability and Automatic Control. McGraw-Hill, second edition, 1998.

[58] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C. Cambridge University Press, second edition, 1992.

[59] Marc O. Rauw. FDC 1.2 - A SIMULINK Environment for Flight Dynamics and Control Analysis. Zeist, The Netherlands, February 1998. http://www.dutchroll.com.

[60] Jacob Reiner, Gary J. Balas, and William L. Garrard. Robust dynamic inversion for control of highly maneuverable aircraft. Journal of Guidance, Control, and Dynamics, 18(1):18–24, January–February 1995.

[61] Jacob Reiner, Gary J. Balas, and William L. Garrard. Flight control design using robust dynamic inversion and time-scale separation. Automatica, 32(11):1493–1504, 1996.

[62] Wilson J. Rugh. Linear System Theory. Prentice Hall, second edition, 1996.

[63] Saab-BAE Gripen AB. JAS 39 Gripen homepage. http://www.gripen.se.
[78] Bing-Yu Zhang and Blaise Morton. Robustness analysis of dynamic inversion control laws applied to nonlinear aircraft pitch-axis models. Nonlinear Analysis, Theory, Methods & Applications, 32(4):501–532, 1998.

[79] Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice-Hall, 1996.