Optimal Control Systems
Classical control system design is generally a trial-and-error
process in which various methods of analysis are used iteratively to
determine the design parameters of an "acceptable" system.
Acceptable performance is generally defined in terms of time and
frequency domain criteria such as rise time, settling time, peak
overshoot, gain and phase margin, and bandwidth. Radically
different performance criteria must be satisfied, however, by the
complex, multiple-input, multiple-output systems required to
meet the demands of modern technology. For example, the design
of a spacecraft attitude control system that minimizes fuel
expenditure is not amenable to solution by classical methods. A
new and direct approach to the synthesis of these complex
systems, called optimal control theory, has been made feasible by
the development of the digital computer.
The objective of optimal control theory is to determine the control
signals that will cause a process to satisfy the physical constraints
and at the same time minimize (or maximize) some performance
criterion. Later, we shall give a more explicit mathematical
statement of "the optimal control problem," but first let us
consider the matter of problem formulation.
The formulation of an optimal control problem requires:
1. A mathematical description (or model) of the process to be
controlled.
2. A statement of the physical constraints.
3. Specification of a performance criterion.
The Mathematical Model:
A non-trivial part of any control problem is modeling the process.
The objective is to obtain the simplest mathematical description
that adequately predicts the response of the physical system to all
anticipated inputs.
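For example, a point mass m driven along a line by a force u(t) has state variables x_1 (position) and x_2 (velocity) and state equations \dot{x}_1(t) = x_2(t), \dot{x}_2(t) = u(t)/m. The following minimal Python sketch expresses this model; the point-mass dynamics are an illustrative choice, and the function name a anticipates the general form \dot{x}(t) = a(x(t), u(t), t) stated later in this chapter.

    import numpy as np

    # Point-mass model: state x = [position, velocity], control u = force.
    # One concrete instance of the general dynamics dx/dt = a(x(t), u(t), t).
    def a(x, u, t, m=1.0):
        return np.array([x[1], u / m])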
DEFINITION 1-1
A history of control input values during the interval [t_0, t_f] is denoted by u and is called a control history, or simply a control.
DEFINITION 1-2
A history of state values in the interval [t_0, t_f] is called a state trajectory and is denoted by x.
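Numerically, a control is often represented by its sampled values on a grid covering [t_0, t_f], and the corresponding state trajectory is obtained by integrating the state equations. A sketch under the point-mass assumption above; the horizon, grid size, and constant candidate control are arbitrary illustrative values:

    import numpy as np

    # Sampled control history u on [t0, tf] and the state trajectory x it
    # generates for the point-mass model (forward Euler integration).
    t0, tf, N = 0.0, 2.0, 201
    t = np.linspace(t0, tf, N)
    dt = t[1] - t[0]

    def a(x, u_k, t_k, m=1.0):       # point-mass dynamics (illustrative)
        return np.array([x[1], u_k / m])

    u = np.ones(N)                   # candidate control: constant unit force
    x = np.zeros((N, 2))             # trajectory; initial state x[0] = [0, 0]
    for k in range(N - 1):
        x[k + 1] = x[k] + dt * a(x[k], u[k], t[k])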
Physical Constraints:
After we have selected a mathematical model, the next step is to
define the physical constraints on the state and control values. To
illustrate, some typical constraints are a bound on the control magnitude, such as |u(t)| \leq 1 when an actuator saturates; a limit on the total fuel or energy that may be expended; and bounds on the state values, such as a maximum allowable speed or position.
DEFINITION 1-3
A control history which satisfies the control constraints during the entire time interval [t_0, t_f] is called an admissible control.
DEFINITION 1-4
A state trajectory which satisfies the state variable constraints during the entire time interval [t_0, t_f] is called an admissible trajectory.
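In computational work, admissibility is typically checked pointwise on the sampled histories. A minimal sketch, assuming the illustrative bounds |u(t)| \leq 1 on the control and |x_1(t)| \leq 5 on the first state variable (both bounds are hypothetical, chosen only for this example):

    import numpy as np

    # Pointwise admissibility checks on sampled histories.
    def is_admissible_control(u, u_max=1.0):
        # True if |u(t)| <= u_max at every sample instant.
        return bool(np.all(np.abs(u) <= u_max))

    def is_admissible_trajectory(x, x1_max=5.0):
        # True if |x1(t)| <= x1_max at every sample instant.
        return bool(np.all(np.abs(x[:, 0]) <= x1_max))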
The Performance Measure:
In order to evaluate the performance of a system quantitatively,
the designer selects a performance measure. An optimal control is
defined as one that minimizes (or maximizes) the performance
measure. In certain cases the problem statement may clearly
indicate what to select for a performance measure, whereas in
other problems the selection is a subjective matter. For example,
the statement, "Transfer the system from point A to point B as
quickly as possible," clearly indicates that elapsed time is the
performance measure to be minimized. On the other hand, the
statement, "Maintain the position and velocity of the system near
zero with a small expenditure of control energy," does not instantly
suggest a unique performance measure. In such problems the
designer may be required to try several performance measures
before selecting one which yields what he considers to be optimal
performance.
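For the two statements above, natural candidates (one possible choice among several) are

    J = t_f - t_0

for the minimum-time transfer, and

    J = \int_{t_0}^{t_f} \left[ x_1^2(t) + x_2^2(t) + R\, u^2(t) \right] dt

for keeping the position x_1 and velocity x_2 near zero with a small expenditure of control energy. The weighting factor R > 0, chosen by the designer, expresses the relative importance of control energy versus state deviation.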
The Optimal Control Problem:
The theory developed in the subsequent chapters is aimed at
solving the following problem:
Find an admissible control u* which causes the system

    \dot{x}(t) = a(x(t), u(t), t)

to follow an admissible trajectory x* that minimizes the performance measure

    J = h(x(t_f), t_f) + \int_{t_0}^{t_f} g(x(t), u(t), t) \, dt.
u* is called an optimal control and x* an optimal trajectory.
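To make the statement concrete, the following sketch evaluates J for one candidate control of the point-mass example, assuming the illustrative choices h \equiv 0 and g = x_1^2 + x_2^2 + R u^2; a designer would compare such values of J across admissible candidates, and u* is by definition the admissible control whose J is smallest.

    import numpy as np

    # Evaluate J = h(x(tf), tf) + integral of g(x, u, t) dt for one candidate
    # control of the point-mass model; h and g are illustrative choices.
    t0, tf, N = 0.0, 2.0, 201
    t = np.linspace(t0, tf, N)
    dt = t[1] - t[0]

    def a(x, u_k, t_k):               # point-mass dynamics, unit mass
        return np.array([x[1], u_k])

    def g(x, u_k, t_k, R=0.1):        # running cost: state + control energy
        return x[0]**2 + x[1]**2 + R * u_k**2

    def h(x_f, t_f):                  # terminal cost (identically zero here)
        return 0.0

    u = -0.5 * np.ones(N)             # one admissible candidate control
    x = np.zeros((N, 2))
    x[0] = [1.0, 0.0]                 # start displaced from the origin
    for k in range(N - 1):            # simulate the resulting trajectory
        x[k + 1] = x[k] + dt * a(x[k], u[k], t[k])

    # Riemann-sum approximation of the integral term.
    J = h(x[-1], tf) + dt * sum(g(x[k], u[k], t[k]) for k in range(N - 1))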
Several comments are in order here.
First, we may not know in advance that an optimal control exists;
that is, it may be impossible to find a control which (a) is
admissible and (b) causes the system to follow an admissible
trajectory. Since existence theorems are in rather short supply, we
shall, in most cases, attempt to find an optimal control rather than
try to prove that one exists.
Second, even if an optimal control exists, it may not be unique.
Nonunique optimal controls may complicate computational
procedures, but they do allow the possibility of choosing among
several controller configurations. This is certainly helpful to the
designer, because he can then consider other factors, such as cost,
size, reliability, etc., which may not have been included in the
performance measure.