Problem Sheet (PhD)


Q.1 - Consider the control system


ẍ − 2(…) = u        (1)
a. Write the system in first-order state-space form.
b. Suppose u(t) = 0. Find and classify (using linearization) all equilibria and determine whether they are stable or asymptotically stable where possible. Discuss whether the stability results are global or local.
c. Show that Eq. (1) admits the periodic solution x(t) = cos(t), u(t) = cos(2t).
d. Design a state-feedback controller u = u(x, ẋ) for (1) such that the origin of the closed-loop system is globally asymptotically stable.
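Part (b) calls for a linearization-based classification of equilibria. A minimal sketch of that workflow is shown below for an illustrative stand-in nonlinearity (not necessarily that of Eq. (1)); the same steps apply once (1) is written in first-order form.

# Sketch: classify equilibria of a planar system dx1/dt = x2, dx2/dt = f(x1, x2)
# by linearization. The nonlinearity below is illustrative only.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f1 = x2
f2 = -x1 + x1**3 - x2          # stand-in nonlinearity with u = 0

F = sp.Matrix([f1, f2])
J = F.jacobian([x1, x2])

for eq in sp.solve([f1, f2], [x1, x2], dict=True):
    eigs = J.subs(eq).eigenvals()
    print(eq, eigs)
# An equilibrium is locally asymptotically stable when all eigenvalues of the
# Jacobian have negative real part; eigenvalues on the imaginary axis make
# linearization inconclusive, and the conclusion is only local.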

Q.2 - Consider the following nonlinear optimal control problem describing a second-order dynamical system. The objective is to determine the optimal control input u*(t) that minimizes the performance index:
J(u) = ∫₀^2.5 (x₁² + u²) dt

Subject to the nonlinear system dynamics


ẋ₁ = x₂,  x₁(t = 0) = −5
ẋ₂ = −x₁ + (2 − 0.1 x₂²) x₂ + 4u,  x₂(t = 0) = −5

This is a free-endpoint problem with no terminal cost (i.e., φ =0).


(a) Define the Hamiltonian of the system.
(b) Derive the necessary conditions for optimality using Pontryagin’s
Minimum Principle.
(c) Express the optimal control law u*(t) in terms of the costate variables.
(d) State the full boundary-value problem (BVP) consisting of the state
and costate equations and boundary conditions.
Solution
(a) Define the Hamiltonian.

According to PMP, the optimal control minimizes the Hamiltonian:


H(x, u, λ) = L + λᵀ f
H(x, u, λ) = x₁² + u² + λ₁ x₂ + λ₂ (−x₁ + (2 − 0.1 x₂²) x₂ + 4u)
(b) Derive the Necessary Conditions for Optimality. Using Pontryagin's Minimum Principle, the
optimal control u∗(t) must satisfy

∂H/∂u = 2u + 4λ₂ = 0  ⇒  u*(t) = −2λ₂(t)
(c) Derive the Costate Equations. Computing the partial derivatives of the Hamiltonian with respect to the state variables, and using λ̇ = −(∂H/∂x)ᵀ with λ(t_f) = (∂φ/∂x)ᵀ, we get

λ̇₁ = −∂H/∂x₁ = −2x₁ + λ₂
λ̇₂ = −∂H/∂x₂ = −λ₁ − λ₂(2 − 0.3 x₂²)

(d) Final Formulation: Boundary-Value Problem (BVP). Substituting u*(t) = −2λ₂(t) into the dynamics gives the two-point BVP

ẋ₁ = x₂
ẋ₂ = −x₁ + (2 − 0.1 x₂²) x₂ − 8λ₂
λ̇₁ = −2x₁ + λ₂
λ̇₂ = −λ₁ − λ₂(2 − 0.3 x₂²)

with boundary conditions x₁(0) = x₂(0) = −5 and λ₁(2.5) = λ₂(2.5) = 0 (free endpoint, φ = 0).
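A minimal numerical sketch of this BVP using SciPy's collocation solver solve_bvp is given below (the mesh and the zero initial guess are arbitrary choices, not part of the problem statement):

# Sketch: solve the state/costate BVP of Q.2 numerically with solve_bvp.
import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    x1, x2, l1, l2 = y
    u = -2.0 * l2                                    # u*(t) = -2*lambda2(t)
    dx1 = x2
    dx2 = -x1 + (2 - 0.1 * x2**2) * x2 + 4 * u
    dl1 = -2 * x1 + l2
    dl2 = -l1 - l2 * (2 - 0.3 * x2**2)
    return np.vstack([dx1, dx2, dl1, dl2])

def bc(ya, yb):
    # x1(0) = x2(0) = -5, lambda1(2.5) = lambda2(2.5) = 0
    return np.array([ya[0] + 5, ya[1] + 5, yb[2], yb[3]])

t = np.linspace(0.0, 2.5, 50)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
print(sol.status, sol.message)
u_opt = -2.0 * sol.sol(t)[3]                         # optimal control on the mesh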


Q.3- Consider the following nonlinear optimal control problem
describing a second-order dynamical system. The objective is to
determine the optimal control input u*(t) that minimizes the performance index:

J(u) = x₂(t_f = 2) + ∫₀^(t_f = 2) 0 dt

Subject to the nonlinear system dynamics


dx₁/dt = −2 e^(−6/u) x₁,  starting from x₁(t = 0) = 0.9
dx₂/dt = 2 e^(−6/u) x₁ − 5 e^(−11/u) x₂,  starting from x₂(t = 0) = 0.1
This is a fixed-final-time, free-endpoint problem; the running cost is zero, so the performance index consists only of the terminal term.
(a) Define the Hamiltonian of the system.
(b) Derive the necessary conditions for optimality using Pontryagin’s
Minimum Principle.
(c) Express the optimal control law u*(t) in terms of the costate variables.
(d) State the full boundary-value problem (BVP) consisting of the state
and costate equations and boundary conditions.
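The Hamiltonian, costate equations, and stationarity condition of parts (a)–(c) follow mechanically; a symbolic sketch is given below, assuming the dynamics as written above and running cost L = 0:

# Sketch: derive the Hamiltonian, costate equations, and stationarity
# condition for Q.3 symbolically (dynamics as written above, L = 0).
import sympy as sp

x1, x2, l1, l2, u = sp.symbols('x1 x2 lambda1 lambda2 u', real=True)
f1 = -2 * sp.exp(-6 / u) * x1
f2 = 2 * sp.exp(-6 / u) * x1 - 5 * sp.exp(-11 / u) * x2
H = l1 * f1 + l2 * f2                    # Hamiltonian, part (a)

dl1 = -sp.diff(H, x1)                    # costate equation for lambda1, part (b)
dl2 = -sp.diff(H, x2)                    # costate equation for lambda2, part (b)
stationarity = sp.diff(H, u)             # dH/du = 0 defines u*(t) implicitly, part (c)

print(sp.simplify(dl1))
print(sp.simplify(dl2))
print(sp.simplify(stationarity))
# Boundary conditions for part (d): x1(0) = 0.9, x2(0) = 0.1, and
# lambda(t_f = 2) = (d phi / d x) evaluated at the final time.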
Q. Consider a nonlinear dynamical system governed by the following
equations:
dx₁/dt = −u³,  x₁(0) = 0

dx₂/dt = x₁ + u,  x₂(0) = 0

u(t) ≥ 0  for all t ∈ [0, 1]

Let the admissible control u(t)∈ U, with:

• Case (a): u(t) ≥ 0, unrestricted above,

• Case (b): u(t) ∈ [0, 0.1]

The time horizon is fixed, t ∈ [0, 1]. The objective is to maximize the final value of x₂ at t = 1, i.e., to minimize J(u) = −x₂(t_f = 1).

(a) Formulate the performance index clearly in integral form.
(b) Define the Hamiltonian H using Pontryagin's Minimum Principle.
(c) Derive the necessary conditions for optimality, including the costate equations.
(d) Determine the optimal control law u*(t) in terms of the costate variables.
(e) Repeat the problem if the control is constrained by u ∈ [0, 0.1].
Solution:
We define the Hamiltonian
H = λ₁(−u³) + λ₂(x₁ + u)
Costate equations:
dλ₁/dt = −∂H/∂x₁ = −λ₂
dλ₂/dt = −∂H/∂x₂ = 0 → λ₂(t) = constant = −1 (from the terminal cost φ = −x₂(1), λ₂(1) = ∂φ/∂x₂ = −1)
Integrating dλ₁/dt = −λ₂ = 1: λ₁(t) = t + c. Using the terminal condition λ₁(1) = 0: 1 + c = 0 → c = −1.
Therefore: λ₁(t) = t − 1 = −(1 − t)
Minimize H with respect to u. With λ₁ = t − 1 and λ₂ = −1: H = −(x₁ + u) − (t − 1)u³
Taking the derivative w.r.t. u and setting it to zero: dH/du = −1 − 3(t − 1)u² = 3(1 − t)u² − 1 = 0
Solving gives:

u*(t) = 1 / √(3(1 − t))

Case (a): u*(t) > 0 for all t, so the constraint u ≥ 0 is never active and this is the optimal control.
Case (b): since u*(t) ≥ 1/√3 ≈ 0.577 > 0.1 for all t ∈ [0, 1), the upper bound is always active:
ub*(t) = 0.1 for all t ∈ [0, 1]
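A quick numerical check of the two cases, using the expressions derived above (the unconstrained integration is truncated just before the singularity of u* at t = 1; the truncation point is an implementation choice only):

# Sketch: compare x2(1) under the unconstrained law u*(t) = 1/sqrt(3(1-t))
# and the constrained law u(t) = 0.1.
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, x, u_fun):
    u = u_fun(t)
    return [-u**3, x[0] + u]        # dx1/dt = -u^3, dx2/dt = x1 + u

u_unconstrained = lambda t: 1.0 / np.sqrt(3.0 * (1.0 - t))
u_constrained = lambda t: 0.1

sol_a = solve_ivp(dynamics, [0.0, 1.0 - 1e-4], [0.0, 0.0],
                  args=(u_unconstrained,), max_step=1e-3)
sol_b = solve_ivp(dynamics, [0.0, 1.0], [0.0, 0.0],
                  args=(u_constrained,), max_step=1e-3)
print("x2(1) with unconstrained u*:", sol_a.y[1, -1])
print("x2(1) with u = 0.1         :", sol_b.y[1, -1])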

Q.4 - Consider a batch reactor in which two chemical reactions occur in series as follows:
The objective is to maximize the production of compound B
after 2 hours by adjusting the temperature profile u(t), which affects
the reaction rates governed by the Arrhenius law.

The state variables are:

• x1(t): concentration of species A

• x2(t): concentration of species B

The nonlinear dynamic system is described by the following differential equations:
dx₁/dt = −2 e^(−6/u) x₁,  starting from x₁(t = 0) = 0.9
dx₂/dt = 2 e^(−6/u) x₁ − 5 e^(−11/u) x₂,  starting from x₂(t = 0) = 0.1

with initial conditions: x 1 (0)=0.9 , x 2 (0)=0.1

This is a free-endpoint optimal control problem with a performance index aimed at maximizing the amount of B at the final time t_f = 2.
J(u) = x₂(t_f = 2) + ∫₀^(t_f = 2) 0 dt

(a) Formulate the performance index clearly in integral form.
(b) Define the Hamiltonian H using Pontryagin’s Minimum
Principle.
(c) Derive the necessary conditions for optimality,
including the costate equations.
(d) Determine the optimal control law u∗(t) in terms of the
costate variables.
(e) State the complete boundary value problem (BVP),
including both state and costate dynamics with boundary
conditions.
(f) Discuss a suitable numerical method (e.g., shooting
method, direct collocation) to solve the resulting BVP.

Solution:
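For part (f), a single-shooting approach treats the unknown initial costates λ(0) as decision variables, integrates states and costates forward, and drives the terminal-costate residual to zero with a root finder. A minimal sketch is given below, written in the maximum-principle convention (u maximizes H pointwise and λ(2) = ∂φ/∂x with φ = x₂(2)); the dynamics are those written above, the admissible control interval [1, 5] used in the pointwise maximization is an assumption for illustration, and convergence of the root finder depends on the initial guess.

# Sketch: single shooting for the batch-reactor problem of Q.4.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar, fsolve

def u_star(x1, x2, l1, l2):
    # pointwise maximization of H over an assumed admissible interval [1, 5]
    negH = lambda u: -((l2 - l1) * 2 * np.exp(-6 / u) * x1
                       - l2 * 5 * np.exp(-11 / u) * x2)
    return minimize_scalar(negH, bounds=(1.0, 5.0), method='bounded').x

def aug_odes(t, y):
    x1, x2, l1, l2 = y
    u = u_star(x1, x2, l1, l2)
    k1, k2 = 2 * np.exp(-6 / u), 5 * np.exp(-11 / u)
    return [-k1 * x1,            # dx1/dt
            k1 * x1 - k2 * x2,   # dx2/dt
            (l1 - l2) * k1,      # dlambda1/dt = -dH/dx1
            l2 * k2]             # dlambda2/dt = -dH/dx2

def shooting_residual(lam0):
    y0 = [0.9, 0.1, lam0[0], lam0[1]]
    yf = solve_ivp(aug_odes, [0.0, 2.0], y0, rtol=1e-8).y[:, -1]
    return [yf[2], yf[3] - 1.0]  # lambda1(2) = 0, lambda2(2) = dphi/dx2 = 1

lam0 = fsolve(shooting_residual, [0.0, 1.0])
print("initial costates:", lam0)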
Q.5- Suppose we wish to control the continuous plant

ẋ = [ −2  3 ; −4  −1 ] x + [ 6 ; 4 ] u,  x(0) = x₀

using an optimal controller of the form u = −K(t)x to drive the process from some specified initial condition to zero. For this application, we decide that the manipulated-variable deviations are more costly than the state deviations:

Q = [ 1 0 ; 0 1 ],  R = 2,  S = [ 0 0 ; 0 0 ]
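The time-varying gain K(t) = R⁻¹BᵀP(t) follows from the matrix Riccati differential equation integrated backward from P(t_f) = S. A minimal sketch is given below; the matrices are as written above, and the horizon t_f = 5 s is an assumed value for illustration since the final time is not specified here.

# Sketch: finite-horizon LQR gain K(t) for Q.5 via the Riccati ODE.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-2.0, 3.0], [-4.0, -1.0]])
B = np.array([[6.0], [4.0]])
Q = np.eye(2)
R = np.array([[2.0]])
S = np.zeros((2, 2))
tf = 5.0                                   # assumed horizon for illustration

def riccati(t, p):
    P = p.reshape(2, 2)
    # -dP/dt = A'P + PA - P B R^{-1} B' P + Q
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# integrate backward in time from P(tf) = S
sol = solve_ivp(riccati, [tf, 0.0], S.ravel(), dense_output=True, rtol=1e-8)

def K(t):
    P = sol.sol(t).reshape(2, 2)
    return np.linalg.solve(R, B.T @ P)     # K(t) = R^{-1} B' P(t)

print(K(0.0))                              # gain at the initial time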
Q.6 - A system with state feedback is depicted below in Figure (1). Find the values of K1, K2, and K3 so that the system satisfies the following performance requirements: settling time ≤ 0.74 sec, peak overshoot ≤ 9.5 %, and place the third pole 0.92 times as far from the imaginary axis as the second-order dominant pair.
Solution
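Since the plant of Figure (1) is not reproduced above, only the translation of the specifications into desired pole locations is sketched here (the 2 % settling-time criterion T_s = 4/(ζω_n) is assumed); the gains K1, K2, and K3 then follow by matching the closed-loop characteristic polynomial against the desired one.

# Sketch: desired closed-loop poles from the specifications of Q.6
# (2 % settling-time criterion assumed).
import numpy as np

Mp = 0.095                                  # peak overshoot (9.5 %)
Ts = 0.74                                   # settling time [s]

zeta = -np.log(Mp) / np.sqrt(np.pi**2 + np.log(Mp)**2)
wn = 4.0 / (zeta * Ts)

sigma = zeta * wn                           # distance from the imaginary axis
wd = wn * np.sqrt(1.0 - zeta**2)            # damped natural frequency
dominant = [-sigma + 1j * wd, -sigma - 1j * wd]
third = -0.92 * sigma                       # third pole, per the problem statement

print("zeta =", round(zeta, 3), ", wn =", round(wn, 2))
print("dominant poles:", dominant, ", third pole:", third)
# Desired characteristic polynomial: (s^2 + 2*zeta*wn*s + wn^2)*(s - third);
# equate its coefficients with the closed-loop polynomial in K1, K2, K3.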
Q.9- Figure below shows the diagram of a magnetic-ball suspension
system. The objective of the system is to control the position of the
steel ball by adjusting the current in the electromagnet through the
input voltage e(t). The differential equations of the system are

Write the state equations of the system in the form dx(t)/dt = A x(t) + B u(t) and draw a state diagram for the system.

Q.10 - The schematic diagram of the figure below represents a control system whose purpose is to hold the level of the liquid in the tank at
a desired level. The liquid level is controlled by a float whose
position h(t) is monitored. The input signal of the open-loop system
is e(t). The system parameters and equations are as follows:
• Motor resistance Ra = 10 Ω
• Motor inductance La = 0 H
• Torque constant Ki = 10 oz-in./A
• Rotor inertia Jm = 0.005 oz-in.-sec²
• Back-emf constant Kb = 0.0706 V/rad/sec
• Gear ratio n = N1/N2 = 1/100
• Load inertia JL = 10 oz-in.-sec²
• Load and motor friction: negligible
• Amplifier gain Ka = 50
• Area of tank A = 50 ft²
• ea(t) = Ra ia(t) + Kb ωm(t)
• Tm(t) = Ki ia(t) = (Jm + n² JL) dωm(t)/dt
• θc(t) = n θm(t),  ωm(t) = dθm(t)/dt

The number of valves connected to the tank from the reservoir is N = 10. All the valves have the same characteristics and are
controlled simultaneously by θc. The equations that govern the
volume of flow are as follows:
qi(t) = K1 N θc(t),  K1 = 8 ft³/(sec·rad)
qo(t) = Ko h(t),  Ko = 50 ft²/sec
h(t) = (volume of tank)/(area of tank) = (1/A) ∫ [qi(t) − qo(t)] dt

(a) Define the state variables as x1(t) = h(t), x2(t) = θm(t), and x3(t) = dθm(t)/dt. Write the state equations of the system in the form dx(t)/dt = A x(t) + B u(t). Draw a state diagram for the system.
(b) Find the characteristic equation and the eigenvalues of the A matrix found in part (a).
(c) Show that the open-loop system is completely controllable; that is the pair [A, B]
is controllable.
(d) For reasons of economy, only one of the three state variables is measured and fed back for control purposes. The output equation is y = C x, where C can take one of the following forms: (1) C = [1 0 0], (2) C = [0 1 0], (3) C = [0 0 1].
Determine which case (or cases) corresponds to a completely observable system.
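A numerical sketch of parts (a)–(d) is given below. It assumes the amplifier drives the armature directly (ea = Ka·e), eliminates ia using La = 0, and orders the states as x = [h, θm, ωm]ᵀ; it is a working sketch built from the parameters above, not the worked solution.

# Sketch: state matrices and controllability/observability checks for Q.10,
# assuming ea = Ka*e and states x = [h, theta_m, omega_m]^T.
import numpy as np

Ra, Ki, Jm, Kb = 10.0, 10.0, 0.005, 0.0706
n, JL, Ka, Atank = 1.0 / 100.0, 10.0, 50.0, 50.0
K1, N, Ko = 8.0, 10.0, 50.0

Jt = Jm + n**2 * JL                       # inertia reflected to the motor shaft

A = np.array([[-Ko / Atank, K1 * N * n / Atank, 0.0],
              [0.0,         0.0,                1.0],
              [0.0,         0.0,               -Ki * Kb / (Ra * Jt)]])
B = np.array([[0.0], [0.0], [Ki * Ka / (Ra * Jt)]])

print("eigenvalues of A:", np.linalg.eigvals(A))          # part (b)

ctrb = np.hstack([B, A @ B, A @ A @ B])                    # part (c)
print("controllable:", np.linalg.matrix_rank(ctrb) == 3)

for C in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):                # part (d)
    C = np.atleast_2d(C).astype(float)
    obsv = np.vstack([C, C @ A, C @ A @ A])
    print("C =", C.ravel(), "observable:", np.linalg.matrix_rank(obsv) == 3)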
Q.11- The schematic diagram in Fig. below shows a permanent-
magnet dc-motor-control system with a viscous-inertia damper. The
system can be used for the control of the printwheel of an electronic
word processor. A mechanical damper such as the viscous-inertia
type is sometimes used in practice as a simple and economical way
of stabilizing a control system. The damping effect is achieved by a
rotor suspended in a viscous fluid. The differential and algebraic
equations that describe the dynamics of the system are as follows:
(a) Let the state variables be defined as x1(t) = ωm(t) and x2(t) =
ωD(t). Write
the state equations for the open-loop system with e(t) as the input.
(Open loop
refers to the feedback path from ωm to e being open.)
(b) Draw the state diagram for the overall system using the state
equations
found in part (a) and e(t) = Ks[ωr(t) – ωm(t)].
(c) Derive the open-loop transfer function Ωm(s)/E(s) and the closed-loop transfer function Ωm(s)/Ωr(s).

Q.12 - Use the controllability and observability matrices to determine whether the system represented by the flow graph shown in the figure below is completely controllable and completely observable.

Q.13 - Examine the stability of the unity-feedback system having the nonlinear on-off relay element. The describing function is N = (2k/π)[…]. Find the amplitude and frequency of the resulting limit cycle.

For the element shown in the figure below, prove that the describing function is

N = (n₁/π)(2θ₁ − sin 2θ₁) + (4M/(πA)) cos θ₁

where A is the amplitude of the input signal.
Q.14 - Consider the system with differential equation

ë + K ė + K₁ ė³ + e = 0

Examine the stability by Lyapunov's method, given that K > 0 and K₁ > 0, and select V = x₁² + x₂² as the Lyapunov function candidate.
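With the usual choice x₁ = e and x₂ = ė, the candidate V = x₁² + x₂² can be checked symbolically; a minimal sketch (K and K₁ treated as positive parameters):

# Sketch: Lyapunov analysis of e'' + K e' + K1 (e')^3 + e = 0 with
# V = x1^2 + x2^2, where x1 = e and x2 = de/dt.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
K, K1 = sp.symbols('K K1', positive=True)

f1 = x2
f2 = -K * x2 - K1 * x2**3 - x1            # from the differential equation
V = x1**2 + x2**2
Vdot = sp.expand(sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2)
print(Vdot)                               # -2*K*x2**2 - 2*K1*x2**4
# Vdot <= 0 (negative semidefinite). Since Vdot = 0 only on x2 = 0 and the
# largest invariant set there is the origin, asymptotic stability follows
# from LaSalle's invariance principle.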
