
Nonlinear Control (Subject Code: 91882)

PART A — (10 × 2 = 20 marks)

1. What is the effect of state feedback in a control system?

State feedback modifies the system dynamics by placing the closed-loop poles at desired locations. It
improves properties such as stability, transient response, and speed of response by feeding back all
the state variables.

2. What is the separation principle in control system design?

The separation principle allows independent design of the state estimator (observer) and the state
feedback controller. It ensures that the overall closed-loop system remains stable if both the
observer and controller are designed independently to be stable.

3. List the incidental type of nonlinearities.

Incidental nonlinearities are unintentional and often arise from system components. Examples
include:

 Saturation
 Dead-zone
 Hysteresis
 Backlash
 Coulomb friction

4. Define Phase Plane.

The phase plane is a graphical representation where system variables (usually position and velocity
or their derivatives) are plotted against each other to study system behavior and trajectories.

5. Write describing function for Backlash.

The describing function of backlash is complex-valued in general, since backlash is a double-valued
(memory) nonlinearity and therefore introduces a phase lag as well as a gain change. A commonly
quoted approximate form is:

N(A) = \frac{4M}{\pi A} \cos(\alpha)

where A is the input amplitude, M the output amplitude, and \alpha a phase angle determined by the
backlash width.

6. Define describing function.

A describing function is an approximate method for analyzing nonlinear systems in the frequency
domain. It provides a gain and phase relationship as a function of input amplitude for certain
nonlinearities.

7. What is optimal control?

Optimal control is a control strategy that determines the control input to a system that minimizes (or
maximizes) a performance index or cost function over time, such as fuel, energy, or time.

8. Differentiate between time-varying and steady-state optimal control.

Time-varying: gains or parameters change with time; more complex, but more accurate over a finite
horizon.

Steady-state: gains are constant; simpler, and used when the Riccati solution has settled to a
constant value (infinite-horizon case) or when the dynamics are stable and time-invariant.

9. What is the duality principle in optimal estimation?

The duality principle states that the design of an optimal estimator (such as the Kalman filter) is
mathematically dual to the optimal regulator (LQR) problem: the same equations apply with (A, B)
replaced by (A^T, C^T), i.e., with the roles of input and output interchanged.

10. What is the Extended Kalman filter?

The Extended Kalman Filter (EKF) is a nonlinear version of the Kalman filter where the system is
linearized around the current estimate using Jacobians. It is used for estimating the states of
nonlinear systems.

PART B – Q11(a) (13 marks)

11(a) Explain the necessary and sufficient condition for arbitrary pole placement. Why is
controllability crucial in state feedback design? Explain with a suitable example.

Pole Placement Condition:

The necessary and sufficient condition for arbitrary pole placement using state feedback is
that the system must be controllable.

Given the state equation:

\dot{x} = Ax + Bu

applying state feedback u = -Kx gives the closed-loop dynamics:

\dot{x} = (A - BK)x

To assign the desired eigenvalues (poles), the matrix A - BK must be freely adjustable, which is
possible if and only if the pair (A, B) is controllable.

Controllability:

A system is controllable if it is possible to move the system state from any initial state to any
final state in finite time using a suitable control input.

Mathematically, controllability is verified using the Controllability Matrix:

\mathcal{C} = [B \;\; AB \;\; A^2B \;\; \cdots \;\; A^{n-1}B]

If \text{rank}(\mathcal{C}) = n, the system is controllable.

Example:

Consider the system:

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad

B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

Controllability matrix:

\mathcal{C} = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}

\Rightarrow \text{rank}(\mathcal{C}) = 2

Hence, controllable.

Choose desired poles at s = -2 and s = -3. The desired characteristic equation is: s^2 + 5s + 6 = 0


Design the feedback gain K = [k_1 \quad k_2] such that:

A - BK = \begin{bmatrix} 0 & 1 \\ -k_1 & -k_2 \end{bmatrix}
\Rightarrow \text{characteristic equation: } s^2 + k_2 s + k_1 = 0

Equating coefficients: k_2 = 5, \quad k_1 = 6 \Rightarrow K = [6 \quad 5]

Thus, poles are placed as required.
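
A quick numerical check of this design (a minimal Python/NumPy sketch, not part of the original
answer):

import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[6.0, 5.0]])      # gain found above: K = [6 5]

Acl = A - B @ K                 # closed-loop matrix A - BK
print(np.linalg.eigvals(Acl))   # expected poles: -2 and -3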

11(b) – Separation Principle

The separation principle in control theory states that the design of a state observer (estimator)
and the state feedback controller can be done independently, and their combination will result in a
stable closed-loop system if both are stable individually.

System Description:

For a linear system: \dot{x} = Ax + Bu, \quad y = Cx

State observer (Luenberger observer):

\dot{\hat{x}} = A\hat{x} + Bu + L(y - \hat{y}) = A\hat{x} + Bu + L(Cx - C\hat{x})

Observer error dynamics (with e = x - \hat{x}):

\dot{e} = (A - LC)e

Control dynamics (with u = -K\hat{x} = -K(x - e)):

\dot{x} = (A - BK)x + BKe

Here, K is designed for the desired control performance and L for the observer convergence speed.
The combined system is block-triangular in (x, e), so its eigenvalues are the union of
\text{eig}(A - BK) and \text{eig}(A - LC); the two gain matrices can therefore be designed
independently.
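
The block-triangular structure can be verified numerically; in the sketch below the plant, the
output matrix, and both gain choices are illustrative assumptions (double integrator with C = [1 0],
controller poles at -2 and -3, observer poles at -5 and -6):

import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[6.0, 5.0]])       # eig(A - BK) = {-2, -3}
L = np.array([[11.0], [30.0]])   # eig(A - LC) = {-5, -6}

# Combined closed loop in (x, e) coordinates is block-triangular:
# [x_dot; e_dot] = [[A - BK, BK], [0, A - LC]] [x; e]
Aaug = np.block([[A - B @ K, B @ K],
                 [np.zeros((2, 2)), A - L @ C]])

# Spectrum is the union of the controller and observer designs
print(sorted(np.linalg.eigvals(Aaug).real))  # [-6.0, -5.0, -3.0, -2.0]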

12(a) – Singular Points in Nonlinear Systems

A singular point is a point where: \dot{x} = 0 \quad \text{and} \quad \dot{y} = 0

It is a critical point where the system state does not change — used in phase-plane analysis.

Types of Singular Points:

1. Node: real eigenvalues of the same sign; all trajectories converge to the point (stable node) or
diverge from it (unstable node).

2. Saddle Point: real eigenvalues of opposite sign; trajectories approach along one direction and
diverge along another; always unstable.

3. Focus/Spiral: complex-conjugate eigenvalues with nonzero real part; trajectories spiral inward
(stable focus) or outward (unstable focus).

4. Center: purely imaginary eigenvalues; closed circular or elliptical trajectories; neutrally
(marginally) stable.

12(b) – Phase Plane Trajectories (Relay System)

Initial conditions: as given in the problem statement.

Relay characteristic: ideal relay with output u = \pm 1.

Equation:

\ddot{y} = u, \quad u = \pm 1

\Rightarrow \text{second-order system with piecewise-constant input}

For u = +1: \ddot{y} = 1 \Rightarrow \dot{y} = t + C_1, \quad y = \frac{1}{2}t^2 + C_1t + C_2

For u = -1: \ddot{y} = -1 \Rightarrow \dot{y} = -t + C_1, \quad y = -\frac{1}{2}t^2 + C_1t + C_2

Eliminating t gives \frac{\dot{y}^2}{2} = u\,y + \text{const}, so the trajectories in the \dot{y}
vs. y plane are parabolas, one family for each relay mode.

Switching condition: the relay output changes sign when its input (the error signal) crosses zero.
Using this, alternate between the two parabola families to sketch the phase plane, as in the
simulation sketch below.
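
A numerical illustration (not part of the original answer): the following Python sketch simulates
the loop assuming the relay feedback u = -\text{sgn}(y); the stored points trace the parabolic arcs
described above.

import numpy as np

dt, T = 1e-3, 10.0
y, ydot = 1.0, 0.0              # assumed initial condition (the question's values would go here)
traj = []
for _ in range(int(T / dt)):
    u = -1.0 if y > 0 else 1.0  # ideal relay with output +/-1
    ydot += u * dt              # integrate y_ddot = u (forward Euler)
    y += ydot * dt
    traj.append((y, ydot))
# Plotting traj (y horizontal, ydot vertical) shows parabolic segments
# joined where the relay switches at y = 0.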

13(a) – Describing Function of Dead Zone

Dead Zone Nonlinearity (dead band of half-width \delta, slope K outside it):

Output = 0 if |x| \le \delta

Output = K(x - \delta\,\text{sgn}(x)) if |x| > \delta

Describing Function:

For a sinusoidal input x = A\sin\omega t with A > \delta:

N(A) = K\left[1 - \frac{2}{\pi}\left(\sin^{-1}\frac{\delta}{A} + \frac{\delta}{A}\sqrt{1 - \frac{\delta^2}{A^2}}\right)\right], \qquad N(A) = 0 \ \text{for} \ A \le \delta

Because the dead zone is a single-valued nonlinearity, N(A) is purely real: it reduces the effective
gain but introduces no phase shift.

Sketch: Input vs output plot: flat in center, linear outside

Describing function plot: starts at zero for A \le \delta and rises toward K as the input amplitude
increases
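
A short numerical evaluation of this describing function (a Python sketch; the slope and dead-band
values are illustrative):

import numpy as np

def dead_zone_df(A, K=1.0, delta=0.5):
    # N(A) for a dead zone; real-valued (no phase shift)
    if A <= delta:
        return 0.0
    r = delta / A
    return K * (1 - (2 / np.pi) * (np.arcsin(r) + r * np.sqrt(1 - r * r)))

for A in [0.5, 1.0, 2.0, 10.0]:
    print(A, dead_zone_df(A))   # N(A) rises from 0 toward K as A grows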

13(b) – Describing Function Analysis (Example)

System with relay nonlinearity:

Block diagram: relay nonlinearity in a feedback loop with a linear plant G(s)

Relay with output amplitude M:

Describing function:

N(A) = \frac{4M}{\pi A}

Analysis:

Apply the describing function method:

1. Find the open-loop frequency response G(j\omega) of the linear part

2. Solve the harmonic-balance condition G(j\omega) = -\frac{1}{N(A)} = -\frac{\pi A}{4M}

3. Determine the limit cycle amplitude A and frequency \omega from the intersection of the
G(j\omega) locus with the -1/N(A) locus

Example: Relay + second-order system → sustained oscillation (limit cycle)
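
A numeric sketch of this analysis (the question does not fix a plant, so G(s) = 1/(s(s+1)(s+2)) and
M = 1 below are illustrative assumptions):

import numpy as np

M = 1.0
w = np.sqrt(2.0)                 # phase of G is -180 deg where Im{jw(jw+1)(jw+2)} = 0
G = 1.0 / (1j * w * (1j * w + 1) * (1j * w + 2))
A = 4 * M * abs(G) / np.pi       # amplitude from |G(jw)| = pi*A/(4M)
print(w, abs(G), A)              # w ~ 1.414 rad/s, |G| = 1/6, A ~ 0.212

The predicted limit cycle oscillates at the frequency where the plant phase crosses -180 degrees,
with amplitude set by the relay describing function.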

14(a) – LQR Approach (Linear Quadratic Regulator)

\dot{x} = Ax + Bu

J = \int_{0}^{\infty} (x^T Q x + u^T R u) dt


Solution:

Optimal control: u(t) = -Kx(t)

Gain matrix: K = R^{-1} B^T P

where P satisfies the Algebraic Riccati Equation:

A^T P + P A - P B R^{-1} B^T P + Q = 0

Diagram: standard state-feedback structure with u = -Kx

The closed-loop poles are placed by LQR so as to minimize J, giving a systematic trade-off between
state error and control effort, and hence improved performance over arbitrary pole placement.
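
A minimal LQR computation via the algebraic Riccati equation (Python/SciPy sketch; the double
integrator and the weights Q = I, R = 1 are illustrative assumptions):

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state weighting (assumed)
R = np.array([[1.0]])   # control weighting (assumed)

P = solve_continuous_are(A, B, Q, R)    # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.inv(R) @ B.T @ P          # optimal gain K = R^-1 B' P
print(K, np.linalg.eigvals(A - B @ K))  # closed loop is stable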

14(b) – Time-Varying Optimal Control

Cost function:

J = \int_{0}^{t_f} \left[ x^T(t)Q(t)x(t) + u^T(t)R(t)u(t) \right] dt

Approach:

Solve the time-varying Riccati differential equation backward in time from the terminal condition
P(t_f):

-\dot{P}(t) = A^T P(t) + P(t) A - P(t) B R^{-1} B^T P(t) + Q

Control Law:

u(t) = -R^{-1} B^T P(t)\, x(t) = -K(t) x(t), \quad \text{with a time-varying gain } K(t)
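
A small integration sketch (Python/SciPy; the double-integrator matrices, the weights Q = I, R = 1,
and the terminal condition P(t_f) = 0 are illustrative assumptions):

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
tf = 5.0

def riccati_rhs(t, p):
    # dP/dt = -(A'P + PA - PBR^-1B'P + Q), flattened for the ODE solver
    P = p.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q)
    return dP.ravel()

# Integrate backward in time from t = tf (P(tf) = 0) to t = 0
sol = solve_ivp(riccati_rhs, [tf, 0.0], np.zeros(4))
P0 = sol.y[:, -1].reshape(2, 2)
K0 = np.linalg.inv(R) @ B.T @ P0   # time-varying gain evaluated at t = 0
print(P0, K0)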

15 (a) – Kalman Filter for Discrete-Time Systems

The Kalman filter is an optimal estimator that recursively estimates the state of a discrete-
time linear dynamic system from noisy measurements.

System Model:

State equation:

x_{k+1} = A x_k + B u_k + w_k

y_k = C x_k + v_k

Where: x_k is the state vector, u_k is the control input, y_k is the measured output, w_k is process
noise (zero-mean, white, covariance Q), and v_k is measurement noise (zero-mean, white, covariance R).

Kalman Filter Algorithm (Two Steps)

1. Prediction Step

Predicted state:

\hat{x}_{k|k-1} = A \hat{x}_{k-1|k-1} + B u_{k-1}

Predicted covariance:

P_{k|k-1} = A P_{k-1|k-1} A^T + Q

2. Update (Correction) Step

Kalman gain:
K_k = P_{k|k-1} C^T (C P_{k|k-1} C^T + R)^{-1}

Updated state estimate:

\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - C \hat{x}_{k|k-1})

Updated covariance:

P_{k|k} = (I - K_k C) P_{k|k-1}

Key Features:

 Works recursively; no need to store all past data
 Minimizes the mean-square estimation error
 Widely used in control, navigation, and signal processing

Applications:

 GPS tracking
 Robotics (position and velocity estimation)
 Financial systems
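
A minimal NumPy sketch of one predict/update cycle of the algorithm above (the constant-velocity
model and the noise levels are illustrative assumptions):

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model (assumed)
B = np.zeros((2, 1))
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[1.0]])                    # measurement noise covariance (assumed)

def kf_step(x_hat, P, u, y):
    # Prediction step
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Update (correction) step
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P = kf_step(x_hat, P, np.zeros((1, 1)), np.array([[1.2]]))
print(x_hat.ravel())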

15 (b) – Kalman-Bucy Filter for Continuous-Time Systems

The Kalman-Bucy filter is the continuous-time counterpart of the discrete Kalman filter.

System Model:

State equation:

\dot{x}(t) = A x(t) + B u(t) + w(t)

y(t) = C x(t) + v(t)

Where: w(t) is process noise (white Gaussian, covariance Q) and v(t) is measurement noise (white
Gaussian, covariance R).

Kalman-Bucy Filter Equations:

State estimate:

\frac{d}{dt} \hat{x}(t) = A \hat{x}(t) + B u(t) + K(t) [y(t) - C \hat{x}(t)]

Kalman gain:

K(t) = P(t) C^T R^{-1}

Covariance update: \frac{d}{dt} P(t) = A P(t) + P(t) A^T + Q - P(t) C^T R^{-1} C P(t)

Properties:

 Real-time continuous estimation


 Assumes continuous measurements and dynamics
 Uses a matrix Riccati differential equation for covariance update

Applications:

 Radar tracking
 Continuous control in aircraft and autonomous vehicles
 Continuous monitoring systems
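
The steady-state gain can be computed from the filter Riccati equation, which (by the duality
principle from Part A, Q9) is the regulator ARE with (A^T, C^T) in place of (A, B); a SciPy sketch
with illustrative matrices:

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)      # process noise covariance (assumed)
R = np.array([[1.0]])    # measurement noise covariance (assumed)

# Dual regulator problem: solves A P + P A' + Q - P C' R^-1 C P = 0
P = solve_continuous_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(R)            # steady-state gain K = P C' R^-1
print(K, np.linalg.eigvals(A - K @ C))    # observer dynamics are stable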

PART C

16(a): Derive the Optimal Control Law and Explain Use of the Algebraic Riccati Equation
System Description:

Consider the linear time-invariant system: \dot{x}(t) = A x(t) + B u(t)

Where x(t) is the state vector and u(t) is the control input.

🎯 Objective:

Design a control law that minimizes the quadratic cost function:

J = \int_0^\infty \left[ x^T(t) Q x(t) + u^T(t) R u(t) \right] dt

Where Q (positive semidefinite) is the state weighting matrix and R (positive definite) is the
control weighting matrix.

🧠 Optimal Control Law:

The optimal control law is a state feedback law of the form:

u(t) = -Kx(t), \quad K = R^{-1} B^T P

where P is a symmetric positive definite matrix that satisfies the Algebraic Riccati Equation (ARE).

📐 Algebraic Riccati Equation (ARE):

A^T P + P A - P B R^{-1} B^T P + Q = 0

This is a nonlinear matrix equation in P. Once P is computed, the optimal gain is K = R^{-1} B^T P.

✍ Derivation Sketch (Hamilton–Jacobi–Bellman Approach):

1. Define the value function (cost-to-go):

V(x) = x^T P x

2. Compute the Hamiltonian:

H = x^T Q x + u^T R u + \frac{\partial V}{\partial x} (Ax + Bu)

3. Set the derivative of H with respect to u to zero:

\frac{\partial H}{\partial u} = 2 R u + 2 B^T P x = 0

\Rightarrow u = -R^{-1} B^T P x

4. Substitute u back into the HJB equation and compare coefficients; this leads to the ARE, as
carried out below.
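
Carrying out step 4 explicitly (with V(x) = x^T P x and u^* = -R^{-1} B^T P x):

0 = x^T Q x + u^{*T} R u^* + 2x^T P (Ax + Bu^*)
  = x^T \left( Q + P B R^{-1} B^T P + P A + A^T P - 2 P B R^{-1} B^T P \right) x
  = x^T \left( A^T P + P A - P B R^{-1} B^T P + Q \right) x

Since this must hold for every x, the bracketed matrix must vanish, which is exactly the ARE.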

Optimal feedback: u(t) = -Kx(t) with gain matrix K = R^{-1} B^T P, where P is the solution of:

A^T P + P A - P B R^{-1} B^T P + Q = 0


LQR (Linear Quadratic Regulator) gives an optimal trade-off between control effort and system
performance. It guarantees closed-loop stability when (A, B) is controllable and (A, Q^{1/2}) is
observable. It is used in aerospace, robotics, and any linear system requiring optimal control.

16(b) – Pole Placement Design Technique:

Objective:

Design a state feedback controller:

u = -Kx

so that the closed-loop system

\dot{x} = (A - BK)x

has its poles at specified locations.

Key Concept:

You can assign arbitrary poles to the closed-loop system if and only if the system is controllable.

🧠 Steps in Pole Placement Design:

🔹 Step 1: Check Controllability

Compute the Controllability Matrix:

\mathcal{C} = [B \;\; AB \;\; A^2B \;\; \cdots \;\; A^{n-1}B]

If \text{rank}(\mathcal{C}) = n, the system is controllable and arbitrary pole placement is possible.

🔹 Step 2: Determine Desired Characteristic Equation

Based on performance specs (e.g., damping, overshoot), select desired poles \lambda_1, \lambda_2,
\ldots, \lambda_n. Form the desired characteristic polynomial:

p_{des}(s) = (s - \lambda_1)(s - \lambda_2)...(s - \lambda_n)

🔹 Step 3: Calculate Feedback Gain

Use methods such as:

 Ackermann’s formula (for single-input systems)
 Transformation to controllable canonical form (for manual design)

📘 Derivation Using Ackermann's Formula (Single-Input System)

\dot{x} = Ax + Bu

If the system is controllable, the gain matrix is:

K = [0 \; 0 \; \cdots \; 1] \cdot \mathcal{C}^{-1} \cdot p_{des}(A)

Where p_{des}(A) is the desired characteristic polynomial evaluated at the matrix A.

Example: Step-by-Step Pole Placement

Given System:

A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
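
The answer stops after stating the system; as a worked continuation (the desired poles s = -4 and
s = -5 below are an illustrative choice, not taken from the question), Ackermann's formula gives:

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

Cmat = np.hstack([B, A @ B])                 # controllability matrix [B AB]
p_des_A = A @ A + 9 * A + 20 * np.eye(2)     # p_des(s) = (s+4)(s+5) = s^2 + 9s + 20

K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Cmat) @ p_des_A
print(K)                                     # [[18. 6.]]
print(np.linalg.eigvals(A - B @ K))          # [-4. -5.]

Working by hand gives the same result: \mathcal{C} = \begin{bmatrix} 0 & 1 \\ 1 & -3 \end{bmatrix},
p_{des}(A) = \begin{bmatrix} 18 & 6 \\ -12 & 0 \end{bmatrix}, and K = [18 \quad 6], which places the
closed-loop poles at the assumed locations.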
