
Dynamic Systems - State Space Control

Lecture 3∗

September 11, 2012

∗ This work is being done by various members of the class of 2012.

Recall some basic facts about finite dimensional vector spaces.

Let V be a vector space whose field of scalars is R or C. There are two laws of combination: vector addition and scalar multiplication.

1 Vector Addition
• For v1, v2, v3 ∈ V, vector addition is associative:

(v1 + v2) + v3 = v1 + (v2 + v3)

• For v1, v2 ∈ V, vector addition is commutative:

v1 + v2 = v2 + v1

• There exists a zero vector 0 ∈ V such that

v + 0 = v for all v ∈ V

• For every v ∈ V, there is an additive inverse −v ∈ V such that v + (−v) = 0.


2 Scalar Multiplication
• For any α ∈ R (resp. C) and v ∈ V,

αv ∈ V

• For α, β ∈ R (resp. C),

α(βv) = (αβ)v (scalar multiplication is associative)

• For α ∈ R (resp. C) and v, w ∈ V,

α(v + w) = αv + αw (distributive over vector addition)

• For α, β ∈ R (resp. C) and v ∈ V,

(α + β)v = αv + βv (distributive over scalar addition)
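For the concrete case V = R^3 these axioms can be sanity-checked numerically. A minimal sketch, assuming NumPy; the particular vectors and scalars are arbitrary choices:

import numpy as np

# Arbitrary test vectors in V = R^3 and scalars in R
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([-4.0, 0.0, 5.0])
v3 = np.array([2.0, 2.0, -1.0])
alpha, beta = 2.5, -0.75

assert np.allclose((v1 + v2) + v3, v1 + (v2 + v3))              # associativity
assert np.allclose(v1 + v2, v2 + v1)                            # commutativity
assert np.allclose(v1 + np.zeros(3), v1)                        # zero vector
assert np.allclose(v1 + (-v1), np.zeros(3))                     # additive inverse
assert np.allclose(alpha * (beta * v1), (alpha * beta) * v1)    # scalar associativity
assert np.allclose(alpha * (v1 + v2), alpha * v1 + alpha * v2)  # distributive over vector addition
assert np.allclose((alpha + beta) * v1, alpha * v1 + beta * v1) # distributive over scalar addition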

A basis for a vector space V is a linearly independent set of vectors v1, . . . , vn that spans V. Saying that v1, . . . , vn spans V means that for every w ∈ V there are scalars α1, . . . , αn such that

w = α1 v1 + · · · + αn vn

The monomials 1, s, s^2, . . . , s^n are a basis of the vector space of polynomials p(s) having degree ≤ n. A vector space V is finite dimensional of dimension n if it has a basis with n elements.
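Finding the coordinates α1, . . . , αn of a given w amounts to solving a linear system whose coefficient matrix has the basis vectors as columns. A minimal sketch in R^3, assuming NumPy; the basis here is an arbitrary independent set:

import numpy as np

# Columns of B are the basis vectors v1, v2, v3 (arbitrary, linearly independent)
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
w = np.array([2.0, 3.0, 4.0])

alpha = np.linalg.solve(B, w)    # coordinates: w = alpha[0]*v1 + alpha[1]*v2 + alpha[2]*v3
assert np.allclose(B @ alpha, w)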

A finite dimensional vector space of dimension n over R (resp. C) is isomorphic to R^n (resp. C^n).

THEOREM: Any square matrix with distinct eigenvalues can be put into diagonal form by a change of basis.

PROOF: List the n eigenvectors corresponding to the distinct eigenvalues. Prove this is a basis (DO THIS!). With respect to this basis, the matrix has diagonal form.
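Numerically, the change of basis in the theorem is the matrix T whose columns are the eigenvectors; then T^{-1} A T is diagonal. A sketch assuming NumPy; the matrix A is an arbitrary example with distinct eigenvalues:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # distinct eigenvalues 2 and 3

eigvals, T = np.linalg.eig(A)        # columns of T are the eigenvectors
D = np.linalg.inv(T) @ A @ T         # A expressed in the eigenvector basis

assert np.allclose(D, np.diag(eigvals))  # diagonal, as the theorem asserts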

Matrices whose eigenvalues are not distinct

Examples:

1.
[ 1  0 ]
[ 0  1 ]

2.
[ λ  1 ]
[ 0  λ ]
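Example 1 has the repeated eigenvalue 1 but is already diagonal: its eigenspace is the whole space. Example 2, the 2 × 2 Jordan block, has only a one-dimensional eigenspace, which is what blocks diagonalization. A quick numerical check, assuming NumPy; λ = 2 is an arbitrary value:

import numpy as np

lam = 2.0                                  # arbitrary choice for λ
J = np.array([[lam, 1.0],
              [0.0, lam]])                 # example 2: a 2 x 2 Jordan block

# dimension of the eigenspace ker(J - lam*I) is 2 - rank(J - lam*I)
print(2 - np.linalg.matrix_rank(J - lam * np.eye(2)))   # 1: too few eigenvectors to diagonalize

I2 = np.eye(2)                             # example 1: eigenvalue 1 repeated
print(2 - np.linalg.matrix_rank(I2 - 1.0 * np.eye(2)))  # 2: eigenspace is all of C^2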

Jordan Canonical Form:

Recall that the eigenvalues of a matrix A are the roots of the characteristic polynomial,

p(λ) = det(λI − A)

Suppose p(λ) can be factored

p(λ) = (λ − λ1)^{r1} · · · (λ − λp)^{rp}

where λi ≠ λj for i ≠ j (to be sure that such a factorisation exists, we must allow for complex eigenvalues).

For each λj we are interested in the associated generalized eigenspaces,

M^k = ker(A − λj I)^k = {v ∈ V : (A − λj I)^k v = 0}

What are these?

M^0 = ker(A − λj I)^0 = ker(I) = {0}

M^1 = ker(A − λj I) = the eigenspace of λj

For v ∈ M^k,

(A − λj I)^{k+1} v = (A − λj I)(A − λj I)^k v = (A − λj I) 0 = 0

Therefore M^k ⊂ M^{k+1}, and we have an increasing chain of subspaces M^0 ⊂ M^1 ⊂ · · ·

Since M^k ⊂ C^n for all k, the dimensions dim M^k are bounded by n, so there is some value t such that M^k = M^t for all k ≥ t. Let t be the smallest such integer, and call M^t = M(λj).
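The chain and its stabilization can be watched directly by computing dim M^k = n − rank((A − λj I)^k). A sketch assuming NumPy; the matrix is an arbitrary example with eigenvalues 2, 2, 5, for which t = 2:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])      # arbitrary example: eigenvalues 2, 2, 5
lam = 2.0
n = A.shape[0]
N = A - lam * np.eye(n)

for k in range(4):
    dim_Mk = n - np.linalg.matrix_rank(np.linalg.matrix_power(N, k))
    print(k, dim_Mk)                 # prints dims 0, 1, 2, 2 -- the chain stabilizes at t = 2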

Another (this time decreasing) subspace chain which is of interest is

W^k = (A − λj I)^k C^n

Here we have C^n = W^0 ⊃ W^1 ⊃ · · ·

Let m(λj) = dim M(λj). Then dim W^t = n − m(λj) (since the dimensions of the range space and the null space of (A − λj I)^t must add up to n). We must have

W^k = W^t for k ≥ t

and we denote W^t by W(λj).
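The decreasing chain can be watched the same way, since dim W^k = rank((A − λj I)^k). Continuing the sketch above (same A, lam, N, and n):

# Continuing the previous sketch: dim W^k = rank((A - lam*I)^k)
for k in range(4):
    dim_Wk = np.linalg.matrix_rank(np.linalg.matrix_power(N, k))
    print(k, dim_Wk)                 # prints dims 3, 2, 1, 1 -- dim W^t = 1 = n - m(λj)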

Proposition 1: C^n is the direct sum of M(λj) and W(λj) (i.e. any v ∈ C^n may be written uniquely as v = vm + wm, where vm ∈ M(λj) and wm ∈ W(λj)).

Proof: Since (A − λj I)W(λj) = (A − λj I)^{t+1} C^n = (A − λj I)^t C^n = W(λj), we see that A − λj I is nonsingular on W(λj). Let v be any vector in C^n, and let w = (A − λj I)^t v ∈ W(λj). Because (A − λj I)^t is nonsingular on W(λj), there is a unique wm ∈ W(λj) such that (A − λj I)^t wm = w. Let vm = v − wm. It is easily seen that (A − λj I)^t vm = w − w = 0, so that vm ∈ M(λj).

Hence C^n = M(λj) + W(λj), and it remains only to show that the expression v = vm + wm is unique. This follows since dim M(λj) + dim W(λj) = n, so any pair of respective bases {v1, . . . , vm} for M(λj) and {w1, . . . , wn−m} for W(λj) (with m = m(λj)) together defines a basis for C^n. Thus

C^n = M(λj) ⊕ W(λj)

where ⊕ is the notation for direct sum.
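For the example used above, Proposition 1 can be checked numerically: stacking a basis of M(λj) next to a basis of W(λj) gives an n × n matrix of full rank, which is exactly the direct-sum condition. A sketch assuming NumPy and SciPy's null_space/orth helpers:

import numpy as np
from scipy.linalg import null_space, orth

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])      # eigenvalues 2, 2, 5 again
lam, t, n = 2.0, 2, 3
Nt = np.linalg.matrix_power(A - lam * np.eye(n), t)

M_basis = null_space(Nt)             # basis of M(λj) = ker((A - λj I)^t)
W_basis = orth(Nt)                   # basis of W(λj) = range((A - λj I)^t)

P = np.hstack([M_basis, W_basis])    # n x n, since dim M + dim W = n
assert np.linalg.matrix_rank(P) == n # full rank: C^n = M(λj) ⊕ W(λj)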

Proposition 2: For any i, j, A − λi I maps M(λj) and W(λj) into themselves.

Proof: In the case i = j we've shown that (A − λj I)M^{k+1} ⊂ M^k ⊂ M^{k+1} and (A − λj I)W^k = W^{k+1} ⊂ W^k. This implies W(λj) and M(λj) are invariant under A − λj I. But from this it follows that these subspaces are invariant under any polynomial in A − λj I; in particular, under A − λi I = (A − λj I) + (λj − λi)I.
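Invariance is also easy to test numerically: apply A − λi I to a basis of M(λj) and check that the image columns still lie in the span of that basis. Continuing with the same example matrix (λj = 2, λi = 5):

import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = 3
M_basis = null_space(np.linalg.matrix_power(A - 2.0 * np.eye(n), 2))  # basis of M(λj), λj = 2

image = (A - 5.0 * np.eye(n)) @ M_basis          # apply A - λi I with λi = 5
# invariance: each image column must be a combination of the basis columns
coeffs, *_ = np.linalg.lstsq(M_basis, image, rcond=None)
assert np.allclose(M_basis @ coeffs, image)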
Proposition 3: For λi ≠ λj, M(λi) ⊂ W(λj)

Proof: Let v ∈ M(λi). I claim that, for j ≠ i, no nonzero v ∈ M(λi) can lie in the kernel of A − λj I. Suppose to the contrary that v ∈ ker(A − λj I). Then,

(λj − λi)^{ti} v = {(A − λi I) − (A − λj I)}^{ti} v

= (A − λi I)^{ti} v + Σ_{k=1}^{ti} (ti choose k) (−1)^k (A − λi I)^{ti−k} (A − λj I)^k v = 0

(the first term = 0 because v ∈ M(λi); the remaining terms are zero under the assumption v ∈ ker(A − λj I), since the factors A − λi I and A − λj I commute).

Since λi − λj ≠ 0, this implies v = 0. This in turn implies that (A − λj I)|M(λi) (the linear transformation restricted to the subspace M(λi)) is nonsingular. This means that A − λj I maps M(λi) onto itself, and hence M(λi) = (A − λj I)^{tj} M(λi) is contained in the range of (A − λj I)^{tj}, which is just W(λj).

Proposition 4: C^n = M(λ1) ⊕ M(λ2) ⊕ · · · ⊕ M(λp)

Remark: The direct sum notation means that any vector v ∈ C^n can be uniquely expressed as a sum, v = v1 + · · · + vp, vj ∈ M(λj)

Proof: C^n = M(λ1) ⊕ W(λ1), and since C^n = M(λ2) ⊕ W(λ2) with M(λ2) ⊂ W(λ1) (Proposition 3), we can write

C^n = M(λ1) ⊕ (W(λ1) ∩ [M(λ2) ⊕ W(λ2)])

= M(λ1) ⊕ ([W(λ1) ∩ M(λ2)] ⊕ [W(λ1) ∩ W(λ2)])
= M(λ1) ⊕ M(λ2) ⊕ (W(λ1) ∩ W(λ2))

Similarly we obtain

C^n = M(λ1) ⊕ M(λ2) ⊕ M(λ3) ⊕ (W(λ1) ∩ W(λ2) ∩ W(λ3))

Eventually we obtain

C^n = M(λ1) ⊕ M(λ2) ⊕ · · · ⊕ M(λp) ⊕ (W(λ1) ∩ · · · ∩ W(λp))



To complete the proof, we show that

W(λ1) ∩ · · · ∩ W(λp) = {0}

Since each W(λi) is invariant under all A − λj I, the intersection W(λ1) ∩ · · · ∩ W(λp) is invariant as well. Moreover, A − λj I is nonsingular on W(λ1) ∩ · · · ∩ W(λp); otherwise it would be singular on every W(λi) containing the intersection, and on W(λj) in particular, contradicting the nonsingularity of A − λj I on W(λj) established in the proof of Proposition 1. Hence,

(A − λ1 I)^{r1} (A − λ2 I)^{r2} · · · (A − λp I)^{rp}

is nonsingular on W(λ1) ∩ · · · ∩ W(λp). But this operator is just p(A), which maps all vectors in C^n to zero by the Cayley–Hamilton theorem, so the only way it can be nonsingular on W(λ1) ∩ · · · ∩ W(λp) is if this intersection is the zero subspace.
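Proposition 4 can also be verified numerically for the running example: stacking bases of all the generalized eigenspaces gives an invertible matrix P, and in that basis A becomes block diagonal, one block per eigenvalue. A sketch assuming NumPy and SciPy:

import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])      # eigenvalues 2 (multiplicity 2) and 5
n = 3

# generalized eigenspaces: M(2) = ker((A - 2I)^2), M(5) = ker((A - 5I)^1)
blocks = [null_space(np.linalg.matrix_power(A - lam * np.eye(n), r))
          for lam, r in [(2.0, 2), (5.0, 1)]]

P = np.hstack(blocks)                         # bases side by side
assert np.linalg.matrix_rank(P) == n          # C^n = M(2) ⊕ M(5)
print(np.round(np.linalg.inv(P) @ A @ P, 6))  # block diagonal: a 2x2 block and a 1x1 block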
