
Chapter 5: Eigenvalue problems

Consider the eigenvalue problem

Ax = λx,

where A ∈ R^(n×n), λ ∈ R or λ ∈ C is an eigenvalue of A, and x ∈ R^n is the eigenvector
corresponding to the eigenvalue λ.
The eigenvalues of a matrix A are given by the roots of the characteristic equation

|λI − A| = 0.

Note: An eigenvalue of a square matrix A can be zero, but an eigenvector of a square matrix A
can never be the zero vector.
Example

Given the matrix

A = [ 0  0  −2
      1  2   1
      1  0   3 ],

a. find the eigenvalues of A;
b. find the eigenvectors of A.

a. det(λI − A) = | λ    0    2
                  −1  λ−2   −1  | = λ[(λ−2)(λ−3)] + 2(λ−2) = (λ−2)(λ² − 3λ + 2)
                  −1   0   λ−3

det(λI − A) = 0 ⇒ (λ−2)(λ² − 3λ + 2) = 0 ⇒ λ₁ = 2, λ₂ = 2, and λ₃ = 1.


b. For λ = 2, let the corresponding eigenvector be (x, y, z)ᵀ. Substituting λ = 2 into
(λI − A)(x, y, z)ᵀ = 0 gives

[ 2  0  2 ] [x]   [0]
[−1  0 −1 ] [y] = [0]  ⇒ 2x + 2z = 0 ⇒ x = −z.
[−1  0 −1 ] [z]   [0]

Let x = s and y = t. Then

(x, y, z)ᵀ = (s, t, −s)ᵀ = s(1, 0, −1)ᵀ + t(0, 1, 0)ᵀ,  t, s ∈ R,

so (1, 0, −1)ᵀ and (0, 1, 0)ᵀ are the eigenvectors corresponding to λ = 2.
For λ = 1, let the corresponding eigenvector be (x, y, z)ᵀ. Substituting λ = 1 gives

[ 1  0  2 ] [x]   [0]
[−1 −1 −1 ] [y] = [0]
[−1  0 −2 ] [z]   [0]

⇒ x + 2z = 0 ⇒ x = −2z, and −x − y − z = 0. Since x = −2z, −(−2z) − y − z = 0 ⇒ y = z. Hence

(x, y, z)ᵀ = (−2z, z, z)ᵀ = z(−2, 1, 1)ᵀ,  z ∈ R,

so (−2, 1, 1)ᵀ is the eigenvector corresponding to λ = 1.

Properties of eigenvalues

1. The sum of the eigenvalues of a matrix equals the sum of the elements of its principal
   (main) diagonal.
2. The product of the eigenvalues of a square matrix A equals the determinant of A.
3. If λ is an eigenvalue of a square matrix A, then 1/λ is an eigenvalue of the inverse
   matrix A⁻¹.
4. If λ₁, λ₂, …, λₙ are the eigenvalues of a square matrix A, then Aᵐ has the eigenvalues
   λ₁ᵐ, λ₂ᵐ, …, λₙᵐ, where m is a positive integer.
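For instance, these properties can be checked numerically for the 3 × 3 matrix of the example
above (a minimal NumPy sketch of our own; it is an illustration, not part of the method):

    import numpy as np

    A = np.array([[0., 0., -2.],
                  [1., 2., 1.],
                  [1., 0., 3.]])
    lam = np.linalg.eigvals(A)                # eigenvalues 2, 2, 1 (in some order)

    print(lam.sum(), np.trace(A))             # property 1: sum of eigenvalues = trace = 5
    print(lam.prod(), np.linalg.det(A))       # property 2: product of eigenvalues = det(A) = 4
    print(np.sort(1 / lam),                   # property 3: eigenvalues of the inverse are 1/λ
          np.sort(np.linalg.eigvals(np.linalg.inv(A))))
    print(np.sort(lam ** 2),                  # property 4 with m = 2: eigenvalues of A² are λ²
          np.sort(np.linalg.eigvals(A @ A)))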
Methods of finding eigenvalues

There are several methods for finding the eigenvalues of a general matrix or a symmetric
matrix. Among these methods we will see only the power method and Householder's method for
finding the eigenvalues of a matrix together with the corresponding eigenvectors.

1. Power method

The method for finding the largest eigenvalue in magnitude, and the corresponding
eigenvector, of the eigenvalue problem Ax = λx is called the power method.

What is the importance of this method? The necessary and sufficient condition for
convergence of the Gauss-Jacobi and Gauss-Seidel iteration methods is that the spectral
radius of the iteration matrix H is less than one, that is, ρ(H) < 1, where ρ(H) is the
largest eigenvalue in magnitude of H, ρ(H) = max_{1≤i≤n} |λᵢ|. If we write the matrix
formulations of the methods, then we know H. We can then find the largest eigenvalue in
magnitude of H, which determines whether the methods converge or not.
We assume that λ₁, λ₂, …, λₙ are distinct eigenvalues such that |λ₁| > |λ₂| > … > |λₙ|.
Consider Ax = b. We split the matrix A as A = D + L + U, where

A = [ a₁₁  ⋯  a₁ₙ
       ⋮   ⋱   ⋮
      aₙ₁  ⋯  aₙₙ ],

D = [ a₁₁  ⋯   0
       ⋮   ⋱   ⋮      → diagonal part of A,
       0   ⋯  aₙₙ ]

L = [  0   ⋯   0
       ⋮   ⋱   ⋮      → strictly lower triangular part of A,
      aₙ₁  ⋯   0 ]

U = [  0   ⋯  a₁ₙ
       ⋮   ⋱   ⋮      → strictly upper triangular part of A.
       0   ⋯   0 ]

Ax = b ⟹ (D + L + U)x = b ⟹ Dx = −(L + U)x + b ⟹ x^(n+1) = −D⁻¹(L + U)x^(n) + D⁻¹b
⟹ x^(n+1) = Hx^(n) + C, where H = −D⁻¹(L + U) and C = D⁻¹b.
Hence H = −D⁻¹(L + U) is called the Jacobi iteration matrix.
Also, from Ax = b ⟹ (D + L + U)x = b ⟹ (D + L)x = −Ux + b ⟹ x^(n+1) = −(D + L)⁻¹Ux^(n) + (D + L)⁻¹b
⟹ x^(n+1) = Hx^(n) + C, where H = −(D + L)⁻¹U and C = (D + L)⁻¹b.
Hence H = −(D + L)⁻¹U is called the Gauss-Seidel iteration matrix.
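As an illustration (our own NumPy sketch, not part of the text), the two iteration matrices
can be formed from this splitting and the condition ρ(H) < 1 checked directly; the example
matrix below is an assumption, chosen to be diagonally dominant so that both methods converge:

    import numpy as np

    def iteration_matrices(A):
        """Jacobi and Gauss-Seidel iteration matrices for the splitting A = D + L + U."""
        D = np.diag(np.diag(A))          # diagonal part
        L = np.tril(A, k=-1)             # strictly lower triangular part
        U = np.triu(A, k=1)              # strictly upper triangular part
        H_jacobi = -np.linalg.inv(D) @ (L + U)
        H_gauss_seidel = -np.linalg.inv(D + L) @ U
        return H_jacobi, H_gauss_seidel

    def spectral_radius(H):
        return np.max(np.abs(np.linalg.eigvals(H)))

    A = np.array([[4., 1., 1.],
                  [1., 5., 2.],
                  [1., 2., 6.]])
    Hj, Hg = iteration_matrices(A)
    print(spectral_radius(Hj), spectral_radius(Hg))   # both less than 1 for this matrix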
Let v₁, v₂, …, vₙ be the eigenvectors corresponding to the eigenvalues λ₁, λ₂, …, λₙ,
respectively. The method is applicable if a complete system of n linearly independent
eigenvectors exists, even though some of the eigenvalues λ₁, λ₂, …, λₙ may not be distinct.
The n linearly independent eigenvectors form an n-dimensional vector space. Any vector V in
this space can be written as a linear combination of the eigenvectors v₁, v₂, …, vₙ, that is,

V = c₁v₁ + c₂v₂ + … + cₙvₙ.

Since Ax = λx we have

Av₁ = λ₁v₁, Av₂ = λ₂v₂, …, Avₙ = λₙvₙ

⟹ AV = c₁λ₁v₁ + c₂λ₂v₂ + … + cₙλₙvₙ = λ₁(c₁v₁ + c₂(λ₂/λ₁)v₂ + … + cₙ(λₙ/λ₁)vₙ).

Now we find A²V, A³V, …, A^k V, A^(k+1) V.

We have

A²V = λ₁²(c₁v₁ + c₂(λ₂/λ₁)²v₂ + … + cₙ(λₙ/λ₁)²vₙ),
A³V = λ₁³(c₁v₁ + c₂(λ₂/λ₁)³v₂ + … + cₙ(λₙ/λ₁)³vₙ),
⋯
A^k V = λ₁^k (c₁v₁ + c₂(λ₂/λ₁)^k v₂ + … + cₙ(λₙ/λ₁)^k vₙ),
A^(k+1) V = λ₁^(k+1) (c₁v₁ + c₂(λ₂/λ₁)^(k+1) v₂ + … + cₙ(λₙ/λ₁)^(k+1) vₙ).

Since |λ₁| > |λ₂| > … > |λₙ|, we have −1 < λᵢ/λ₁ < 1 for i = 2, …, n, so (λᵢ/λ₁)^(k+1) → 0
as k → ∞. Therefore A^k V → λ₁^k c₁v₁ and A^(k+1) V → λ₁^(k+1) c₁v₁ as k → ∞, and

lim_{k→∞} (A^(k+1) V) / (A^k V) = λ₁^(k+1) c₁v₁ / (λ₁^k c₁v₁) = λ₁.

Componentwise,

λ₁ = lim_{k→∞} (A^(k+1) V)ᵣ / (A^k V)ᵣ,  r = 1, 2, 3, …, n,

where the suffix r denotes the r-th component of the vector. Therefore we obtain n ratios,
all of them tending to the same value λ₁, the eigenvalue of largest magnitude.
When do we stop the iteration? The iterations are stopped when the magnitudes of the
differences of the ratios are all less than the given error tolerance.

Remark: The choice of the initial approximation vector v₀ is important. If no suitable
approximation is available, we can choose v₀ with all its components equal to one, that is,
v₀ = (1, 1, 1, …, 1)ᵀ. However, this initial approximation must not be orthogonal to v₁.

Remark: Faster convergence is obtained when |λ₂| ≪ |λ₁|.

As k → ∞, repeated premultiplication by A may introduce round-off errors. In order to keep
the round-off errors under control, we normalize the vector before premultiplying by A. The
normalization we use makes the largest element in magnitude equal to one. With this
normalization, a simple algorithm for the power method can be written as follows.
yₖ₊₁ = A vₖ
vₖ₊₁ = yₖ₊₁ / mₖ₊₁

where mₖ₊₁ is the element of largest magnitude of yₖ₊₁. The largest element in magnitude of
vₖ₊₁ is then equal to one, and

λ₁ = lim_{k→∞} (A^(k+1) V)ᵣ / (A^k V)ᵣ  can be written as  λ₁ = lim_{k→∞} (yₖ₊₁)ᵣ / (vₖ)ᵣ,
r = 1, 2, 3, …, n,

and vₖ₊₁ is the required eigenvector.

Remark: It may be noted that as k → ∞, mₖ₊₁ also gives |λ₁|.
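This algorithm translates directly into a short function (a minimal Python/NumPy sketch of
our own; the function name and the stopping rule on successive values of mₖ₊₁ are our
simplifying assumptions, not from the text):

    import numpy as np

    def power_method(A, v0, tol=1e-5, max_iter=500):
        """Normalized power method: approximates the eigenvalue of largest
        magnitude of A and a corresponding eigenvector."""
        v = np.asarray(v0, dtype=float)
        m_old = None
        for _ in range(max_iter):
            y = A @ v                        # y_{k+1} = A v_k
            m = y[np.argmax(np.abs(y))]      # m_{k+1}: element of largest magnitude of y_{k+1}
            v = y / m                        # v_{k+1} = y_{k+1} / m_{k+1}
            if m_old is not None and abs(m - m_old) < tol:
                break                        # successive eigenvalue estimates agree
            m_old = m
        return m, v                          # m ≈ λ₁, v ≈ corresponding eigenvector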

Inverse Power Method


The inverse power method can give an approximation to any eigenvalue. However, it is usually
used to find the smallest eigenvalue in magnitude, and the corresponding eigenvector, of a
given matrix A.

The eigenvectors are computed very accurately by this method. Furthermore, the method can
calculate the eigenvectors accurately even when the eigenvalues are not well separated, a
case in which the power method converges very slowly.

If λ is an eigenvalue of A, then 1/λ is an eigenvalue of A⁻¹ corresponding to the same
eigenvector. The smallest eigenvalue in magnitude of A is therefore the largest eigenvalue
in magnitude of A⁻¹. Choose an arbitrary vector y₀ (non-orthogonal to x) and apply the power
method to A⁻¹.
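In practice one does not form A⁻¹ explicitly; each step of the power method applied to A⁻¹
can be carried out by solving a linear system. A minimal sketch of our own, under that
assumption:

    import numpy as np

    def inverse_power_method(A, v0, tol=1e-6, max_iter=500):
        """Inverse power method: power method applied to the inverse of A.
        Returns the eigenvalue of A of smallest magnitude and its eigenvector."""
        v = np.asarray(v0, dtype=float)
        mu_old = None
        for _ in range(max_iter):
            y = np.linalg.solve(A, v)        # y_{k+1} = A^(-1) v_k, without forming A^(-1)
            mu = y[np.argmax(np.abs(y))]     # estimate of the dominant eigenvalue of A^(-1)
            v = y / mu                       # normalize
            if mu_old is not None and abs(mu - mu_old) < tol:
                break
            mu_old = mu
        return 1.0 / mu, v                   # 1/mu is the eigenvalue of A smallest in magnitude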

Example: Find the largest eigenvalue in modulus, and the corresponding eigenvector, of the
matrix

a. A = [ 1  2
         3  4 ]

b. A = [ 25  1   2
          1  3   0
          2  0  −4 ]

Solution: Let the initial approximation to the eigenvector be v₀. Then the power method is
given by

yₖ₊₁ = A vₖ
vₖ₊₁ = yₖ₊₁ / mₖ₊₁

where mₖ₊₁ is the element of largest magnitude of yₖ₊₁. The dominant eigenvalue in magnitude
is given by

λ₁ = lim_{k→∞} (yₖ₊₁)ᵣ / (vₖ)ᵣ,  r = 1, 2, 3, …, n,

and vₖ₊₁ is the required eigenvector.

a. Let v₀ = [1, 1]ᵀ. We have the following results.

y₁ = A v₀ = [3, 7]ᵀ,              m₁ = 7,        v₁ = y₁/m₁ = [3/7, 7/7]ᵀ = [0.42857, 1]ᵀ
y₂ = A v₁ = [2.42857, 5.28571]ᵀ,  m₂ = 5.28571,  v₂ = y₂/m₂ = [0.45946, 1]ᵀ
y₃ = A v₂ = [2.45946, 5.37838]ᵀ,  m₃ = 5.37838,  v₃ = y₃/m₃ = [0.45729, 1]ᵀ
y₄ = A v₃ = [2.45729, 5.37187]ᵀ,  m₄ = 5.37187,  v₄ = y₄/m₄ = [0.45744, 1]ᵀ
y₅ = A v₄ = [2.45744, 5.37232]ᵀ,  m₅ = 5.37232,  v₅ = y₅/m₅ = [0.45743, 1]ᵀ
y₆ = A v₅ = [2.45743, 5.37229]ᵀ
Now we find the ratios

λ₁ = lim_{k→∞} (yₖ₊₁)ᵣ / (vₖ)ᵣ,  r = 1, 2  (r indexes the components of yₖ₊₁ and vₖ).

We obtain

for r = 1:  2.45743 / 0.45743 = 5.37225,    for r = 2:  5.37229 / 1 = 5.37229.

The magnitude of the difference between the ratios is |5.37225 − 5.37229| = 0.00004 < 0.00005.
Hence the dominant eigenvalue is λ₁ ≈ 5.3723, and [0.45743, 1]ᵀ is the corresponding
eigenvector.
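These iterations can be reproduced with a few lines of NumPy (our own check of the worked
example, not part of the original text):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    v = np.array([1., 1.])                 # v_0
    for k in range(6):
        y = A @ v                          # y_{k+1} = A v_k
        m = y[np.argmax(np.abs(y))]        # m_{k+1}
        print(k + 1, y, m)
        v = y / m                          # v_{k+1}

The last value of m printed is about 5.3723, in agreement with the exact dominant eigenvalue
(5 + √33)/2 ≈ 5.37228 of this matrix.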

Householder Algorithm

In Householder's method, A is reduced to tridiagonal form by orthogonal transformations
representing reflections. This reduction is done in exactly n − 2 transformations.
The transformations are of the form

P = I − 2WWᵀ,  where W ∈ Rⁿ, Wᵀ = [x₁, x₂, …, xₙ] and WᵀW = x₁² + x₂² + … + xₙ² = 1.

P is symmetric and orthogonal. The vectors W are constructed with the first (r − 1)
components equal to zero, that is,

Wᵣᵀ = (0, 0, …, 0, xᵣ, xᵣ₊₁, …, xₙ)  with  xᵣ² + xᵣ₊₁² + … + xₙ² = 1.

With this choice of Wᵣ, form the matrices Pᵣ = I − 2WᵣWᵣᵀ. The similarity transformation is
given by

Pᵣ⁻¹ A Pᵣ = Pᵣᵀ A Pᵣ = Pᵣ A Pᵣ.

Put A = A₁ and form successively

Aᵣ = Pᵣ Aᵣ₋₁ Pᵣ,  r = 2, 3, …, n − 1.

In the first transformation, we find the xᵣ's such that we get zeros in the positions
(1, 3), (1, 4), …, (1, n) and in the corresponding positions in the first column. In the
second transformation, we find the xᵣ's such that we get zeros in the positions (2, 4),
(2, 5), …, (2, n) and in the corresponding positions in the second column. In (n − 2)
transformations, A is reduced to tridiagonal form.
Note:
1. A square matrix A of order n is said to be an orthogonal matrix if and only if
   AAᵀ = I = AᵀA.
2. An n × n matrix A is called a tridiagonal matrix if aᵢⱼ = 0 whenever |i − j| > 1.

For example, consider

A = [ a₁₁  a₁₂  a₁₃  a₁₄
      a₂₁  a₂₂  a₂₃  a₂₄
      a₃₁  a₃₂  a₃₃  a₃₄
      a₄₁  a₄₂  a₄₃  a₄₄ ].
For the first transformation, choose

W₂ᵀ = [0, x₂, x₃, x₄]  with  x₂² + x₃² + x₄² = 1.

We find

S₁ = √(a₁₂² + a₁₃² + a₁₄²),
x₂² = (1/2)(1 + a₁₂ sign(a₁₂)/S₁),
x₃ = a₁₃ sign(a₁₂)/(2 S₁ x₂),
x₄ = a₁₄ sign(a₁₂)/(2 S₁ x₂).

This transformation produces two zeros in the first row and first column. One more
transformation produces zeros in the (2, 4) and (4, 2) positions.
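These formulas assemble into a small routine (a sketch of our own, assuming a symmetric
input matrix; the function name and the zero-based indexing are our choices, not from the
text):

    import numpy as np

    def householder_tridiagonalize(A):
        """Reduce a symmetric matrix to tridiagonal form by n - 2 Householder
        similarity transformations P_r = I - 2 W_r W_r^T, built as described above."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        for r in range(n - 2):                       # r = 0 corresponds to W_2 above
            a = A[r, r + 1:]                         # entries of the current row right of the diagonal
            S = np.sqrt(np.sum(a ** 2))              # S_r
            if S == 0.0:
                continue                             # nothing to eliminate in this row
            sgn = 1.0 if a[0] >= 0 else -1.0         # sign of the first off-diagonal entry
            W = np.zeros(n)
            W[r + 1] = np.sqrt(0.5 * (1.0 + sgn * a[0] / S))
            W[r + 2:] = sgn * a[1:] / (2.0 * S * W[r + 1])
            P = np.eye(n) - 2.0 * np.outer(W, W)     # symmetric, orthogonal reflection
            A = P @ A @ P                            # similarity transformation A_{r+1} = P A_r P
        return A

Applied to the 3 × 3 matrix of the example below, this reproduces A₂ up to rounding error.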

Example
Find all the eigenvalues of the matrix

A = [ 1  2  −1
      2  1   2
     −1  2   1 ]

using the Householder method.

Solution
Choose W₂ᵀ = [0, x₂, x₃] such that x₂² + x₃² = 1. The parameters in the first Householder
transformation are obtained as follows:

S₁ = √(a₁₂² + a₁₃²) = √(2² + (−1)²) = √5,

x₂² = (1/2)(1 + a₁₂ sign(a₁₂)/S₁) = (1/2)(1 + 2/√5) = (2 + √5)/(2√5)
    ⟹ x₂ = √((2 + √5)/(2√5)),

x₃ = a₁₃ sign(a₁₂)/(2 S₁ x₂) = −1/(2√5 x₂) = −√(2√5)/(2√5 √(2 + √5)),

P₂ = I₃ − 2W₂W₂ᵀ = [ 1    0      0
                     0  −2/√5   1/√5
                     0   1/√5   2/√5 ].

The required Householder transformation is

A₂ = P₂A₁P₂ = [  1   −√5    0
               −√5  −3/5  −6/5     where A₁ = A.
                 0  −6/5  13/5 ],
To find the eigenvalues of A, let fᵣ denote the determinant of the leading r × r principal
minor of

λI − A₂ = [ λ−1    √5      0
             √5   λ+3/5   6/5
              0    6/5   λ−13/5 ],

with f₀ = 1. Then

f₁ = λ − 1,

f₂ = | λ−1    √5
       √5   λ+3/5 | = λ² − (2/5)λ − 28/5,

f₃ = det(λI − A₂) = λ³ − 3λ² − 6λ + 16 = (λ − 2)(λ² − λ − 8).

The signs of f₀, f₁, f₂, f₃ at integer values of λ are:

  λ    f₀   f₁   f₂   f₃
 −3    +    −    +    −
 −2    +    −    −    +
 −1    +    −    −    +
  0    +    −    −    +
  1    +    0    −    +
  2    +    +    −    0
  3    +    +    +    −
  4    +    +    +    +
Since f₃(2) = 0, λ = 2 is an eigenvalue of the matrix A. From the sign changes of f₃, the
other two eigenvalues lie in the intervals (−3, −2) and (3, 4), so we can find them using
the Newton-Raphson method:

λₖ₊₁ = λₖ − f₃(λₖ)/f₃′(λₖ) = λₖ − (λₖ³ − 3λₖ² − 6λₖ + 16)/(3λₖ² − 6λₖ − 6),  k = 0, 1, 2, …

Consider the interval (3, 4):
λ₀ = (3 + 4)/2 = 3.5.
After a few iterations we get λ ≈ 3.372.

Consider the interval (−3, −2):
λ₀ = (−3 − 2)/2 = −2.5.
After a few iterations we get λ ≈ −2.372.
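A brief sketch of this Newton-Raphson iteration (our own code; the helper name newton is an
assumption, not from the text):

    def newton(f, df, x0, tol=1e-6, max_iter=50):
        """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / df(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    f3  = lambda lam: lam**3 - 3*lam**2 - 6*lam + 16     # characteristic polynomial f_3(λ)
    df3 = lambda lam: 3*lam**2 - 6*lam - 6               # its derivative

    print(newton(f3, df3, 3.5))     # approximately  3.372
    print(newton(f3, df3, -2.5))    # approximately -2.372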

Example
Use the Householder method to reduce the given matrix A to tridiagonal form:

A = [  4  −1  −2   2
      −1   4  −1  −2
      −2  −1   4  −1
       2  −2  −1   4 ]
Solution
Let W₂ = [0, x₂, x₃, x₄]ᵀ where x₂² + x₃² + x₄² = 1.

S₁ = √(a₁₂² + a₁₃² + a₁₄²) = √((−1)² + (−2)² + 2²) = 3

x₂² = (1/2)(1 + a₁₂ sign(a₁₂)/S₁) = (1/2)(1 + (−1)(−1)/3) = 2/3  ⟹  x₂ = √(2/3)

x₃ = (a₁₃/(2 S₁ x₂)) sign(a₁₂) = 1/√6

x₄ = (a₁₄/(2 S₁ x₂)) sign(a₁₂) = −1/√6

So P₂ = I₄ − 2W₂W₂ᵀ = [ 1    0     0     0
                        0  −1/3  −2/3   2/3
                        0  −2/3   2/3   1/3
                        0   2/3   1/3   2/3 ]

A₂ = P₂A₁P₂, where A₁ = A:

A₂ = [ 4    3     0     0
       3  16/3   2/3   1/3
       0   2/3  16/3  −1/3
       0   1/3  −1/3   4/3 ]

W₃ᵀ = [0, 0, x₃, x₄] where x₃² + x₄² = 1

S₂ = √(a₂₃² + a₂₄²) = √((2/3)² + (1/3)²) = √5/3

x₃² = (1/2)(1 + a₂₃ sign(a₂₃)/S₂) = (1/2)(1 + (2/3)/(√5/3)) = (2 + √5)/(2√5)
    ⟹ x₃ = √((2 + √5)/(2√5))

x₄ = (a₂₄/(2 S₂ x₃)) sign(a₂₃) = 1/(2√5 x₃) = √(2√5)/(2√5 √(2 + √5))

W₃ = ( 0, 0, √((2 + √5)/(2√5)), √(2√5)/(2√5 √(2 + √5)) )ᵀ

P₃ = I₄ − 2W₃W₃ᵀ

The required Householder transformation is

A₃ = P₃A₂P₃ = [ 4    3      0      0
                3  16/3   −√5/3    0
                0  −√5/3  64/15   9/5
                0    0     9/5   12/5 ]

(Check: the trace is preserved under the similarity transformations,
4 + 16/3 + 64/15 + 12/5 = 16 = tr(A).)
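As a quick check (our own NumPy snippet, not part of the text): each Pᵣ step is a similarity
transformation, so the tridiagonal matrix A₃ must have the same eigenvalues as A.

    import numpy as np

    A = np.array([[ 4., -1., -2.,  2.],
                  [-1.,  4., -1., -2.],
                  [-2., -1.,  4., -1.],
                  [ 2., -2., -1.,  4.]])
    s5 = np.sqrt(5.0)
    A3 = np.array([[4.,     3.,      0.,     0.],
                   [3.,   16/3,  -s5/3,      0.],
                   [0.,  -s5/3,  64/15,    9/5],
                   [0.,     0.,    9/5,   12/5]])
    print(np.linalg.eigvalsh(A))     # eigenvalues of the original matrix
    print(np.linalg.eigvalsh(A3))    # eigenvalues of its tridiagonal form: the two lists agree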

