ES 204 L2 Linear Equations

The document discusses solving systems of linear equations using matrix notation and Gaussian elimination. Gaussian elimination transforms the matrix of coefficients into an upper triangular matrix using row operations. The steps are to transform the matrix into upper triangular form using row operations, then apply back substitution to solve for the unknowns.

ACADEMIC INTEGRITY STATEMENT

As a student of the University of the Philippines, I pledge to act ethically and uphold the value of honor and excellence.

I understand that suspected misconduct on given assignments, examinations, and other course requirements will be reported to the appropriate office and, if established, will result in disciplinary action in accordance with University rules, policies, and procedures. I may work with others only to the extent allowed by the Instructor.
COPYRIGHT NOTICE

This material is intended for educational purposes only. It has been reproduced and communicated to you by or on behalf of the University of the Philippines pursuant to PART IV: The Law on Copyright of Republic Act (RA) 8293, the “Intellectual Property Code of the Philippines”.

The University does not authorize you to reproduce or communicate this material. The material may contain works that are subject to copyright protection under RA 8293. Any reproduction and/or communication of the material by you may be subject to copyright infringement, and the copyright owners have the right to take legal action against such infringement.

Do not remove this notice.

This presentation may contain errors that will be corrected in class. It is your responsibility to take note of these corrections. This is supplementary material only and should not be used as a substitute for the class discussions and reading materials. Not everything that you need to know is included in these notes.
Linear Equations
Table of Contents
1 Linear Equations
Matrix
Matrix Equation
2 An Illustration
3 Techniques
Gaussian Elimination
LU Decomposition
Jacobi Iteration
Gauss-Seidel Method
SOR
SSOR
Line Search Method
Steepest Descent
Conjugate Gradient
Others
4 Vector Norms
5 Credits
Linear Equations

► The admission fee at a small fair is 1.50 for children and 4.00 for adults. On a certain day, 2200 people enter the fair and 5050 is collected. How many children and how many adults attended?
Answer: 1500 children and 700 adults

► The sum of the digits of a two-digit number is 7. When the digits are reversed, the number is increased by 27. Find the number.
Answer: 25


Linear Equations

► Find the equation of the parabola that passes through the points (−1, 9), (1, 5), and (2, 12).

y = Ax² + Bx + C

9 = A(−1)² + B(−1) + C
5 = A(1)² + B(1) + C
12 = A(2)² + B(2) + C

Answer: y = 3x² − 2x + 4

► Find the partial fraction decomposition of the following:

(5x + 7)/(x³ + 2x² − x − 2) ⇒ A/(x + 2) + B/(x + 1) + C/(x − 1)

Answer: [A, B, C] = [−1, −1, 2]


Linear Equations

Linear algebraic equation

a1 x1 + a2 x2 + · · · + an xn = b

System

a11 x1 + a12 x2 + · · · + a1j xj + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2j xj + · · · + a2n xn = b2
⋮
ai1 x1 + ai2 x2 + · · · + aij xj + · · · + ain xn = bi
⋮
am1 x1 + am2 x2 + · · · + amj xj + · · · + amn xn = bm


Notation and Definitions

► Matrix with m rows and n columns,

            [ a11  a12  · · ·  a1n ]
            [ a21  a22  · · ·  a2n ]
A = [aij] = [  ⋮    ⋮    ⋱     ⋮  ]
            [ am1  am2  · · ·  amn ]

► Row vector,

r = [rj] = [r1  r2  · · ·  rn]

► Column vector,

           [ c1 ]
           [ c2 ]
c = [ci] = [  ⋮ ]
           [ cm ]

► Recall basic matrix operations.
Matrix Equation

► Matrix equation for algebraic equations,

[ a11  a12  · · ·  a1n ]   [ x1 ]   [ b1 ]
[ a21  a22  · · ·  a2n ]   [ x2 ]   [ b2 ]
[  ⋮    ⋮    ⋱     ⋮  ] · [  ⋮ ] = [  ⋮ ]
[ an1  an2  · · ·  ann ]   [ xn ]   [ bn ]

or in compact form

A · x = b


An Illustration

► Determine the ci of the parabola y = c1 + c2 x + c3 x² that passes through the points with (x, y) coordinates (2, 2), (3, 4), and (4, 7).

c1 + 2c2 + 4c3 = 2
c1 + 3c2 + 9c3 = 4
c1 + 4c2 + 16c3 = 7

In matrix form,

[ 1  2   4 ] [ c1 ]   [ 2 ]
[ 1  3   9 ] [ c2 ] = [ 4 ]
[ 1  4  16 ] [ c3 ]   [ 7 ]

Solving for the ci,

     [  1.0 ]        c1 = 1.0
ci = [ −0.5 ]   or   c2 = −0.5
     [  0.5 ]        c3 = 0.5
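To verify such a small system numerically, a dense direct solver is enough. A minimal sketch using NumPy (the library choice and names are mine, not the slides'):

```python
import numpy as np

# Coefficient matrix and right-hand side of the parabola fit above
A = np.array([[1.0, 2.0, 4.0],
              [1.0, 3.0, 9.0],
              [1.0, 4.0, 16.0]])
b = np.array([2.0, 4.0, 7.0])

c = np.linalg.solve(A, b)  # direct dense solve (LU-based)
print(c)                   # expected: [ 1.  -0.5  0.5]
```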


Gaussian Elimination

► Transform A · x = b into an equivalent system U · x = v, where U is an upper triangular matrix; that is, all elements below the diagonal of U are zero.

► Steps:
1 Transform the matrix of coefficients into an upper triangular matrix using row operations
2 Back substitution


Gaussian Elimination

► Suppose we want to solve (nonzero diagonal)

[  2   4  −2 ] [ x1 ]   [  2 ]
[  4   9  −3 ] [ x2 ] = [  8 ]
[ −2  −3   7 ] [ x3 ]   [ 10 ]

► Augmented matrix

[  2   4  −2 |  2 ]      [ 2  4  −2 |  2 ]
[  4   9  −3 |  8 ]  ⇒   [ 0  1   1 |  4 ]   R2 = R2 − (4/2)R1
[ −2  −3   7 | 10 ]      [ 0  1   5 | 12 ]   R3 = R3 − (−2/2)R1

     [ 2  4  −2 |  2 ]
⇒    [ 0  1   1 |  4 ]
     [ 0  0   4 |  8 ]   R3 = R3 − (1/1)R2


Gaussian Elimination

► Equivalent system

[ 2  4  −2 ] [ x1 ]   [ 2 ]
[ 0  1   1 ] [ x2 ] = [ 4 ]
[ 0  0   4 ] [ x3 ]   [ 8 ]

► Back substitution
1 Row 3: 4x3 = 8 =⇒ x3 = 8/4 = 2
2 Row 2: x2 + x3 = 4 =⇒ x2 = 4 − 2 = 2
3 Row 1: 2x1 + 4x2 − 2x3 = 2 =⇒ x1 = (2 − 4 × 2 + 2 × 2)/2 = −1

► Solution

[ x1 ]   [ −1 ]
[ x2 ] = [  2 ]
[ x3 ]   [  2 ]
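The two steps translate directly into code. A minimal sketch of naive Gaussian elimination with back substitution (no pivoting, so nonzero pivots are assumed throughout; function names are mine):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Reduce A·x = b to U·x = v using row operations (naive, no pivoting)."""
    U = A.astype(float).copy()
    v = b.astype(float).copy()
    n = len(v)
    for k in range(n - 1):            # pivot row k
        for i in range(k + 1, n):     # zero out entries below the pivot
            m = U[i, k] / U[k, k]
            U[i, k:] -= m * U[k, k:]
            v[i] -= m * v[k]
    return U, v

def back_substitute(U, v):
    """Solve U·x = v for upper triangular U, last row first."""
    n = len(v)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 4.0, -2.0], [4.0, 9.0, -3.0], [-2.0, -3.0, 7.0]])
b = np.array([2.0, 8.0, 10.0])
U, v = gauss_eliminate(A, b)
print(back_substitute(U, v))          # expected: [-1.  2.  2.]
```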


Pivoting

► Row pivoting: let row k be the pivot row, with pivot element akk.

[ 0  1  2   3 ] [ x1 ]   [ −0.2 ]
[ 0  1  4  12 ] [ x2 ]   [  0.8 ]
[ 1  1  1   1 ] [ x3 ] = [  1.5 ]
[ 1  2  4   8 ] [ x4 ]   [  1.2 ]

1 Search rows k through n for the element in column k with the largest magnitude, say in row m;
2 If m is different from k, exchange rows k and m, and exchange bk and bm.

[ 1  1  1   1 |  1.5 ]   new row 1 = old row 3
[ 0  1  4  12 |  0.8 ]
[ 0  1  2   3 | −0.2 ]   new row 3 = old row 1
[ 1  2  4   8 |  1.2 ]


Pivoting

► First pass

[ 1  1  1   1 |  1.5 ]
[ 0  1  4  12 |  0.8 ]
[ 0  1  2   3 | −0.2 ]
[ 0  1  3   7 | −0.3 ]   R4 = R4 − (1/1)R1

► Second pass

[ 1  1   1   1 |  1.5 ]
[ 0  1   4  12 |  0.8 ]
[ 0  0  −2  −9 | −1.0 ]   R3 = R3 − (1/1)R2
[ 0  0  −1  −5 | −1.1 ]   R4 = R4 − (1/1)R2


Pivoting

► Third pass

[ 1  1   1     1 |  1.5 ]
[ 0  1   4    12 |  0.8 ]
[ 0  0  −2    −9 | −1.0 ]
[ 0  0   0  −0.5 | −0.6 ]   R4 = R4 − (−1/−2)R3

► Solution (after back substitution)

[ x1 ]   [ −0.8 ]
[ x2 ]   [  6.0 ]
[ x3 ] = [ −4.9 ]
[ x4 ]   [  1.2 ]
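Row pivoting is a small change to the elimination loop sketched earlier: before working on column k, bring the largest-magnitude entry on or below the diagonal into the pivot position. A sketch, with the same caveats as before:

```python
import numpy as np

def gauss_eliminate_pivot(A, b):
    """Gaussian elimination with partial (row) pivoting."""
    U = A.astype(float).copy()
    v = b.astype(float).copy()
    n = len(v)
    for k in range(n - 1):
        m = k + np.argmax(np.abs(U[k:, k]))   # largest |entry| in column k, rows k..n-1
        if m != k:
            U[[k, m]] = U[[m, k]]             # exchange rows k and m
            v[[k, m]] = v[[m, k]]             # ... and the entries of b
        for i in range(k + 1, n):
            f = U[i, k] / U[k, k]
            U[i, k:] -= f * U[k, k:]
            v[i] -= f * v[k]
    return U, v

A = np.array([[0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 12.0],
              [1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 4.0, 8.0]])
b = np.array([-0.2, 0.8, 1.5, 1.2])
U, v = gauss_eliminate_pivot(A, b)
# back substitution (as in the earlier sketch) yields (-0.8, 6.0, -4.9, 1.2)
```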


Nonsolution

► No solution

[ 1  1  1 |  −1 ]      [ 1  1  1 |  −1 ]
[ 2  2  5 |  −8 ]  ⇒   [ 0  0  3 |  −6 ]
[ 4  4  8 | −14 ]      [ 0  0  4 | −10 ]

The last two rows demand x3 = −2 and x3 = −2.5 at once, so no solution exists.

[  2  −5  4 | 8 ]      [ −1  −2  1 |  2 ]
[  2   0  2 | 4 ]  ⇒   [  0   1  0 | −4 ]
[ −1  −2  1 | 2 ]      [  0   0  0 |  8 ]

The row [0 0 0 | 8] is inconsistent, so again no solution exists.

► Infinite solutions

[ 1  1  1 |  −1 ]      [ 1  1  1 | −1 ]
[ 2  2  5 |  −8 ]  →   [ 0  0  3 | −6 ]
[ 4  4  8 | −12 ]      [ 0  0  4 | −8 ]

x1 + x2 = 1,   x3 = −2


Gauss-Jordan

► Transform to D · x = b (D is diagonal)

[ 0  1  2   3 | −0.2 ]
[ 0  1  4  12 |  0.8 ]
[ 1  1  1   1 |  1.5 ]
[ 1  2  4   8 |  1.2 ]

► First row pivot

[ 1  1  1   1 |  1.5 ]
[ 0  1  4  12 |  0.8 ]
[ 0  1  2   3 | −0.2 ]
[ 0  1  3   7 | −0.3 ]   R4 = R4 − (1)R1

► Second row pivot

[ 1  0  −3  −11 |  0.7 ]   R1 = R1 − (1)R2
[ 0  1   4   12 |  0.8 ]
[ 0  0  −2   −9 | −1.0 ]   R3 = R3 − (1)R2
[ 0  0  −1   −5 | −1.1 ]   R4 = R4 − (1)R2
Gauss-Jordan

► Normalize third row: R3 = R3/(−2)

[ 1  0  −3  −11 |  0.7 ]
[ 0  1   4   12 |  0.8 ]
[ 0  0   1  4.5 |  0.5 ]
[ 0  0  −1   −5 | −1.1 ]

► Third row pivot

[ 1  0  0   2.5 |  2.2 ]   R1 = R1 − (−3)R3
[ 0  1  0    −6 | −1.2 ]   R2 = R2 − (4)R3
[ 0  0  1   4.5 |  0.5 ]
[ 0  0  0  −0.5 | −0.6 ]   R4 = R4 − (−1)R3


Gauss-Jordan

► Normalize fourth row: R4 = R4/(−0.5)

[ 1  0  0  2.5 |  2.2 ]
[ 0  1  0   −6 | −1.2 ]
[ 0  0  1  4.5 |  0.5 ]
[ 0  0  0    1 |  1.2 ]

► Fourth row pivot

[ 1  0  0  0 | −0.8 ]   R1 = R1 − (2.5)R4
[ 0  1  0  0 |  6.0 ]   R2 = R2 − (−6)R4
[ 0  0  1  0 | −4.9 ]   R3 = R3 − (4.5)R4
[ 0  0  0  1 |  1.2 ]


Gauss-Jordan

► Final matrix

[ 1  0  0  0 ] [ x1 ]   [ −0.8 ]
[ 0  1  0  0 ] [ x2 ]   [  6.0 ]
[ 0  0  1  0 ] [ x3 ] = [ −4.9 ]
[ 0  0  0  1 ] [ x4 ]   [  1.2 ]

► Solution

     [ −0.8 ]
     [  6.0 ]
xi = [ −4.9 ]
     [  1.2 ]
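A compact Gauss-Jordan sketch: pivot, normalize the pivot row, then clear the pivot column in every other row (naming is mine):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] with row pivoting."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        m = k + np.argmax(np.abs(M[k:, k]))   # row pivot
        M[[k, m]] = M[[m, k]]
        M[k] /= M[k, k]                       # normalize pivot row
        for i in range(n):
            if i != k:                        # eliminate column k in all other rows
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

A = np.array([[0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 12.0],
              [1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 4.0, 8.0]])
b = np.array([-0.2, 0.8, 1.5, 1.2])
print(gauss_jordan(A, b))   # expected: [-0.8  6.  -4.9  1.2]
```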


Dense and Sparse

► Dense matrix: most elements are nonzero
► Sparse matrix: most elements are zero
► Sparsity: the fraction of elements that are zero


LU Decomposition

► A factorization technique that involves a lower triangular matrix (L) and an upper triangular matrix (U).

► LU decomposition
1 Set up A · x = b
2 Determine L and U → LU · x = b
  Row operations only
  No row swapping
  Leading ones not necessary
  LU decomposition is not unique
3 Let U · x = y, then solve for y from L · y = b
4 Solve for x from U · x = y
5 x is also the solution of A · x = b


LU Decomposition

► Solve

x1 + 2x2 + 3x3 = 5
2x1 − 4x2 + 6x3 = 18
3x1 − 9x2 − 3x3 = 6

► Solution

[ 1   2   3 ] [ x1 ]   [  5 ]
[ 2  −4   6 ] [ x2 ] = [ 18 ]
[ 3  −9  −3 ] [ x3 ]   [  6 ]

[ 1   2   3 ]         [ 1    0    0 ]        [ 1  2  3 ]
[ 2  −4   6 ]  →  L = [ 2   −8    0 ]    U = [ 0  1  0 ]
[ 3  −9  −3 ]         [ 3  −15  −12 ]        [ 0  0  1 ]


LU Decomposition

► L · y = b

[ 1    0    0 ] [ y1 ]   [  5 ]
[ 2   −8    0 ] [ y2 ] = [ 18 ]
[ 3  −15  −12 ] [ y3 ]   [  6 ]

     [  5 ]
yi = [ −1 ]
     [  2 ]

► U · x = y

[ 1  2  3 ] [ x1 ]   [  5 ]
[ 0  1  0 ] [ x2 ] = [ −1 ]
[ 0  0  1 ] [ x3 ]   [  2 ]

     [  1 ]
xi = [ −1 ]
     [  2 ]
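Once L and U are available, each solve costs only two triangular sweeps, which is why the factorization is reused across right-hand sides. As a sketch, SciPy's lu_factor/lu_solve pair does both steps (SciPy pivots rows internally, so its L and U differ from the hand factorization above — the decomposition is not unique):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1.0, 2.0, 3.0],
              [2.0, -4.0, 6.0],
              [3.0, -9.0, -3.0]])
b = np.array([5.0, 18.0, 6.0])

lu, piv = lu_factor(A)       # factor once ...
x = lu_solve((lu, piv), b)   # ... then solve cheaply for any b
print(x)                     # expected: [ 1. -1.  2.]
```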
Jacobi Iteration

► The system

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
⋮
an1 x1 + an2 x2 + · · · + ann xn = bn

has a unique solution.

► Coefficient matrix A has no zeros on its diagonal.


Jacobi Iteration

► Solve the first equation for x1, the second for x2, and so on:

x1 = (1/a11)(b1 − a12 x2 − a13 x3 − · · · − a1n xn)
x2 = (1/a22)(b2 − a21 x1 − a23 x3 − · · · − a2n xn)
⋮
xn = (1/ann)(bn − an1 x1 − an2 x2 − · · · − an,n−1 xn−1)

► Approximate initial values of the xi, then solve for new values of the xi,

     [ x1 ]
xi = [  ⋮ ]
     [ xn ]

► Repeat, replacing the old xi with the new xi, until the exit criterion is satisfied.
Jacobi Iteration

► Solve

5x1 − 2x2 + 3x3 = −1
−3x1 + 9x2 + x3 = 2
2x1 − x2 − 7x3 = 3

► Write the system as

x1 = −1/5 + (2/5)x2 − (3/5)x3
x2 = 2/9 + (3/9)x1 − (1/9)x3
x3 = −3/7 + (2/7)x1 − (1/7)x2

► Initial approximation: (x1, x2, x3) = (0, 0, 0).


Jacobi Iteration

► First pass

x1 = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
x2 = 2/9 + (3/9)(0) − (1/9)(0) ≈ 0.222
x3 = −3/7 + (2/7)(0) − (1/7)(0) ≈ −0.429

► Second pass: use (x1, x2, x3) = (−0.200, 0.222, −0.429), and so forth.

n    0      1       2       · · ·   5       6       7
x1   0.000  −0.200  0.146   · · ·   0.185   0.186   0.186
x2   0.000  0.222   0.203   · · ·   0.329   0.331   0.331
x3   0.000  −0.429  −0.517  · · ·   −0.424  −0.423  −0.423

► Answer

     [  0.186 ]
xi = [  0.331 ]
     [ −0.423 ]
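A minimal Jacobi sketch for this example; every component update uses only the previous iterate. The tolerance and iteration cap are my own choices:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: all updates come from the previous iterate."""
    x = x0.astype(float).copy()
    D = np.diag(A)                  # diagonal entries a_ii
    R = A - np.diagflat(D)          # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[5.0, -2.0, 3.0], [-3.0, 9.0, 1.0], [2.0, -1.0, -7.0]])
b = np.array([-1.0, 2.0, 3.0])
print(jacobi(A, b, np.zeros(3)))    # about ( 0.186,  0.331, -0.423)
```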
Gauss-Seidel Method

► A modification of the Jacobi method that usually requires fewer iterations.
► Use updated values of the xi as soon as they are computed.
► Solve again

5x1 − 2x2 + 3x3 = −1
−3x1 + 9x2 + x3 = 2
2x1 − x2 − 7x3 = 3

► Initial values (0, 0, 0).
► First pass

x1 = −1/5 + (2/5)(0) − (3/5)(0) = −0.200

► Update x1 = −0.200, next equation

x2 = 2/9 + (3/9)(−0.200) − (1/9)(0) ≈ 0.156
Gauss-Seidel Method

► Update x1 = −0.200, x2 = 0.156, next equation

x3 = −3/7 + (2/7)(−0.200) − (1/7)(0.156) ≈ −0.508

► First approximation: x1 = −0.200, x2 = 0.156, x3 = −0.508
► Iterate until convergence

n    0      1       2       3       4       5
x1   0.000  −0.200  0.167   0.191   0.186   0.186
x2   0.000  0.156   0.334   0.333   0.331   0.331
x3   0.000  −0.508  −0.429  −0.422  −0.423  −0.423

► Iterations needed — Jacobi: 7, Gauss-Seidel: 5
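The only change from the Jacobi sketch is updating x in place, so later components in the same sweep already see the new values (again, naming and stopping rule are mine):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel: each update uses values already updated this sweep."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[5.0, -2.0, 3.0], [-3.0, 9.0, 1.0], [2.0, -1.0, -7.0]])
b = np.array([-1.0, 2.0, 3.0])
print(gauss_seidel(A, b, np.zeros(3)))   # about ( 0.186,  0.331, -0.423), in fewer sweeps
```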


Divergence

► Iterative methods may not converge.
► Example of divergence

x1 − 5x2 = −4
7x1 − x2 = 6

► Rearranged for iteration

x1 = −4 + 5x2
x2 = −6 + 7x1


Divergence

► Initial approximation (0, 0)
► Jacobi Iteration

n    0  1    2    3     4      5      6
x1   0  −4   −34  −174  −1244  −6124  −42874
x2   0  −6   −34  −244  −1244  −8574  −42874

► Gauss-Seidel

n    0  1    2      3       4         5
x1   0  −4   −174   −6124   −214374   −7503124
x2   0  −34  −1244  −42874  −1500624  −52521874


Diagonally Dominant Matrix

► An n × n matrix A is strictly diagonally dominant if the absolute value of each diagonal element is greater than the sum of the absolute values of the other elements in the same row. That is,

|a11| > |a12| + |a13| + · · · + |a1n|
|a22| > |a21| + |a23| + · · · + |a2n|
⋮
|ann| > |an1| + |an2| + · · · + |an,n−1|

► If matrix A is strictly diagonally dominant, then the system of linear equations given by A · x = b has a unique solution, to which the Jacobi and Gauss-Seidel methods will converge for any initial approximation.


Diagonally Dominant Matrix

► Consider again the previous example,

x1 − 5x2 = −4              [ 1  −5 ]
7x1 − x2 = 6     =⇒    A = [ 7  −1 ]

Not strictly diagonally dominant.

► Rearrange

7x1 − x2 = 6               [ 7  −1 ]
x1 − 5x2 = −4    =⇒    A = [ 1  −5 ]

Strictly diagonally dominant.


Diagonally Dominant Matrix

► Rearrange

x1 = 6/7 + (1/7)x2
x2 = 4/5 + (1/5)x1

► Initial approximation: (x1, x2) = (0, 0)

n    0      1       2       3       4      5
x1   0.000  0.8571  0.9959  0.9999  1.000  1.000
x2   0.000  0.9714  0.9992  1.000   1.000  1.000

► Note: Matrices that are not strictly diagonally dominant may still converge under the Jacobi and Gauss-Seidel methods; convergence then depends on the initial approximation.
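Strict diagonal dominance is easy to test before committing to an iterative method. A small helper (a sketch; the name is mine):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum over j != i of |a_ij| for every row i."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d   # row sums without the diagonal
    return bool(np.all(d > off))

print(is_strictly_diagonally_dominant(np.array([[1.0, -5.0], [7.0, -1.0]])))  # False
print(is_strictly_diagonally_dominant(np.array([[7.0, -1.0], [1.0, -5.0]])))  # True
```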


Iterative Methods

► For A · x = b
► Jacobi Iteration

xi(k+1) = (1/aii) ( bi − Σj≠i aij xj(k) )

or, in matrix form,

x(k+1) = D⁻¹(L + U)x(k) + D⁻¹b

► Gauss-Seidel Method

xi(k+1) = (1/aii) ( bi − Σj<i aij xj(k+1) − Σj>i aij xj(k) )

or, in matrix form,

x(k+1) = (D − L)⁻¹(U x(k) + b)

where the matrices D, −L, and −U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of matrix A, respectively.
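The matrix forms can be coded straight from the splitting A = D − L − U. The sketch below builds the pieces and runs the Gauss-Seidel recursion (explicit inverses are used only to mirror the formulas; triangular solves would be cheaper in practice):

```python
import numpy as np

A = np.array([[5.0, -2.0, 3.0], [-3.0, 9.0, 1.0], [2.0, -1.0, -7.0]])
b = np.array([-1.0, 2.0, 3.0])

D = np.diag(np.diag(A))   # diagonal part of A
L = -np.tril(A, -1)       # -L is the strictly lower part of A
U = -np.triu(A, 1)        # -U is the strictly upper part of A

# Jacobi:       x <- D^-1 (L + U) x + D^-1 b
# Gauss-Seidel: x <- (D - L)^-1 (U x + b)
M = np.linalg.inv(D - L)
x = np.zeros(3)
for _ in range(20):
    x = M @ (U @ x + b)
print(x)                  # about ( 0.186,  0.331, -0.423)
```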
Symmetric Positive Definite

► Symmetric positive definite (SPD) matrices
Let A ∈ R^{n×n} be given. If A satisfies

A = Aᵀ

and

xᵀAx > 0 for all x ≠ 0,

then we say that A is symmetric positive definite, or A ∈ SPD. Another way of putting it,

xᵀAx = |x| · |Ax| · cos θ

► Cholesky Theorem
Let A ∈ R^{n×n} be given, with A ∈ SPD. Then
1. A is nonsingular.
2. There exists a lower triangular matrix G ∈ R^{n×n}, with gii > 0, such that GGᵀ = A.
Norm and Spectral Radius

► Norm convergence criterion
Let A ∈ R^{n×n} be given. Define T = M⁻¹N, where A = M − N (splitting method). Then the iteration

x(k+1) = M⁻¹N x(k) + M⁻¹b

converges for all initial guesses x(0) if and only if there exists a norm ‖·‖ such that ‖T‖ < 1.

► Spectral Radius of a Matrix
Let A ∈ R^{n×n} be given. Then the spectral radius of A, denoted by ρ(A), is the largest (in magnitude) of all the eigenvalues of A,

ρ(A) = max |λ|,

where the maximum is taken over all the eigenvalues of A.
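Checking ρ(T) < 1 for a given iteration matrix is a one-liner with an eigenvalue routine. Here, for the Jacobi operator of the divergent 2×2 example earlier (a sketch):

```python
import numpy as np

def spectral_radius(T):
    """Largest eigenvalue magnitude of T."""
    return np.max(np.abs(np.linalg.eigvals(T)))

# Jacobi iteration matrix T = D^-1 (L + U) for: x1 - 5x2 = -4, 7x1 - x2 = 6
A = np.array([[1.0, -5.0], [7.0, -1.0]])
D = np.diag(np.diag(A))
T = np.linalg.inv(D) @ (D - A)   # L + U = D - A under the splitting A = D - L - U
print(spectral_radius(T))        # about 5.92 > 1, so Jacobi diverges here
```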


Jacobi and Gauss-Seidel Convergence

► Jacobi and Gauss-Seidel convergence theorem
1. If A is diagonally dominant, then both Jacobi and Gauss-Seidel converge, and Gauss-Seidel converges faster.
2. If A ∈ SPD, then both Jacobi and Gauss-Seidel will converge.


Successive Overrelaxation (SOR)

► Accelerates the convergence of the Gauss-Seidel method.
► Extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate, successively for each component,

xi(k+1) = ω x̄i(k+1) + (1 − ω) xi(k)

where ω is the extrapolation factor (or relaxation parameter) and x̄i(k+1) is the computed Gauss-Seidel iterate.

► A necessary condition for convergence of the SOR method is 0 < ω < 2.
► Cases
Case 1: ω = 1 =⇒ Gauss-Seidel
Case 2: ω > 1 =⇒ Overrelaxation
Case 3: ω < 1 =⇒ Underrelaxation
Case 4: ω = 0 =⇒ No iteration
Case 5: ω ≥ 2 =⇒ Divergent
Successive Overrelaxation (SOR)

► For A · x = b
► If A is symmetric positive definite, then 0 < ω < 2 is a sufficient condition for convergence.

x̄i(k+1) = (1/aii) ( bi − Σj<i aij xj(k+1) − Σj>i aij xj(k) )

xi(k+1) = ω x̄i(k+1) + (1 − ω) xi(k)

or, in matrix form,

x(k+1) = (D − ωL)⁻¹(ωU + (1 − ω)D) x(k) + ω(D − ωL)⁻¹ b
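A component-wise SOR sketch: compute the Gauss-Seidel update, then blend it with the previous value using ω (naming and stopping rule are mine):

```python
import numpy as np

def sor(A, b, x0, omega, tol=1e-8, max_iter=200):
    """SOR sweep; omega = 1 reduces to plain Gauss-Seidel."""
    x = x0.astype(float).copy()
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x_old[i + 1:]) / A[i, i]
            x[i] = omega * gs + (1.0 - omega) * x_old[i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

A = -4.0 * np.eye(4) + (np.ones((4, 4)) - np.eye(4))   # the example matrix below
b = np.ones(4)
for omega in (1.0, 1.3):
    x, iters = sor(A, b, np.zeros(4), omega)
    print(omega, iters, x)   # omega = 1.3 should need noticeably fewer sweeps
```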


Successive Overrelaxation (SOR)

► Example

[ −4   1   1   1 ] [ x1 ]   [ 1 ]
[  1  −4   1   1 ] [ x2 ] = [ 1 ]
[  1   1  −4   1 ] [ x3 ]   [ 1 ]
[  1   1   1  −4 ] [ x4 ]   [ 1 ]

► Solution (x1, x2, x3, x4) = (−1, −1, −1, −1)

ω     Iterations                  ω     Iterations
1.0   24  ← Gauss-Seidel          1.5   18
1.1   18                          1.6   24
1.2   13                          1.7   35
1.3   11  ← Optimum               1.8   55
1.4   14                          1.9   100


Successive Overrelaxation (SOR)

► “Optimal” ω
The “best” value of ω for the SOR iteration (“best” in the sense that it leads to the smallest spectral radius for the SOR iteration matrix) is given by

ω = 2 / (1 + √(1 − ρ²))

where ρ is the spectral radius of the associated Jacobi iteration matrix.

► This requires solving an eigenvalue problem first.
► In practice, an SOR iteration is usually started with ω = 1.


Symmetric Successive Overrelaxation (SSOR)

► Symmetric Successive Overrelaxation

x(k+1) = B1 B2 x(k) + ω(2 − ω)(D + ωU)⁻¹ D (D + ωL)⁻¹ b

where

Backward SOR sweep: B1 = (D + ωU)⁻¹(−ωL + (1 − ω)D)
Forward SOR sweep: B2 = (D + ωL)⁻¹(−ωU + (1 − ω)D)

Note: in these formulas A is split as A = D + L + U, with L and U the strictly lower and strictly upper triangular parts.
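Component-wise, SSOR is simply one forward SOR sweep followed by one backward sweep per iteration. A sketch in that spirit (my own naming; it avoids forming the operator matrices above):

```python
import numpy as np

def ssor(A, b, x0, omega, sweeps=50):
    """SSOR: forward SOR sweep, then backward SOR sweep, each iteration."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for order in (range(n), range(n - 1, -1, -1)):   # forward, then backward
            for i in order:
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                gs = (b[i] - s) / A[i, i]                # in-place Gauss-Seidel value
                x[i] = omega * gs + (1.0 - omega) * x[i]
    return x

A = -4.0 * np.eye(4) + (np.ones((4, 4)) - np.eye(4))
b = np.ones(4)
print(ssor(A, b, np.zeros(4), omega=1.3))   # tends to (-1, -1, -1, -1)
```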


Line Search Method

► To solve A · x = b, a family of iterative optimization methods where the iteration is given by

x(k+1) = x(k) + α(k) p(k)

with α(k) the step length and p(k) the search direction.

► Matrix A is symmetric positive definite if

xᵀAx > 0

► Quadratic form

φ(x) = (1/2) xᵀAx − bᵀx

► Choose an initial position x0 and, for each step, walk along a direction (line) so that φ(x(k+1)) ≤ φ(x(k)).


Steepest Descent

► Quadratic form

φ(x) = (1/2) xᵀAx − bᵀx + c

Gradient of the quadratic form,

        [ ∂φ/∂x1 ]
        [ ∂φ/∂x2 ]
φ′(x) = [    ⋮   ]
        [ ∂φ/∂xn ]


Steepest Descent

If A is symmetric,

φ′(x) = A · x − b

Setting the gradient to zero then gives

A · x = b

If A is positive definite and symmetric, then the solution is a minimum of φ(x). If A is symmetric (whether positive definite or not),

φ(p) = φ(x) + (1/2)(p − x)ᵀA(p − x)

where x solves A · x = b and p is an arbitrary point.


Steepest Descent

► Residual

r(k) = b − A · x(k)

Note: if r(k) = 0, then x(k) is a solution.

► Solve for α

α(k) = r(k)ᵀ r(k) / r(k)ᵀ A r(k)

► Update x

x(k+1) = x(k) + α(k) r(k)


Steepest Descent

Example:

A = [ 3  2 ]    b = [  2 ]    c = 0
    [ 2  6 ]        [ −8 ]

Initialize:

x0 = [  1 ]
     [ −1 ]

Adapted from An Introduction to the Conjugate Gradient Method Without the Agonizing Pain.


Steepest Descent

[Three figure-only slides: plots of the quadratic form and the steepest-descent iterates are not reproduced in these notes.]
Steepest Descent

Iteration no. 1:

r1 = b − A · x0

r1 = [  2 ] − [ 3  2 ] [  1 ] = [  1 ]
     [ −8 ]   [ 2  6 ] [ −1 ]   [ −4 ]

α1 = r1ᵀr1 / (r1ᵀA r1) = 17/83 = 0.204819

x1 = x0 + α1 r1 = [  1 ] + 0.204819 [  1 ] = [  1.205 ]
                  [ −1 ]            [ −4 ]   [ −1.819 ]


Steepest Descent

Iteration no. 2:

r2 = b − A · x1

r2 = [  2 ] − [ 3  2 ] [  1.205 ] = [ 2.023 ]
     [ −8 ]   [ 2  6 ] [ −1.819 ]   [ 0.504 ]

α2 = r2ᵀr2 / (r2ᵀA r2) = 4.347/17.880 = 0.243120

x2 = x1 + α2 r2 = [  1.205 ] + 0.243120 [ 2.023 ] = [  1.697 ]
                  [ −1.819 ]            [ 0.504 ]   [ −1.806 ]


Steepest Descent

Iteration   0    1       2       3       4       5       6
x1          1    1.205   1.697   1.921   1.964   1.988   1.989
x2          −1   −1.819  −1.806  −2.02   −1.977  −2.002  −2.001
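The three steps (residual, step length, update) transcribe directly into code; tolerance and iteration cap are mine:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=1000):
    """Steepest descent for symmetric positive definite A."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                       # residual = negative gradient of phi
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))     # exact line search along r
        x = x + alpha * r
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(steepest_descent(A, b, np.array([1.0, -1.0])))   # tends to ( 2, -2)
```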


Conjugate Gradient

► Initialize

r0 = b − A · x0 = p0

► Search α(k) along p(k)

α(k) = r(k)ᵀ p(k) / p(k)ᵀ A p(k)

► Update x and r

x(k+1) = x(k) + α(k) p(k);   r(k+1) = b − A · x(k+1)

► Search direction p(k+1)

β(k) = − r(k+1)ᵀ A p(k) / p(k)ᵀ A p(k);   p(k+1) = r(k+1) + β(k) p(k)
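A direct transcription of these recurrences (a sketch; many production codes use the algebraically equivalent β = r(k+1)ᵀr(k+1) / r(k)ᵀr(k) and update r recursively instead of recomputing it):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Conjugate gradient for symmetric positive definite A."""
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(len(b)):               # exact in at most n steps, in exact arithmetic
        Ap = A @ p
        alpha = (r @ p) / (p @ Ap)        # step length along p
        x = x + alpha * p
        r = b - A @ x                     # fresh residual
        if np.linalg.norm(r) < tol:
            break
        beta = -(r @ Ap) / (p @ Ap)       # makes the next p A-conjugate to this one
        p = r + beta * p
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradient(A, b, np.zeros(2)))   # ( 2, -2) in at most 2 steps
```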


Others

► Cramer's rule
► Minimum Residual
► Biconjugate gradient
► Tridiagonal matrix


Vector Norm

► A method to measure the “magnitude” of a vector for error analysis.
► Extends the “length” of a vector in 3D to n dimensions by defining a norm.
► Given a vector (linear) space V, a norm, denoted by ‖x‖ for x ∈ V, is a real number such that

Size is positive: ‖x‖ > 0, ∀x ≠ 0, (‖0‖ = 0)
Scales as the vector is scaled: ‖αx‖ = |α|‖x‖, α ∈ R
Triangle inequality (notion of distance in R³): ‖x + y‖ ≤ ‖x‖ + ‖y‖


Vector Norm

► Commonly used norms and normed linear spaces

1 The linear space Rⁿ (Euclidean space), where x = (x1, x2, · · · , xi, · · · , xn), together with the norm

‖x‖p = ( Σ_{i=1}^{n} |xi|^p )^{1/p},   p ≥ 1,

is known as the Lp normed linear space.

2 Infinity or maximum norm, given by

‖x‖∞ = max_{1≤i≤n} |xi|

The vector space Rⁿ together with the infinity norm is commonly denoted by L∞.


Vector Norm

► Example:
Consider the vector x = (3, −1, 2, 0, 4), which belongs to the vector space R⁵.

L1: ‖x‖1 = |3| + |−1| + |2| + |0| + |4| = 10
L2: ‖x‖2 = √(|3|² + |−1|² + |2|² + |0|² + |4|²) = √30
L∞: ‖x‖∞ = max(|3|, |−1|, |2|, |0|, |4|) = 4

► How about matrix norms?
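All three values are one-liners with NumPy's norm routine (ord = np.inf selects the maximum norm):

```python
import numpy as np

x = np.array([3.0, -1.0, 2.0, 0.0, 4.0])
print(np.linalg.norm(x, 1))        # L1:   10.0
print(np.linalg.norm(x, 2))        # L2:   5.477... = sqrt(30)
print(np.linalg.norm(x, np.inf))   # Linf: 4.0
```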


Credits

► Title: Linear Equations
► Version: R0
► Encoding: MRV
► Date: September 26, 2023

Copyright © 2020 M. Vasquez, University of the Philippines. All rights reserved.
