Solution of Linear Systems
• A system of linear equations in matrix form, AX = B, can be transformed into an
equivalent system which may be easier to solve.
• An equivalent system has the same solution as the original system.
• Allowable operations during the transformation are:
  – interchanging two equations (rows)
  – multiplying an equation (row) by a nonzero constant
  – adding a multiple of one equation (row) to another equation (row)
Gaussian Elimination for solving [A][X ] = [C]
consists of 2 steps
1. Forward Elimination of unknowns
The goal of Forward Elimination is to transform the coefficient matrix into an
Upper Triangular Matrix
$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}
\;\rightarrow\;
\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
2. Back Substitution
The goal of Back Substitution is to solve each of the equations using the upper
triangular matrix.
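To make the first step concrete, here is a minimal NumPy sketch (my own illustrative code, not part of the lecture) that performs naive forward elimination on the 3×3 example above and reproduces the upper triangular matrix shown.

```python
import numpy as np

A = np.array([[25.0,  5.0, 1.0],
              [64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])

U = A.copy()
n = len(U)
for k in range(n - 1):           # pivot column k
    for i in range(k + 1, n):    # rows below the pivot
        m = U[i, k] / U[k, k]    # elimination multiplier a_ik / a_kk
        U[i, k:] -= m * U[k, k:]

print(U)   # approximately [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
```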
Gaussian Elimination
Example 3.16
Forward Elimination
Linear Equations
A set of n equations and n unknowns
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n &= b_n
\end{aligned}$$
Forward Elimination
Transform to an Upper Triangular Matrix
Step 1: Eliminate x1 in 2nd equation using equation 1 as
the pivot equation (pivot row)
$$\frac{\text{Eqn. 1}}{a_{11}} \times a_{21}$$
which will yield
$$a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \dots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1$$
$a_{11}$: pivot element, row 1: pivot row
Forward Elimination
Zero out the coefficient of x1 in the 2nd equation by subtracting this scaled pivot
equation from the 2nd equation:
$$\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1$$
or
$$a'_{22}x_2 + \dots + a'_{2n}x_n = b'_2$$
where
$$a'_{22} = a_{22} - \frac{a_{21}}{a_{11}}a_{12},\quad \dots,\quad a'_{2n} = a_{2n} - \frac{a_{21}}{a_{11}}a_{1n},\quad b'_2 = b_2 - \frac{a_{21}}{a_{11}}b_1$$
Forward Elimination
Repeat this procedure for the remaining
equations to reduce the set of equations as
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n &= b'_2 \\
a'_{32}x_2 + a'_{33}x_3 + \dots + a'_{3n}x_n &= b'_3 \\
&\ \ \vdots \\
a'_{n2}x_2 + a'_{n3}x_3 + \dots + a'_{nn}x_n &= b'_n
\end{aligned}$$
Forward Elimination
Step 2: Eliminate x2 in the 3rd equation.
Equivalent to eliminating x1 in the 2nd equation
using equation 2 as the pivot equation.
$$\text{Eqn. 3} - \frac{\text{Eqn. 2}}{a'_{22}} \times a'_{32}$$
Forward Elimination
This procedure is repeated for the remaining
equations to reduce the set of equations as
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n &= b'_2 \\
a''_{33}x_3 + \dots + a''_{3n}x_n &= b''_3 \\
&\ \ \vdots \\
a''_{n3}x_3 + \dots + a''_{nn}x_n &= b''_n
\end{aligned}$$
Forward Elimination
Continue this procedure by using the third equation as the pivot
equation and so on.
At the end of (n-1) Forward Elimination steps, the system of
equations will look like:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n &= b'_2 \\
a''_{33}x_3 + \dots + a''_{3n}x_n &= b''_3 \\
&\ \ \vdots \\
a_{nn}^{(n-1)}x_n &= b_n^{(n-1)}
\end{aligned}$$
Forward Elimination
At the end of the Forward Elimination steps
$$\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
       & a'_{22} & a'_{23} & \cdots & a'_{2n} \\
       &        & a''_{33} & \cdots & a''_{3n} \\
       &        &         & \ddots & \vdots \\
       &        &         &        & a_{nn}^{(n-1)}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b'_2 \\ b''_3 \\ \vdots \\ b_n^{(n-1)} \end{bmatrix}$$
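The (n-1) elimination passes just described translate directly into two nested loops. Below is a hedged Python sketch (the routine name and structure are my own, not the lecture's): it reduces the coefficient matrix to the upper triangular form above and applies the same operations to the right-hand side.

```python
import numpy as np

def forward_elimination(A, b):
    """Reduce A x = b to an upper triangular system (naive, no pivoting)."""
    U = A.astype(float).copy()
    c = b.astype(float).copy()
    n = len(c)
    for k in range(n - 1):             # pivot row / column k
        for i in range(k + 1, n):      # eliminate x_k from rows below the pivot
            m = U[i, k] / U[k, k]      # multiplier
            U[i, k:] -= m * U[k, k:]
            c[i]     -= m * c[k]
    return U, c
```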
Back Substitution
The goal of Back Substitution is to solve each of
the equations using the upper triangular matrix.
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$
Example for a system of 3 equations
Back Substitution
Start with the last equation because it has only
one unknown
$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$
Then solve the second-to-last equation (the (n-1)th), using the value of xn just
found, to obtain xn-1. Continue upward through the equations in the same way.
Back Substitution
Representing Back Substitution for all equations
by formula
$$x_i = \frac{b_i^{(i-1)} - \displaystyle\sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_j}{a_{ii}^{(i-1)}}
\qquad \text{for } i = n-1,\, n-2,\, \dots,\, 1$$
and
$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$
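A hedged sketch of back substitution implementing the two formulas above (again my own illustrative code, assuming the forward_elimination routine sketched earlier has produced U and c):

```python
import numpy as np

def back_substitution(U, c):
    """Solve the upper triangular system U x = c."""
    n = len(c)
    x = np.zeros(n)
    x[-1] = c[-1] / U[-1, -1]                       # last equation: one unknown
    for i in range(n - 2, -1, -1):                  # i = n-2, ..., 0
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Usage: U, c = forward_elimination(A, b); x = back_substitution(U, c)
```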
Potential Pitfalls
- Division by zero: may occur in the forward elimination steps if a pivot element is zero.
- Round-off error: the method is prone to accumulated round-off error.
Improvements
- Increase the number of significant digits: decreases round-off error, but does not
  avoid division by zero.
- Gaussian Elimination with pivoting: avoids division by zero and reduces round-off error.
Division by zero
Trivial pivoting: if a pivot element is zero, interchange that row with a row below it
that has a nonzero entry in the pivot column, so that elimination can continue.
Round-off error
Pivoting to reduce error
• Partial pivoting
• Scaled partial pivoting
Partial Pivoting
Gaussian Elimination with partial pivoting applies row switching to
normal Gaussian Elimination.
How?
At the beginning of the kth step of forward elimination, find the maximum of
$$\left|a_{kk}\right|,\ \left|a_{k+1,k}\right|,\ \dots,\ \left|a_{nk}\right|$$
(the largest magnitude among the elements in column k on or below the main diagonal).
If this maximum is $\left|a_{pk}\right|$ in the pth row, $k \le p \le n$,
then switch rows p and k.
Partial Pivoting
What does it Mean?
Gaussian Elimination with Partial Pivoting ensures that each step of Forward
Elimination is performed with the pivot element a_kk having the largest possible
absolute value in its column.
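As a hedged illustration of the row-switching rule (my own sketch, not the lecture's code), the pivot search at step k can be written as:

```python
import numpy as np

def choose_pivot_row(A, k):
    """Return the row index p (k <= p < n) whose entry in column k has the
    largest absolute value on or below the diagonal."""
    return k + int(np.argmax(np.abs(A[k:, k])))

def swap_rows(A, b, k, p):
    """Interchange rows k and p of the coefficient matrix and right-hand side."""
    A[[k, p], :] = A[[p, k], :]
    b[[k, p]] = b[[p, k]]
```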
Partial Pivoting: Example
Consider the system of equations
$$\begin{aligned}
10x_1 - 7x_2 &= 7 \\
-3x_1 + 2.099x_2 + 6x_3 &= 3.901 \\
5x_1 - x_2 + 5x_3 &= 6
\end{aligned}$$
In matrix form:
$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}$$
Solve using Gaussian Elimination with Partial Pivoting using five
significant digits with chopping
Partial Pivoting: Example
Forward Elimination: Step 1
Examining the values of the first column
|10|, |-3|, and |5| or 10, 3, and 5
The largest absolute value is 10, which is already in row 1 (the pivot row), so no
row interchange is needed.
Performing Forward Elimination:
$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}$$
Partial Pivoting: Example
Forward Elimination: Step 2
Examining the values in the second column on or below the diagonal,
|-0.001| and |2.5|, or 0.001 and 2.5:
the largest absolute value is 2.5, so row 2 is switched with
row 3.
Performing the row swap:
$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & -0.001 & 6 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 2.5 \\ 6.001 \end{bmatrix}$$
Partial Pivoting: Example
Forward Elimination: Step 2
Performing the Forward Elimination results in:
$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$
Partial Pivoting: Example
Back Substitution
Solving the equations through back substitution
$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$
$$x_3 = \frac{6.002}{6.002} = 1$$
$$x_2 = \frac{2.5 - 5x_3}{2.5} = \frac{2.5 - 5}{2.5} = -1$$
$$x_1 = \frac{7 + 7x_2 - 0x_3}{10} = \frac{7 - 7}{10} = 0$$
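As a quick, hedged check of this example (my own snippet), NumPy's built-in solver, which internally uses LU factorization with partial pivoting, recovers the same solution:

```python
import numpy as np

A = np.array([[10.0, -7.0,   0.0],
              [-3.0,  2.099, 6.0],
              [ 5.0, -1.0,   5.0]])
b = np.array([7.0, 3.901, 6.0])

print(np.linalg.solve(A, b))   # approximately [ 0., -1.,  1.]
```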
Scaled partial pivoting
In scaled partial pivoting, the pivot row at step k is chosen as the row (on or below
the diagonal) whose entry in column k is largest relative to the largest-magnitude
entry in that row of the original matrix; this guards against rows whose coefficients
differ greatly in scale.
Scaled partial pivoting example
LU Decomposition
(Triangular Factorization)
LU Decomposition
A non-singular matrix [A] has a triangular factorization
if it can be expressed as
[A] = [L][U]
where
[L] = lower triangular matrix
[U] = upper triangular matrix
LU Decomposition
Method: [A] Decompose to [L] and [U]
$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$
[U] is the same as the coefficient matrix at the end of the forward
elimination step.
[L] is obtained using the multipliers that were used in the forward
elimination process
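A hedged sketch (my own, not the lecture's code) of building [L] and [U] exactly this way: run forward elimination and store each multiplier in the corresponding entry of [L].

```python
import numpy as np

def lu_decompose(A):
    """Doolittle-style LU factorization without pivoting: A = L @ U."""
    U = A.astype(float).copy()
    n = len(U)
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # store the elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

# For A = [[25, 5, 1], [64, 8, 1], [144, 12, 1]] this reproduces the L and U
# shown in the worked example below (multipliers 2.56, 5.76, 3.5).
```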
Example
LU Decomposition
Given [A][X] = [C]
Decompose [A] into [L] and [U]  ⇒  [L][U][X] = [C]
Define [Z] = [U][X]
Then solve [L][Z] = [C] for [Z] (forward substitution)
And then solve [U][X] = [Z] for [X] (back substitution)
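A hedged sketch of the two triangular solves (illustrative names of my own, mirroring the back-substitution idea from earlier):

```python
import numpy as np

def forward_substitution(L, c):
    """Solve the lower triangular system L z = c."""
    n = len(c)
    z = np.zeros(n)
    for i in range(n):
        z[i] = (c[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def lu_solve(L, U, c):
    """Solve A x = c given A = L U: first L z = c, then U x = z."""
    z = forward_substitution(L, c)
    n = len(z)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```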
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Solve the following set of linear equations using LU Decomposition:
$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} =
\begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$
Using the procedure for finding the [L] and [U] matrices:
$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}
\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [L][Z] = [C]:
$$\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} =
\begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$
Solve for [Z] by forward substitution:
$$[Z] = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} =
\begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [U][X] = [Z]:
$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} =
\begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$
Solve for [X] by back substitution:
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} =
\begin{bmatrix} 0.29048 \\ 19.690 \\ 1.0857 \end{bmatrix}$$
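A quick, hedged numerical check of this example (my own snippet), using NumPy to carry out the two triangular solves:

```python
import numpy as np

A = np.array([[25.0,  5.0, 1.0],
              [64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])
c = np.array([106.8, 177.2, 279.2])

L = np.array([[1.0,  0.0, 0.0],
              [2.56, 1.0, 0.0],
              [5.76, 3.5, 1.0]])
U = np.array([[25.0,  5.0,  1.0],
              [0.0,  -4.8, -1.56],
              [0.0,   0.0,  0.7]])

print(np.allclose(L @ U, A))   # True: the factorization reproduces A
z = np.linalg.solve(L, c)      # equivalent to forward substitution here
x = np.linalg.solve(U, z)      # equivalent to back substitution here
print(z, x)                    # z ~ [106.8, -96.208, 0.76], x ~ [0.2905, 19.69, 1.086]
```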
Factorization with Pivoting
Factorization with Pivoting
• Theorem. If A is a nonsingular matrix, then there
exists a permutation matrix P so that PA has an LU-
factorization
PA = LU.
• Theorem (PA = LU; Factorization with
Pivoting). Suppose A is nonsingular. The solution X
of the linear system AX = B is found in four steps:
1. Construct the matrices L,U and P .
2. Compute the column vector PB .
3. Solve LY=PB for Y using forward substitution.
4. Solve UX=Y for X using back substitution.
Example (lower triangular factor from the slide's worked PA = LU example):
$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix}$$
Is LU Decomposition better or faster than
Gauss Elimination?
Let’s look at computational time.
n = number of equations
To decompose [A], the time is proportional to
$$\frac{n^3}{3}$$
To solve [L][Z] = [C] and then [U][X] = [Z], the time for each solve is proportional to
$$\frac{n^2}{2}$$
The total computational time for LU Decomposition is proportional to
$$\frac{n^3}{3} + 2\left(\frac{n^2}{2}\right) \quad\text{or}\quad \frac{n^3}{3} + n^2$$
The Gauss Elimination computation time is proportional to
$$\frac{n^3}{3} + \frac{n^2}{2}$$
How is this better?
LU Decomposition
What about a situation where the [C] vector changes?
The LU decomposition of [A] is independent of the [C] vector,
so it only needs to be computed once.
Let m = the number of times the [C] vector changes
The computational times are proportional to
$$\text{Gauss Elimination} = m\left(\frac{n^3}{3} + \frac{n^2}{2}\right)
\qquad
\text{LU Decomposition} = \frac{n^3}{3} + m\,n^2$$
Consider a set of 100 equations with 50 right-hand-side vectors:
$$\text{LU Decomposition} \approx 8.33 \times 10^5
\qquad
\text{Gauss Elimination} \approx 1.69 \times 10^7$$
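A hedged one-liner check of these operation-count estimates (my own snippet):

```python
n, m = 100, 50
lu_cost = n**3 / 3 + m * n**2           # decompose once, then m pairs of triangular solves
ge_cost = m * (n**3 / 3 + n**2 / 2)     # full elimination repeated for each right-hand side
print(f"{lu_cost:.3g}  {ge_cost:.3g}")  # 8.33e+05  1.69e+07
```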
Simultaneous Linear Equations:
Iterative Methods
Jacobi and Gauss-Seidel Method
- Algebraically solve each linear equation for xi
- Assume an initial guess solution
- Solve for each xi and repeat
- Check if the error is within a pre-specified tolerance
Jacobi Gauss-Seidel
Algorithm
A set of n equations and n unknowns:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= c_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n &= c_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n &= c_n
\end{aligned}$$
If the diagonal elements are non-zero, rewrite each equation solving for the
corresponding unknown: solve the first equation for x1, the second equation for x2,
and so on.
Algorithm
Rewriting each equation
$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n}{a_{11}} \qquad\text{(from equation 1)}$$
$$x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n}{a_{22}} \qquad\text{(from equation 2)}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \dots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}} \qquad\text{(from equation } n-1\text{)}$$
$$x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1}}{a_{nn}} \qquad\text{(from equation } n\text{)}$$
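The only difference between the Jacobi and Gauss-Seidel methods is whether the updated values are used immediately. A hedged sketch of one sweep of each (my own code, illustrative only):

```python
import numpy as np

def jacobi_sweep(A, c, x_old):
    """One Jacobi iteration: every x_i uses only values from the previous iterate."""
    n = len(c)
    x_new = np.empty(n)
    for i in range(n):
        s = c[i] - A[i, :i] @ x_old[:i] - A[i, i+1:] @ x_old[i+1:]
        x_new[i] = s / A[i, i]
    return x_new

def gauss_seidel_sweep(A, c, x):
    """One Gauss-Seidel iteration: each x_i immediately uses freshly updated values."""
    x = x.copy()
    n = len(c)
    for i in range(n):
        s = c[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
        x[i] = s / A[i, i]
    return x
```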
Stopping criterion
Absolute Relative Error
$$\left|\epsilon_a\right|_i = \left|\frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}}\right| \times 100$$
The iterations are stopped when the absolute relative error is
less than a prespecified tolerance for all unknowns.
Given the system of equations
$$\begin{aligned}
12x_1 + 3x_2 - 5x_3 &= 1 \\
x_1 + 5x_2 + 3x_3 &= 28 \\
3x_1 + 7x_2 + 13x_3 &= 76
\end{aligned}
\qquad\text{i.e.}\qquad
\begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 1 \\ 28 \\ 76 \end{bmatrix}$$
with an initial guess of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
Rewriting each equation and performing the first Gauss-Seidel iteration:
$$x_1 = \frac{1 - 3x_2 + 5x_3}{12} = \frac{1 - 3(0) + 5(1)}{12} = 0.50000$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5} = \frac{28 - 0.50000 - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3x_1 - 7x_2}{13} = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$
The absolute relative errors after the first iteration, using the initial guess
[1, 0, 1] and the new values [0.50000, 4.9000, 3.0923]:
$$\left|\epsilon_a\right|_1 = \left|\frac{0.50000 - 1.0000}{0.50000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{4.9000 - 0}{4.9000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.0923 - 1.0000}{3.0923}\right| \times 100 = 67.662\%$$
The maximum absolute relative error after the first iteration is 100%.
Repeating more iterations, the following values are obtained:

Iteration   x1         εa,1 (%)    x2        εa,2 (%)    x3        εa,3 (%)
1           0.50000    100.00      4.9000    100.00      3.0923    67.662
2           0.14679    240.62      3.7153    31.887      3.8118    18.876
3           0.74275    80.23       3.1644    17.409      3.9708    4.0042
4           0.94675    21.547      3.0281    4.5012      3.9971    0.65798
5           0.99177    4.5394      3.0034    0.82240     4.0001    0.07499
6           0.99919    0.74260     3.0001    0.11000     4.0001    0.00000
The solution obtained is
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.99919 \\ 3.0001 \\ 4.0001 \end{bmatrix}$$
which is close to the exact solution
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}$$
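A hedged driver (my own code) that reproduces this iteration history, using the Gauss-Seidel update sketched earlier together with the stopping-criterion formula:

```python
import numpy as np

A = np.array([[12.0, 3.0, -5.0],
              [1.0,  5.0,  3.0],
              [3.0,  7.0, 13.0]])
c = np.array([1.0, 28.0, 76.0])

x = np.array([1.0, 0.0, 1.0])            # initial guess
for it in range(1, 7):
    x_old = x.copy()
    for i in range(3):                   # one Gauss-Seidel sweep
        x[i] = (c[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    err = np.abs((x - x_old) / x) * 100  # absolute relative error, in percent
    print(it, x, err.max())              # iteration 1: x ~ [0.5, 4.9, 3.0923], max error 100%
```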
Will the iterations always converge?
Not necessarily. If the same equations are reordered so that the large coefficients no
longer sit on the diagonal, the iterates can move away from the true solution even
though each step is carried out correctly. This illustrates a pitfall of the Jacobi and
Gauss-Seidel methods: not all systems of equations will converge.
Is there a fix?
Diagonally dominant: [A] in [A][X] = [C] is diagonally dominant if
$$\left|a_{ii}\right| > \sum_{\substack{j=1 \\ j \neq i}}^{n} \left|a_{ij}\right|$$
i.e., the magnitude of the coefficient on the diagonal is greater than the sum of the
magnitudes of the other coefficients in that row. For such systems the Gauss-Seidel
iterations are guaranteed to converge.
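A hedged helper (my own snippet) to test this condition before committing to the iterative method:

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| (j != i) for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

# The example system above satisfies the test:
# is_diagonally_dominant([[12, 3, -5], [1, 5, 3], [3, 7, 13]])  -> True
```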