Finite Difference Method For PDE

The document discusses the classification and physical significance of partial differential equations (PDEs). It defines elliptic, hyperbolic, and parabolic PDEs based on the properties of the coefficients in the PDE. Elliptic PDEs do not allow wave-like solutions, hyperbolic PDEs do, and parabolic PDEs allow them in one direction. The classification determines the influence of boundary conditions and existence of characteristic solutions. Convection-diffusion equations are also discussed as an important example.


Finite Difference Method

for PDE
Y V S S Sanyasiraju
Professor, Department of Mathematics
IIT Madras, Chennai 36

Classification of the Partial Differential Equations

• Consider a scalar second order partial differential equation (PDE) in ‘d’ independent variables, given by

  L_T φ ≡ ∂φ/∂t − ∂/∂x_α ( a_{αβ} ∂φ/∂x_β ) + b_α ∂φ/∂x_α + c φ = s   (4.1.1)

where φ, s : Ω × (0, T] → ℝ, Ω ⊂ ℝ^d

• Unless otherwise stated, a repeated index stands for summation

• The stationary form of Eq. (4.1.1) is given by

  L_S φ ≡ −a_{αβ} ∂²φ/∂x_α∂x_β + b_α ∂φ/∂x_α + c φ = s   (4.1.2)
• Equation (4.1.2) is obtained after

  – renaming b_α − ∂a_{αβ}/∂x_β as b_α, and also using ∂²φ/∂x_α∂x_β = ∂²φ/∂x_β∂x_α

  – further, the coefficients of the cross derivatives are made equal by enforcing a_{αβ} = a_{βα} = ½ (a_{αβ} + a_{βα}).

• Arranging the coefficients a_{αβ} (real) in a matrix form, say A, gives a symmetric matrix of size (d × d) with real coefficients.

• Such a matrix has d real eigenvalues, and the corresponding eigenvectors are linearly independent.
Classification
• Differential equation (4.1.2) is called

– Elliptic if the eigenvalues of A are non-zero and have the


same sign
– Hyperbolic if only one eigenvalue has sign different from all
others
– Parabolic if precisely one eigenvalue is zero, while the others have the same sign, and the rank of (A, b) is equal to the dimension of the problem, where the vector b has the elements b_α.

• For the elliptic case, A is definite and can be treated as positive definite. If it is not, the equation can be multiplied by minus one (−1) to make A positive definite.
• The tensor form of the second order partial differential equation, given in (4.1.1) and (4.1.2), is good for classifying a PDE in ‘d’ independent variables.

• However, for ‘d = 2’, it is easier to use the quasi-linear form of equation (4.1.2), in its explicit form, given by

  A(x, y) ∂²φ/∂x² + B(x, y) ∂²φ/∂x∂y + C(x, y) ∂²φ/∂y² + D ∂φ/∂x + E ∂φ/∂y + F φ = 0   (4.1.3)

where A, B and C are functions of the independent variables x


and y alone.

• Then the classification of the PDE (4.1.3) depends on the sign of the discriminant (B² − 4AC). Notice that the coefficients of the first derivative terms do not contribute to the classification.
• Equation (4.1.3) is hyperbolic in the regions where the discriminant is positive and elliptic where it is negative. The equation is called parabolic wherever (B² − 4AC) is zero.
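For d = 2 the decision reduces to the sign of the discriminant; a minimal sketch (the helper name `classify_2d` is an assumption):

```python
def classify_2d(A, B, C):
    """Classify A u_xx + B u_xy + C u_yy + ... = 0 by the sign of B^2 - 4AC."""
    disc = B * B - 4 * A * C
    if disc > 0:
        return "hyperbolic"
    if disc < 0:
        return "elliptic"
    return "parabolic"

print(classify_2d(1, 0, 1))    # Poisson: elliptic
print(classify_2d(1, 0, -1))   # wave (c = 1): hyperbolic
print(classify_2d(0, 0, 1))    # diffusion: parabolic
```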

• Well known examples for elliptic, hyperbolic and parabolic PDE are:

  Poisson’s equation: ∂²φ/∂x² + ∂²φ/∂y² = f(x, y)

  Wave equation: ∂²φ/∂t² = c² ∂²φ/∂x², and

  Unsteady diffusion equation: ∂φ/∂t = K ∂²φ/∂x², respectively.
Classification of Non – Stationary Equations

• For the transient case, a first derivative with respect to time is an


additional term when compared to the corresponding stationary
differential equation.

• Since the first derivatives do not contribute to the classification, the existing elements of A are the same for both the stationary and transient cases.

• However, due to the increase of the number of independent


variables by one (time variable t), the number of rows and
columns of A will increase by one with zero elements.

• This results in an additional zero eigenvalue.


• Therefore, from the classification in terms of eigenvalues, if the stationary equation is elliptic then the corresponding unsteady equation is parabolic.

• The same, in equation form, can be written as

  ∂φ/∂t + L_S φ = s,  with  A_T = ( A_S  0 ; 0  0 )  and  b = ( b_S ; 1 )

where A_T and A_S are the matrices with the coefficients of the second derivative terms of the transient and stationary equations, respectively.
Initial and Boundary Conditions

• Unsteady equations require initial conditions given by

  φ(0, x) = φ₀(x),  x ∈ Ω, t = 0   (4.1.4a)

• And the boundary conditions, on ∂Ω, (0 < t < T), which can be any one of the following:

  1. Dirichlet: φ(t, x) = f(t, x)

  2. Neumann: K ∂φ/∂n (t, x) = f(t, x)   (4.1.4b)

  3. Robin: K ∂φ/∂n (t, x) + a(t, x) φ = f(t, x)
• Indirectly, boundary conditions play a very important role in the classification of the PDE.

• For elliptic equations, the boundary conditions, which have to be


prescribed all along the boundary of the domain, influence the
solution everywhere.

• For hyperbolic equations, a local change in the boundary data influences only a part of the domain, resulting in a domain of influence and a domain of dependence. The CFL condition of the numerical schemes used in CFD is based on this principle.

• Finally, for parabolic equations, a time-like variable can be identified, and any change in the boundary data influences the entire domain, however only at later times.
Physical Significance of the Classification

• To understand the significance of the classification of the PDE,


one should look at the physical nature of the solutions produced
by these equations.

• To make the computations easier, fix all the coefficients, except a_{αβ}, in (4.1.2) to be zero. The simplified equation is then given by

  ∂/∂x_α ( a_{αβ} ∂φ/∂x_β ) = 0   (4.1.5)

• Assume a plane wave solution, given by

  φ = e^{i w(x)},  w(x) = n·x + b

where w(x) describes the wave fronts or characteristic surfaces given by w(x) = constant.
• Substituting the wave solution in (4.1.5) gives

  a_{αβ} n_α n_β = 0  with  n = grad w,  or  nᵀ A n = 0

• Since A is symmetric and real, the eigenvalues λ_α and the corresponding orthogonal eigenvectors ξ^(α), α = 1, 2, …, d, are all real. Fixing n = c_α ξ^(α) (since ξ^(α)ᵀ ξ^(α) = 1) gives

  nᵀ A n = λ_α c_α² = 0

• If (4.1.5) is elliptic then, since all the eigenvalues are positive, there is no possibility of λ_α c_α² (summation over the repeated index) becoming zero. This contradicts the existence of any wave-like solution in this case. If (4.1.5) is hyperbolic, then the (single) negative eigenvalue can be expressed in terms of the other eigenvalues and the coefficients c_α.
• That shows the existence of wave like solution for hyperbolic
type equations.

• Finally, if (4.1.5) is parabolic then exactly one eigenvalue is zero; therefore one can fix the corresponding c_α as non-zero and all the other coefficients to be zero. That is, one particular direction can be identified in which a wave-like solution can exist.

• To conclude, there is no possibility of wave-like solutions for elliptic equations; there is a clear case of existence of wave-like solutions, in any of many directions, for hyperbolic equations.

• And finally, there is exactly one particular direction in which wave-like solutions may form for parabolic equations.
Convection – Diffusion Equation (CDE)

• A general convection–diffusion equation is given by

  ∂φ/∂t − ε ∂²φ/∂x_α∂x_α + u_α ∂φ/∂x_α = s,  x ∈ Ω, 0 < t ≤ T, ε > 0, α = 1, 2, …, d   (4.1.13)

where ε is the constant diffusion parameter.

• For small values of ε, Eq. (4.1.13) has layer solutions (solution


varies very rapidly in a small region(s) particularly towards any
physical boundary of the problem).

• The study of Eq. (4.1.13), when the diffusion parameter is


small, is called the boundary layer theory or singular
perturbation theory.

• The stationary form of the convection–diffusion equation is given by

  −ε ∂²φ/∂x_α∂x_α + u_α ∂φ/∂x_α = s   (4.1.14)

• The convection–diffusion equation, which is also known as Burgers’ equation, is similar to some of the governing equations of fluid dynamics but without any pressure term.

• Further, due to the layer behavior of its solutions, obtaining accurate numerical solutions is a challenge.
Non-Dimensionalization

• It is better to non-dimensionalize the governing equations


representing any physical phenomena before they are solved
numerically.

• If L and V are the characteristic length and velocity scales, respectively (and Φ a characteristic scale of φ), then the non-dimensionalization of the variables in (4.1.14) can be done using

  φ′ = φ/Φ,  u′_α = u_α/V,  t′ = tV/L,  x′_α = x_α/L,  s′ = sL/(ΦV)   (4.1.15)
• Using the non-dimensional variables (4.1.15), equation (4.1.14) can be written as (after dropping the symbol ′)

  −(1/Pe) ∂²φ/∂x_α∂x_α + u_α ∂φ/∂x_α = s   (4.1.16)

where Pe = VL/ε is the Peclet number, which gives the ratio of the convection and diffusion coefficients.

• Large values of Pe represent the dominance of the convection process and small values of Pe represent the dominance of the diffusion process.

• In general, Eq. (4.1.16) is elliptic but behaves like a hyperbolic equation for large values of Pe.

• The corresponding transient equation is parabolic in its behavior.


Discretization

• There are three steps in numerically solving the differential


equations:

1. Discretization of the domain by placing a large number of nodes with the help of grid generation techniques.

2. Approximating (or discretizing) the governing equations at the


nodes identified in Step 1.

3. Solving the algebraic equations obtained in Step 2 using direct


or iterative methods.

• For regular domains placing the nodal points uniformly and
connecting them by straight lines gives the required grid.

• For example, the discretization of a one dimensional domain that


is, an interval, can be realized as follows:

Fig. 4.2.1 Discretization of an interval

• In this discretization, a set of uniformly distributed points x₀, x₁, …, x_n is identified such that x₀ = a, x_n = b and x_i − x_{i−1} = Δx for i = 1, 2, …, n, where Δx is the step length.

• A uniform grid means the distance between any two consecutive


points xi and xi-1 is constant. Otherwise, the discretization is
called non-uniform.

• The coordinate of any mesh point is computed using

  x_i = a + i Δx,  i = 0, 1, 2, …, n   (4.2.1)

• The function value u at any point x_i is represented by

  u(x_i) = u_i,  i = 0, 1, 2, …, n
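Eq. (4.2.1) in code, as a small NumPy sketch (the interval and n are arbitrary illustrative values):

```python
import numpy as np

a, b, n = 1.0, 2.0, 10           # interval [a, b] and number of sub-intervals
dx = (b - a) / n                 # uniform step length
x = a + dx * np.arange(n + 1)    # x_i = a + i*dx, Eq. (4.2.1)
# equivalently: x = np.linspace(a, b, n + 1)
assert x[0] == a and abs(x[-1] - b) < 1e-14
print(x)
```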
• If Ω is a rectangular two-dimensional domain bounded by [a, b] × [c, d], then it can be discretized as:

Fig. 4.2.2 Discretization of a rectangular domain

• The coordinates of any point P(x_i, y_j) are obtained using

  x_i = x₀ + i Δx, i = 0, 1, 2, …, n;  y_j = y₀ + j Δy, j = 0, 1, 2, …, m   (4.2.2)
• Further, the dependent variable u at any point P is represented
using ui,j = u(xi, yj).

• A similar extension can be carried out for higher dimensional


domains.

• The second step is the discretization of the governing equations.


To realize this, there are several methods. In the present lecture,
the Taylor series based method is highlighted.

• Consider a function u which depends on the independent


variable x in the interval [a, b].

• Let the function u be sufficiently smooth (differentiable) with values u_i and u_{i+1} at any two neighboring points i and i + 1, respectively.

• By using the Taylor series expansion, u_{i+1} can be expressed in terms of u_i and its higher derivatives as

  u_{i+1} = u(x_i + Δx) = u_i + Δx u′_i + (Δx²/2!) u″_i + (Δx³/3!) u‴_i + …   (4.2.3)

where the primes stand for derivatives with respect to x.

• From equation (4.2.3), u′_i can be written as

  u′_i = (u_{i+1} − u_i)/Δx − (Δx/2!) u″_i − (Δx²/3!) u‴_i − …   (4.2.4)
      = (u_{i+1} − u_i)/Δx + O(Δx) = (1/Δx) δ₊u_i + O(Δx)
• where δ₊ is the forward difference operator defined by δ₊u_i = u_{i+1} − u_i.

• Equation (4.2.4) is the first approximation to u′ at the node x_i.

• In this approximation, −(Δx/2!) u″_i − (Δx²/3!) u‴_i − … is the error.

• This error is called the truncation error, in which −(Δx/2!) u″_i is the leading term.

• Since the degree of the step size is one in the leading term of the error, (4.2.4) is a first order approximation for u′.
• The order of approximation is an important concept in the
process of discretization which gives an immediate insight about
what kind of accuracy can be expected from the scheme.

• Similarly, one can also write

  u(x_{i−1}) = u_{i−1} = u_i − Δx u′_i + (Δx²/2!) u″_i − (Δx³/3!) u‴_i + …   (4.2.5)

  u′_i = (u_i − u_{i−1})/Δx + (Δx/2!) u″_i − (Δx²/3!) u‴_i + … = (1/Δx) δ₋u_i + O(Δx)   (4.2.6)

where δ₋ is the backward difference operator defined by δ₋u_i = u_i − u_{i−1}. Eq. (4.2.6) is again first order, as the degree of the step size in the leading term of the error is also one.
• Subtracting (4.2.5) from (4.2.3), one can write

  u_{i+1} − u_{i−1} = 2Δx u′_i + (2Δx³/3!) u‴_i + (2Δx⁵/5!) u_i⁽ᵛ⁾ + …   (4.2.7)

  u′_i = (u_{i+1} − u_{i−1})/(2Δx) − (Δx²/6) u‴_i − … = (1/(2Δx)) δ₀u_i + O(Δx²)   (4.2.8)

• Here, δ₀ is the central difference operator defined by δ₀u_i = u_{i+1} − u_{i−1}.

• Equation (4.2.8) is a second order accurate approximation for u′, as the leading term of the error has Δx in second degree.
• Alternatively, adding (4.2.3) and (4.2.5) gives

  u_{i+1} + u_{i−1} = 2u_i + (2Δx²/2!) u″_i + (2Δx⁴/4!) u_i⁽ⁱᵛ⁾ + …   (4.2.9)

  u″_i = (u_{i+1} − 2u_i + u_{i−1})/Δx² − (Δx²/12) u_i⁽ⁱᵛ⁾ + … = (1/Δx²) δ²u_i + O(Δx²)   (4.2.10)

• where δ² is the central difference operator for the second derivative, which gives a second order accurate approximation.
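The orders claimed for (4.2.4), (4.2.8) and (4.2.10) can be checked numerically by halving the step and watching how the error shrinks; a sketch on the smooth test function sin x:

```python
import numpy as np

u, du, d2u = np.sin, np.cos, lambda x: -np.sin(x)   # test function and derivatives
x0 = 0.7

def errors(dx):
    fwd = (u(x0 + dx) - u(x0)) / dx                       # forward, Eq. (4.2.4)
    cen = (u(x0 + dx) - u(x0 - dx)) / (2 * dx)            # central, Eq. (4.2.8)
    sec = (u(x0 + dx) - 2*u(x0) + u(x0 - dx)) / dx**2     # second derivative, Eq. (4.2.10)
    return abs(fwd - du(x0)), abs(cen - du(x0)), abs(sec - d2u(x0))

e1 = np.array(errors(0.1))
e2 = np.array(errors(0.05))
rates = np.log2(e1 / e2)    # observed orders: about 1, 2, 2
print(rates)
```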
Numerical Implementation

• The finite difference approximations (4.2.4), (4.2.6), (4.2.8), and


(4.2.10) can be used to replace the first and second order
derivative terms of a differential equation to convert it into a
difference (possibly linear algebraic) equation at every nodal
point of the interior of the domain.

• Note that the central approximation (4.2.8) is one order more accurate than the forward (4.2.4) and backward (4.2.6) approximations.

• To understand the implementation, consider the following boundary value problem (BVP):

  d²u/dx² + (2/x) du/dx − (2/x²) u − sin(ln x) = 0,  1 < x < 2,  u(1) = 1 and u(2) = 2   (4.2.11)
• The analytical solution of the BVP Eq. (4.2.11) is

  u(x) = 1.10869 x + 0.03831/x² + (x²/34)(3 sin(ln x) − 5 cos(ln x))   (4.2.12)
• The finite difference solution of Eq. (4.2.11) is obtained by replacing its first and second order derivative terms with (4.2.8) and (4.2.10), respectively, to get

  (u_{i+1} − 2u_i + u_{i−1})/Δx² + (2/x_i)(u_{i+1} − u_{i−1})/(2Δx) − (2/x_i²) u_i − sin(ln x_i) + O(Δx²) = 0   (4.2.13)
  for i = 1, 2, …, n − 1

• Note that at nodes ‘0’ and ‘n’ we have boundary conditions, and the given differential equation may not be valid at these points; therefore, at these nodes we have (from the boundary conditions)

  u₀ = 1.0 and u_n = 2.0   (4.2.14)
• Equation (4.2.13) gives n − 1 equations in n + 1 variables; therefore, adding Eq. (4.2.14) closes the system to solve for the unknowns u_i, i = 1, 2, …, n − 1.

• Rearranging the terms in (4.2.13) gives

  (1/Δx² − 1/(x_iΔx)) u_{i−1} − (2/Δx² + 2/x_i²) u_i + (1/Δx² + 1/(x_iΔx)) u_{i+1} = sin(ln x_i),  i = 1, 2, …, n − 1   (4.2.15)

• Equation (4.2.15) is a linear system with a tri-diagonal coefficient matrix. Solving such linear algebraic systems is the third and final step of the numerical schemes.
• After solving Eq. (4.2.15) with step lengths (Δx) 1/40, 1/80 and 1/160, the absolute errors in the numerical solution are computed and compared in Fig. 4.2.3.

Fig. 4.2.3 Comparison of absolute errors with three distinct discretizations
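The three steps above can be sketched in code, assuming the equation and analytical solution as reconstructed in (4.2.11)–(4.2.12); for brevity a dense `np.linalg.solve` stands in for a tridiagonal solver:

```python
import numpy as np

def solve_bvp(n):
    """Central-difference solution of the BVP (4.2.11) on [1, 2] via (4.2.15)."""
    a, b = 1.0, 2.0
    dx = (b - a) / n
    x = a + dx * np.arange(n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = np.sin(np.log(x[1:n]))
    for k, xi in enumerate(x[1:n]):
        lo = 1/dx**2 - 1/(xi*dx)            # coefficient of u_{i-1}
        di = -2/dx**2 - 2/xi**2             # coefficient of u_i
        up = 1/dx**2 + 1/(xi*dx)            # coefficient of u_{i+1}
        A[k, k] = di
        if k > 0:      A[k, k-1] = lo
        else:          rhs[k] -= lo * 1.0   # boundary condition u(1) = 1
        if k < n - 2:  A[k, k+1] = up
        else:          rhs[k] -= up * 2.0   # boundary condition u(2) = 2
    u = np.empty(n + 1)
    u[0], u[n] = 1.0, 2.0
    u[1:n] = np.linalg.solve(A, rhs)
    return x, u

x, u = solve_bvp(40)
exact = 1.10869*x + 0.03831/x**2 + x**2/34*(3*np.sin(np.log(x)) - 5*np.cos(np.log(x)))
print(np.max(np.abs(u - exact)))   # small; shrinks further as n grows
```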


Higher Order Approximations

• Let

  D u_i = ∂u_i/∂x,  Dⁿ u_i = ∂ⁿu_i/∂xⁿ,  E u_i = u_{i+1}   (4.3.1)

be the first and nth order derivative operators and the shift operator, respectively.

• Using the operators (4.3.1) in (4.2.3) gives

  E = e^{Δx D},  D = (1/Δx) log(E) = (1/Δx) log(1 + δ₊)   (4.3.2)
• Expanding the log function gives

  D u_i = (1/Δx) [ δ₊ − δ₊²/2 + δ₊³/3 − δ₊⁴/4 + … ] u_i   (4.3.3)

• Similar replacements of E with the backward difference operator δ₋ or the central difference operator δ give

  D u_i = (1/Δx) [ δ₋ + δ₋²/2 + δ₋³/3 + δ₋⁴/4 + … ] u_i   (4.3.4)

  D u_i = (1/Δx) [ δ − δ³/24 + 3δ⁵/640 − … ] u_i   (4.3.5)

where δu_i = u_{i+1/2} − u_{i−1/2}.
• Similarly, using the same procedure, the formulae for the second derivative can be obtained as

  D²u_i = (1/Δx²) [ δ₊² − δ₊³ + (11/12)δ₊⁴ − (5/6)δ₊⁵ + … ] u_i
        = (1/Δx²) [ δ₋² + δ₋³ + (11/12)δ₋⁴ + (5/6)δ₋⁵ + … ] u_i   (4.3.6)
        = (1/Δx²) [ δ² − δ⁴/12 + δ⁶/90 − … ] u_i

• Approximations (4.3.3), (4.3.4), (4.3.5) or (4.3.6) can be used to generate higher order approximations.

• However, as the order of approximation increases, the number of terms in the approximation also increases, and such formulae may not be convenient to use in the approximation of a PDE at points close to the physical boundaries.
Numerical Illustration

• The central difference approximation in (4.3.6) may be used up to its second term to generate a fourth order approximate formula for the Poisson equation (in two dimensions)

  ∂²u/∂x² + ∂²u/∂y² = f(x, y)

• Potential flow equations, discussed in Module 2, have a similar form.

• Consider the required fourth order approximations of the second derivatives in the x and y directions, respectively, at any typical nodal point, say (i, j), as

  ∂²u/∂x² ≈ (1/Δx²) δ_x² (1 + δ_x²/12)⁻¹ u_{i,j}  and  ∂²u/∂y² ≈ (1/Δy²) δ_y² (1 + δ_y²/12)⁻¹ u_{i,j}
• Then the fourth order approximation of the Poisson equation ∂²u/∂x² + ∂²u/∂y² = f(x, y) at any nodal point (i, j) is given by

  [ (1/Δx²) δ_x² (1 + δ_x²/12)⁻¹ + (1/Δy²) δ_y² (1 + δ_y²/12)⁻¹ ] u_{i,j} = f_{i,j}
• For the sake of simplicity assume Δx = Δy (the following procedure is also valid without this assumption).

• Rewrite the discretized equation as

  [ δ_x² (1 + δ_x²/12)⁻¹ + δ_y² (1 + δ_y²/12)⁻¹ ] u_{i,j} = Δx² f_{i,j}

• Multiplying with (1 + δ_x²/12)(1 + δ_y²/12) and using the commutative nature of the finite difference operators gives

  [ δ_x² (1 + δ_y²/12) + δ_y² (1 + δ_x²/12) ] u_{i,j} = Δx² (1 + δ_x²/12)(1 + δ_y²/12) f_{i,j}

• Expanding the products, and neglecting the product term δ_x²δ_y²/144 on the right hand side (since it produces terms at a higher order level), the final scheme can be written as

  [ δ_x² + δ_y² + (1/6) δ_x²δ_y² ] u_{i,j} = Δx² [ 1 + (1/12)(δ_x² + δ_y²) ] f_{i,j}

or

  [ 12δ_x² + 12δ_y² + 2δ_x²δ_y² ] u_{i,j} = Δx² [ 12 + δ_x² + δ_y² ] f_{i,j}
• Expanding the central difference operators in the above scheme gives

  8(u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1}) + 2(u_{i+1,j+1} + u_{i−1,j+1} + u_{i+1,j−1} + u_{i−1,j−1}) − 40u_{i,j}
    = Δx² (8f_{i,j} + f_{i+1,j} + f_{i−1,j} + f_{i,j+1} + f_{i,j−1})   (4.3.7)

• Since f is a known function, the right hand side of Eq. (4.3.7) is


known at each nodal point (i, j) and its surrounding points.

• Finally, the Taylor series expansion of Eq. (4.3.7) demonstrates


that it is fourth order accurate in both x and y.

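The fourth order claim can be probed numerically by inserting the exact u and f of a known solution into (4.3.7): the local residual should then fall roughly like Δx⁶ (Δx⁴ after dividing by the Δx² factor). A sketch with u = sin(πx) sin(πy):

```python
import numpy as np

u = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
f = lambda x, y: -2 * np.pi**2 * u(x, y)      # so that u_xx + u_yy = f

def residual(h, x0=0.3, y0=0.4):
    """Plug exact u and f into the 9-point scheme (4.3.7) at one point."""
    edges = [(x0 + h, y0), (x0 - h, y0), (x0, y0 + h), (x0, y0 - h)]
    corners = [(x0 + h, y0 + h), (x0 - h, y0 + h), (x0 + h, y0 - h), (x0 - h, y0 - h)]
    lhs = 8 * sum(u(*p) for p in edges) + 2 * sum(u(*p) for p in corners) - 40 * u(x0, y0)
    rhs = h**2 * (8 * f(x0, y0) + sum(f(*p) for p in edges))
    return abs(lhs - rhs)

ratio = residual(0.1) / residual(0.05)
print(ratio)   # close to 2**6 = 64, confirming the order
```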
Higher Order By Computing the Coefficients of a Stencil

• Another approach to generate higher order approximations is to fix a suitably large stencil and then compute the coefficients through Taylor series expansion and comparison of coefficients.

• To comprehend this, consider the differential operator A ∂u/∂x + B ∂²u/∂x² (by appropriately fixing the values of A and B, formulae for a particular derivative can also be obtained) and equate it to a₀u_i + a₁u_{i+1} + a₂u_{i−1} + a₃u_{i+2} + a₄u_{i−2}, that is

  A ∂u/∂x + B ∂²u/∂x² = a₀u_i + a₁u_{i+1} + a₂u_{i−1} + a₃u_{i+2} + a₄u_{i−2}   (4.3.8)

• Expanding the terms on the right hand side using Taylor series and simplifying gives

  A ∂u/∂x + B ∂²u/∂x² = (a₀ + a₁ + a₂ + a₃ + a₄) u_i + (a₁ − a₂ + 2a₃ − 2a₄) Δx ∂u_i/∂x
    + (a₁ + a₂ + 4a₃ + 4a₄) (Δx²/2) ∂²u_i/∂x² + (a₁ − a₂ + 8a₃ − 8a₄) (Δx³/6) ∂³u_i/∂x³
    + (a₁ + a₂ + 16a₃ + 16a₄) (Δx⁴/24) ∂⁴u_i/∂x⁴ + (a₁ − a₂ + 32a₃ − 32a₄) (Δx⁵/120) ∂⁵u_i/∂x⁵ + …   (4.3.9)

• Comparing the coefficients in Eq. (4.3.9) gives

  a₀ + a₁ + a₂ + a₃ + a₄ = 0,  a₁ − a₂ + 2a₃ − 2a₄ = A/Δx,
  a₁ + a₂ + 4a₃ + 4a₄ = 2B/Δx²,  a₁ − a₂ + 8a₃ − 8a₄ = 0,  a₁ + a₂ + 16a₃ + 16a₄ = 0   (4.3.10)
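System (4.3.10) is five linear equations in a₀, …, a₄ and can be solved directly; a sketch (the helper name `stencil` is an assumption):

```python
import numpy as np

def stencil(A, B, dx):
    """Solve (4.3.10) for (a0, a1, a2, a3, a4) so that the five-point
    combination in (4.3.8) approximates A*u' + B*u'' to high order."""
    # columns ordered as a0, a1, a2, a3, a4
    M = np.array([
        [1,  1,  1,  1,  1],    # constant term vanishes
        [0,  1, -1,  2, -2],    # first derivative  -> A/dx
        [0,  1,  1,  4,  4],    # second derivative -> 2B/dx^2
        [0,  1, -1,  8, -8],    # third derivative vanishes
        [0,  1,  1, 16, 16],    # fourth derivative vanishes
    ], dtype=float)
    rhs = np.array([0.0, A/dx, 2*B/dx**2, 0.0, 0.0])
    return np.linalg.solve(M, rhs)

# B = 1, A = 0 recovers the classical fourth order second-derivative stencil
# (-5/2, 4/3, 4/3, -1/12, -1/12) / dx^2 in the (a0, a1, a2, a3, a4) ordering
print(stencil(0.0, 1.0, 1.0))
```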
Finite Difference Schemes for Elliptic Equations

• Consider a two-dimensional Poisson equation given by

  ∂²u/∂x² + ∂²u/∂y² = f(x, y),  (x, y) ∈ Ω ⊂ ℝ²   (4.4.1)

• First, generate the grid by discretizing the domain Ω with step


lengths ∆x and ∆y in x and y directions, respectively (for an
example in a rectangular domain, refer Fig. 4.2.2).

• Then at each grid point of the domain, discretize the given Eq.
(4.4.1) by replacing the partial derivatives of the equation with
second order finite difference approximations (4.2.10) to get

ui 1, j  2ui, j  ui 1, j 2 ui, j 1  2ui, j  ui, j 1
 O ( x )   O ( y 2 )  f i , j
x 2 y 2 (4.4.2)
for i  1, 2,..., n x  1, j  1, 2,..., n y  1

where nx and ny are the number of grid points used to discretize


the domain in x and y directions, respectively and fi,j is the
source function at the point (xi, yj).

• Equation (4.4.2) is a closed algebraic system, if Eq. (4.4.1) is


supplemented with Dirichlet boundary conditions. On the other
hand if the given boundary conditions are mixed type (Neumann
or Robin) like
 u 
a    bu  c (4.4.3)
 n 
where ‘n’ is the normal to the boundary, then to close the system

44
• The derivative terms in Eq. (4.4.3) are also approximated using
the first order accurate forward or backward difference
approximations at every grid point of the boundary, that is, at i =
0 or nx or j = 0 or ny.

• In particular, if the boundary condition Eq. (4.4.3) is given at i = 0 or at j = 0 then the forward difference approximation, and if it is given at i = n_x or at j = n_y then the backward difference approximation, may be used for the discretization.

• Since the forward or backward difference approximations are only first order accurate, the overall accuracy of the problem is considered first order, even though the governing equation is approximated to second order accuracy.

• Finally, the algebraically closed system is solved using any linear solver.

• Alternatively, to raise the overall order of accuracy to two,


– Insert ghost nodes as shown in the Fig. 4.4.1 outside the
domain

– Discretise the governing equation also at the boundary points


over which the derivative boundary conditions are prescribed.

Fig. 4.4.1 : Discretization with ghost points


• Then eliminate the data points at these ghost points using the
equations obtained by approximating the derivative boundary
conditions using second order central difference approximations.

• Solver for (4.4.2): Due to the sparseness of the coefficient matrix generated through (4.4.2) (note that only five diagonals of this matrix, whatever the size of the system, have non-zero entries), using any direct method unnecessarily increases the number of computations.

• Further, iterative methods like Jacobi and Gauss-Seidel are too slow in their convergence; therefore, alternatively, one can apply the ADI (Alternating Direction Implicit) method, which is based on the line-Gauss-Seidel solver.

Implementation of the ADI method

• In the first step, on each vertical line, the tri-diagonal system

  (1/Δx²) u^{k+1}_{i−1,j} − 2(1/Δx² + 1/Δy²) u^{k+1}_{i,j} + (1/Δx²) u^{k+1}_{i+1,j} = f_{i,j} − (1/Δy²) u^k_{i,j+1} − (1/Δy²) u^k_{i,j−1}
  for i = 1, 2, …, n_x − 1

is solved (n_y − 1 times) using the Thomas algorithm (refer Module 5).

• In the second step, on each horizontal line, the system

  (1/Δy²) u^{k+1}_{i,j−1} − 2(1/Δx² + 1/Δy²) u^{k+1}_{i,j} + (1/Δy²) u^{k+1}_{i,j+1} = f_{i,j} − (1/Δx²) u^{k+1}_{i−1,j} − (1/Δx²) u^{k+1}_{i+1,j}
  for j = 1, 2, …, n_y − 1

is solved (n_x − 1 times), once again using the Thomas algorithm.

• The procedure is repeated until convergence, after starting the iterative process with iteration number 0.
Numerical Illustration

• Consider the two-dimensional heat flow, governed by

  ∇²u = ∂²u/∂x² + ∂²u/∂y² = 0   (4.4.5)

where ∇² is the Laplace operator, in a rectangular duct. The duct surfaces are considered to be perfectly insulated. The lengths are (x, y) = (4, 3), with boundary conditions and discretization as shown in Fig. 4.4.2.

• The objective is to find the temperature distribution on the


surface of the sheet.

Figure 4.4.2 Domain and boundary conditions

Solution:

With the discretization of the domain as given in Fig. 4.4.2, the step lengths are 2/4 and 1/3 in the x and y directions, respectively.
• The discretization of the given Laplace equation using second order central difference approximations is given by

  (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/Δx² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/Δy² = 0, where i = 1, 2, 3 and j = 1, 2   (4.4.6)

  4(u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) + 9(u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) = 0, where i = 1, 2, 3 and j = 1, 2   (4.4.7)

• Solving the six equations in (4.4.7), using any direct method, gives

  u_{1,1} = 1.1719 = u_{1,2},  u_{2,1} = 4.9805 = u_{2,2},  u_{3,1} = 19.9954 = u_{3,2}
Note:
1. If the Gauss-Seidel iterative method with the stopping criterion

  ‖u^{(k+1)} − u^{(k)}‖_∞ ≤ 10⁻⁵

where ‘k’ is the iteration number, and the initial approximation to the solution taken as zero, is used, then the scheme converges in 85 iterations.

2. If the Gauss-Seidel method is replaced with SOR (with


relaxation parameter 1.1), then the scheme converges in 55
iterations for the same conditions used in the above.

3. The analytical solution of the problem at these points is 0.7586, 3.8068, 18.3388.
Note:

4. The percentage errors in the obtained numerical solution are 54%, 30% and 9%, which are very high.

5. If the step lengths are changed to 2/40 and 1/30 in x and y


directions, respectively then the solution is improved to 0.7620,
3.8182, 18.3531 and the percentage errors are then reduced to
0.5%, 0.3%, 0.078%.

Laplace Equation in Circular Geometries

• Consider the steady diffusion problem over a thin circular plate, which is governed by the Laplace equation in polar coordinates, given by

  ∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂θ² = 0,  0 ≤ r ≤ r_a, 0 ≤ θ ≤ 2π   (4.4.8)

where r and θ are the radial and angular coordinates.
• If Dirichlet conditions are given at r = 0 and r = r_a, and periodic conditions are used in the angular direction, then the second order central difference approximation of (4.4.8) gives

  (1/dr²)(u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) + (1/(2 r_i dr))(u_{i+1,j} − u_{i−1,j}) + (1/(r_i² dθ²))(u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) = 0   (4.4.9)

where dr and dθ are the step lengths in the radial and angular directions, respectively.

• Varying i and j in Eq. (4.4.9) gives a linear system which can be


solved for the solution of Eq. (4.4.8).

• In the absence of any boundary condition at r = 0, discretization of Eq. (4.4.8) in the conventional way leads to a division by zero in the second and third terms of Eq. (4.4.9). In such cases, u at r = 0 is obtained using

  u₀ = (1/(n_θ + 1)) Σ_{i=0}^{n_θ} u_i |_{r = r_a}   (4.4.10)

• To obtain Eq. (4.4.10), discretize the Cartesian equivalent of Eq. (4.4.8) at the center of the circle with radius r_a and take the mean after repeating the same on the (n_θ + 1 times) rotated stencil.
Difference Schemes for Parabolic Equations

One-dimensional problems:

• Consider the unsteady diffusion problem (parabolic in nature) in a thin wire governed by the differential equation

  ∂u/∂t = k ∂²u/∂x²,  x ∈ (a, b), t > 0   (4.5.1)

• Assume that the initial condition (the distribution of u at t = 0) and the boundary conditions (u at x = a and x = b) are given.
Forward time central space (FTCS) scheme
• The simplest scheme to compute the numerical solution of (4.5.1) is the FTCS (forward time and central space) scheme, which is an explicit method.

• An explicit scheme uses a stencil in which only one unknown is written in terms of the remaining known values at the other stencil points. The FTCS approximation of Eq. (4.5.1) is

  (u_i^{n+1} − u_i^n)/Δt = (k/Δx²)(u_{i+1}^n − 2u_i^n + u_{i−1}^n)

  u_i^{n+1} = u_i^n + r (u_{i+1}^n − 2u_i^n + u_{i−1}^n),  r = kΔt/Δx²   (4.5.2)

  u_i^{n+1} = r u_{i+1}^n + (1 − 2r) u_i^n + r u_{i−1}^n,  i = 1, 2, …, n_x − 1, n = 0, 1, …

where the superscript ‘n’ represents the time level. The discretization and the stencil of the FTCS scheme Eq. (4.5.2) are shown in Fig. 4.5.1.
Fig. 4.5.1 Discretization and the stencil of FTCS scheme

• In Fig. 4.5.1, there is only one stencil point at the (n+1)th time level, which is the unknown, and three points at the nth time level, which are known.

• Therefore, using Eq. (4.5.2), one unknown at a time at the (n+1)th time level can be computed by varying i = 1, 2, …, n_x − 1.
• Taylor series expansion of Eq. (4.5.2) demonstrates that the
FTCS scheme is first order accurate in time and second order in
space.

Backward time central space (BTCS) scheme

• The BTCS approximation of Eq. (4.5.1) gives

  (u_i^n − u_i^{n−1})/Δt = (k/Δx²)(u_{i+1}^n − 2u_i^n + u_{i−1}^n)   (4.5.3)

  −r u_{i+1}^n + (1 + 2r) u_i^n − r u_{i−1}^n = u_i^{n−1},  r = kΔt/Δx², i = 1, 2, …, n_x − 1, n = 1, 2, …

• Equation (4.5.3) is an implicit scheme which has more than one unknown at the nth time level; therefore, one equation alone cannot be solved unless it is clubbed with more equations to close the system.
• This is done by grouping all the discretized equations at a particular time level and solving them in one step using, say, the Thomas algorithm if the resultant system is tri-diagonal.

• The same is repeated by incrementing the value of ‘n’ until the


required time level is reached.

• This type of procedure is called a time marching scheme.

• The computational stencil for the BTCS scheme is as shown in
Fig. 4.5.2.

Fig. 4.5.2 Computational stencil for the BTCS scheme

• At each time step, the scheme Eq. (4.5.3) can also be written as

  −r u_{i+1}^{n+1} + (1 + 2r) u_i^{n+1} − r u_{i−1}^{n+1} = u_i^n,  r = kΔt/Δx², i = 1, 2, …, n_x − 1, n = 0, 1, …   (4.5.4)
Weighted average scheme
• A weighted average of the FTCS and BTCS schemes, Eqs. (4.5.2) and
  (4.5.3), gives

  (1/Δt)(u_i^{n+1} − u_i^n)
      = (k/Δx²) [ (1 − θ)(u_{i+1}^n − 2u_i^n + u_{i−1}^n)
                + θ(u_{i+1}^{n+1} − 2u_i^{n+1} + u_{i−1}^{n+1}) ]

  −θr u_{i−1}^{n+1} + (1 + 2θr) u_i^{n+1} − θr u_{i+1}^{n+1}
      = (1 − θ)r u_{i−1}^n + (1 − 2(1 − θ)r) u_i^n + (1 − θ)r u_{i+1}^n   (4.5.4)

  r = k Δt/Δx², 0 ≤ θ ≤ 1, i = 1, 2, ..., nx−1, n = 0, 1, ...

• For θ equal to 0 and 1, Eq. (4.5.4) gives the FTCS and BTCS
  schemes, respectively. The Taylor series expansion of (4.5.4)
  gives

  ∂u/∂t − k ∂²u/∂x² = (1/2 − θ) Δt ∂²u_i^n/∂t² − (Δt²/3!) ∂³u_i^n/∂t³
      + (k Δx²/12) ∂⁴u_i^n/∂x⁴ + (θ k Δt²/2) ∂⁴u_i^n/∂t²∂x² + ···     (4.5.5)

• Therefore, the weighted average scheme Eq. (4.5.4) is second order
  accurate in space and first order in time if θ ≠ 1/2. The scheme is
  also second order accurate in time if θ = 1/2. The weighted average
  scheme with θ = 1/2 is known as the Crank-Nicolson scheme.
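Since the left- and right-hand coefficients of Eq. (4.5.4) depend only on θ and r, a small helper (a sketch; the name is illustrative) can generate any member of the family:

```python
def theta_coeffs(theta, r):
    """Coefficients of the weighted average scheme Eq. (4.5.4).

    Returns (lhs, rhs) triples (sub, diag, super) for
    -θr u_{i-1}^{n+1} + (1+2θr) u_i^{n+1} - θr u_{i+1}^{n+1}
        = (1-θ)r u_{i-1}^n + (1-2(1-θ)r) u_i^n + (1-θ)r u_{i+1}^n."""
    lhs = (-theta * r, 1.0 + 2.0 * theta * r, -theta * r)
    rhs = ((1.0 - theta) * r, 1.0 - 2.0 * (1.0 - theta) * r, (1.0 - theta) * r)
    return lhs, rhs
```

θ = 0 reduces the left side to the identity (FTCS), θ = 1 removes the explicit neighbours (BTCS), and θ = 1/2 gives Crank-Nicolson.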
Numerical Illustration
• Consider ∂u/∂t = ∂²u/∂x², x ∈ (0,1), t > 0, with
  initial condition u(x,0) = sin πx and boundary conditions zero at
  x = 0 and 1.

• Use step sizes 0.2 and 0.012 in the x and t directions, respectively.

• Compare, after ten time steps, the numerical solutions obtained
  with the FTCS, BTCS and Crank-Nicolson schemes with the
  analytical solution e^{−π²t} sin πx.

• For the step size 0.2, we have

  r = k Δt/Δx² = (1 × 0.012)/(0.2 × 0.2) = 0.3

• With r = 0.3, the FTCS, BTCS and Crank-Nicolson schemes are
  given by
  u_i^{n+1} = 0.3 u_{i−1}^n + 0.4 u_i^n + 0.3 u_{i+1}^n
  −0.3 u_{i−1}^{n+1} + 1.6 u_i^{n+1} − 0.3 u_{i+1}^{n+1} = u_i^n
  −0.15 u_{i−1}^{n+1} + 1.3 u_i^{n+1} − 0.15 u_{i+1}^{n+1}
      = 0.15 u_{i−1}^n + 0.7 u_i^n + 0.15 u_{i+1}^n               (4.5.6)

  for i = 1, 2, 3, 4, n = 0, 1, 2, ..., 9

• Equation (4.5.6) depends only on the value of r and not on the
  individual step sizes.

• The solutions and the percentage errors generated by the three
  schemes (FTCS, BTCS and Crank-Nicolson), after marching 10 steps
  in the time direction using Eq. (4.5.6), are compared in
  Table 4.5.1.
                     FTCS              BTCS            Crank-Nicolson
  x   Analytical  Solution  Error   Solution  Error    Solution  Error
 0.2    0.1798     0.1740   3.22%    0.1986  10.46%     0.1866   3.79%
 0.4    0.2910     0.2816   3.22%    0.3214  10.46%     0.3020   3.79%

Table 4.5.1 Comparison of the analytical and numerical solutions and their errors

• The initial and boundary conditions in the above computations
  are taken from the exact solution.
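Table 4.5.1 can be reproduced with a short script. The sketch below (assuming NumPy, and using a dense solve in place of the Thomas algorithm purely for brevity) marches each scheme of Eq. (4.5.6) ten steps from u(x,0) = sin πx; with zero boundary values, the boundary terms drop out and interior-only matrices suffice:

```python
import numpy as np

nx, r, dt = 5, 0.3, 0.012            # dx = 0.2, so r = dt/dx^2 = 0.3
x = np.linspace(0.0, 1.0, nx + 1)
m = nx - 1                           # number of interior unknowns

def tridiag(sub, diag, sup):
    """Dense (m x m) tri-diagonal matrix with constant diagonals."""
    return (np.diag([sub] * (m - 1), -1) + np.diag([diag] * m)
            + np.diag([sup] * (m - 1), 1))

A_ftcs = tridiag(r, 1 - 2 * r, r)                 # explicit update matrix
A_btcs = tridiag(-r, 1 + 2 * r, -r)               # implicit matrix
A_cn_l = tridiag(-r / 2, 1 + r, -r / 2)           # Crank-Nicolson, left side
A_cn_r = tridiag(r / 2, 1 - r, r / 2)             # Crank-Nicolson, right side

u0 = np.sin(np.pi * x[1:-1])                      # interior initial values
u_ftcs = u_btcs = u_cn = u0
for n in range(10):                               # ten time steps
    u_ftcs = A_ftcs @ u_ftcs
    u_btcs = np.linalg.solve(A_btcs, u_btcs)
    u_cn = np.linalg.solve(A_cn_l, A_cn_r @ u_cn)

exact = np.exp(-np.pi**2 * 10 * dt) * np.sin(np.pi * x[1:-1])
```

At x = 0.2 this gives approximately 0.1740 (FTCS), 0.1986 (BTCS) and 0.1866 (CN) against the analytical 0.1798, matching Table 4.5.1.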
Two-dimensional problems
• Consider the unsteady diffusion over a flat plate governed by the
  differential equation

  ∂u/∂t = k (∂²u/∂x² + ∂²u/∂y²), (x, y) ∈ (a, b) × (c, d), t > 0   (4.5.7)

Assume that the initial and boundary conditions on u are known.

Explicit scheme
• Approximating the time derivative with a forward difference and the
  space derivatives with central differences gives the scheme

  (1/Δt)(u_{i,j}^{n+1} − u_{i,j}^n)
      = k [ (1/Δx²)(u_{i+1,j}^n − 2u_{i,j}^n + u_{i−1,j}^n)
          + (1/Δy²)(u_{i,j+1}^n − 2u_{i,j}^n + u_{i,j−1}^n) ]

  u_{i,j}^{n+1} = r2 u_{i,j−1}^n + r1 u_{i−1,j}^n + (1 − 2r1 − 2r2) u_{i,j}^n
                + r1 u_{i+1,j}^n + r2 u_{i,j+1}^n                    (4.5.8)

  r1 = k Δt/Δx², r2 = k Δt/Δy²
  i = 1, 2, ..., nx−1, j = 1, 2, ..., ny−1, n = 0, 1, ...
• Here, ∆y is the step length in y direction. Taylor series
expansion of Eq. (4.5.8) shows that the explicit scheme is first
order accurate in time and second order in space (both in x and y
directions).
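One explicit sweep of Eq. (4.5.8) can be sketched as follows (assuming NumPy, a 2-D array u indexed as u[i, j], and Dirichlet boundary values stored in the border entries):

```python
import numpy as np

def explicit_step_2d(u, r1, r2):
    """One sweep of the 2-D explicit scheme Eq. (4.5.8);
    boundary entries of u are assumed known and left untouched."""
    unew = u.copy()
    unew[1:-1, 1:-1] = (r2 * u[1:-1, :-2] + r1 * u[:-2, 1:-1]
                        + (1.0 - 2.0 * r1 - 2.0 * r2) * u[1:-1, 1:-1]
                        + r1 * u[2:, 1:-1] + r2 * u[1:-1, 2:])
    return unew
```

As in one dimension, the five coefficients sum to one, so a constant field with matching boundary values is preserved; stability again restricts the step, the 2-D analogue of the FTCS condition being r1 + r2 ≤ 1/2.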

• A weighted average or Crank-Nicolson type of approximation to Eq.
  (4.5.7) gives a penta-diagonal system like Eq. (4.4.2) at the n+1th
  time level. Solving such a system is computationally very expensive;
  therefore, alternatively, the ADI method can be developed as follows:

  (u_{i,j}^{n+1/2} − u_{i,j}^n)/(Δt/2)
      = k [ (u_{i+1,j}^{n+1/2} − 2u_{i,j}^{n+1/2} + u_{i−1,j}^{n+1/2})/Δx²
          + (u_{i,j+1}^n − 2u_{i,j}^n + u_{i,j−1}^n)/Δy² ]

  (u_{i,j}^{n+1} − u_{i,j}^{n+1/2})/(Δt/2)
      = k [ (u_{i+1,j}^{n+1/2} − 2u_{i,j}^{n+1/2} + u_{i−1,j}^{n+1/2})/Δx²
          + (u_{i,j+1}^{n+1} − 2u_{i,j}^{n+1} + u_{i,j−1}^{n+1})/Δy² ]   (4.5.9)
• First, on the j = constant lines, using the first (tri-diagonal)
  part of Eq. (4.5.9), the solution at the n+1/2 time level is
  obtained.

• In the second step, using the solution at the n+1/2 time level, on
  the i = constant lines the second (tri-diagonal) part of Eq. (4.5.9)
  marches the solution to the n+1 time level.

• Therefore, for each time level, Eq. (4.5.9) gives (nx−1) + (ny−1)
  tri-diagonal systems in the x and y directions, which are much
  easier to solve using the Thomas algorithm than the penta-diagonal
  system (of size L × L, where L = (nx−1) × (ny−1)) that appears in
  the weighted average or Crank-Nicolson schemes for two-dimensional
  problems.
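One full ADI time step of Eq. (4.5.9) can be sketched as follows (assuming NumPy and zero Dirichlet boundaries, which lets the boundary terms drop out of the tri-diagonal systems; a dense solve stands in for the Thomas algorithm purely for brevity):

```python
import numpy as np

def adi_step(u, k, dt, dx, dy):
    """Advance Eq. (4.5.9) one step: x-implicit half step, then y-implicit."""
    rx, ry = k * dt / (2 * dx**2), k * dt / (2 * dy**2)
    nx, ny = u.shape[0] - 1, u.shape[1] - 1
    Ax = (np.diag([1 + 2 * rx] * (nx - 1))
          + np.diag([-rx] * (nx - 2), -1) + np.diag([-rx] * (nx - 2), 1))
    Ay = (np.diag([1 + 2 * ry] * (ny - 1))
          + np.diag([-ry] * (ny - 2), -1) + np.diag([-ry] * (ny - 2), 1))
    uh = u.copy()                   # solution at the n + 1/2 level
    for j in range(1, ny):          # sweep the j = constant lines
        rhs = u[1:-1, j] + ry * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        uh[1:-1, j] = np.linalg.solve(Ax, rhs)
    unew = u.copy()
    for i in range(1, nx):          # sweep the i = constant lines
        rhs = uh[i, 1:-1] + rx * (uh[i - 1, 1:-1] - 2 * uh[i, 1:-1] + uh[i + 1, 1:-1])
        unew[i, 1:-1] = np.linalg.solve(Ay, rhs)
    return unew
```

With the eigenfunction-like initial data sin(πx) sin(πy) and zero boundaries, each step should damp the field, which gives a quick behavioural check.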
Convergence

• Here, convergence means the convergence of the numerical
  solution to the analytical solution.

• For parabolic, and indeed for all time dependent problems, the
  convergence of the numerical solution to the corresponding
  analytical solution is established by testing for consistency and
  stability since, according to the Lax equivalence theorem,

• Consistency and stability are necessary and sufficient conditions
  for the convergence of the finite difference solution of any
  well-posed linear time dependent problem.
Consistency of a Numerical Scheme

• If, in the limiting case of the step lengths tending to zero, a
  finite difference scheme converges to the corresponding differential
  equation, then such a scheme is called consistent.

• Mathematically, it is tested by looking at the truncation error of
  the scheme as the step lengths tend to zero.

• If the truncation error tends to zero as the step lengths tend to
  zero, then the numerical scheme is said to be consistent.
Numerical Illustration

• Consider the Taylor series expansion Eq. (4.5.5) of the weighted
  average scheme Eq. (4.5.4)

  −θr u_{i−1}^{n+1} + (1 + 2θr) u_i^{n+1} − θr u_{i+1}^{n+1}
      = (1 − θ)r u_{i−1}^n + (1 − 2(1 − θ)r) u_i^n + (1 − θ)r u_{i+1}^n

  given by

  ∂u/∂t − k ∂²u/∂x² = (1/2 − θ) Δt ∂²u_i^n/∂t² − (Δt²/3!) ∂³u_i^n/∂t³
      + (k Δx²/12) ∂⁴u_i^n/∂x⁴ + (θ k Δt²/2) ∂⁴u_i^n/∂t²∂x² + ···

• Taking the step lengths Δx and Δt tending to zero, the series
  expansion converges to the governing equation ∂u/∂t = k ∂²u/∂x²;
  therefore, the weighted average scheme is consistent.
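Consistency and the order of accuracy can also be checked numerically: with r held fixed, Δt ∝ Δx², so the FTCS error should fall by a factor of about four when Δx is halved. A sketch of such a test (assuming NumPy and the earlier model problem ∂u/∂t = ∂²u/∂x² with u(x,0) = sin πx):

```python
import numpy as np

def ftcs_error(nx, r, t_end):
    """March FTCS to (approximately) t_end with dt = r*dx^2 and return
    the max error against the exact solution exp(-pi^2 t) sin(pi x)."""
    dx = 1.0 / nx
    dt = r * dx * dx
    steps = round(t_end / dt)
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.sin(np.pi * x)
    for _ in range(steps):
        # right-hand side is evaluated fully before assignment
        u[1:-1] = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]
    exact = np.exp(-np.pi**2 * steps * dt) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))
```

The observed error ratio between nx = 10 and nx = 20 is close to four, consistent with the O(Δt, Δx²) truncation error when Δt = rΔx².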
Stability of a Numerical Scheme

• Stability of a numerical scheme deals with the growth of the
  round-off errors during the time marching process.

• Let us illustrate stability through the following observation:

• Repeat the computations of Eq. (4.5.6) once again with time
  steps 0.018 and 0.022 (that is, for values of r of 0.45 and
  0.55, respectively) for the FTCS and Crank-Nicolson (CN) schemes.

• The corresponding solutions with r = 0.45 and 0.55 are presented in
  Tables 4.5.2 and 4.5.3, respectively.
                     FTCS            Crank-Nicolson
  x   Analytical  Solution  Error   Solution  Error
 0.2    0.3040     0.3045   0.17%    0.3070   1.00%
 0.4    0.4954     0.4947   0.14%    0.5016   1.26%

Table 4.5.2 Comparison of the solution and errors with r = 0.45.

                     FTCS            Crank-Nicolson
  x   Analytical  Solution  Error   Solution  Error
 0.2    0.2762     0.2301  16.35%    0.2794   1.17%
 0.4    0.4484     0.3700  17.49%    0.4542   1.30%

Table 4.5.3 Comparison of the solution and errors with r = 0.55.
• It is clear from Tables 4.5.2 and 4.5.3 that, with r = 0.45, the FTCS
  scheme produces solutions with errors less than 0.2% and the CN scheme
  with errors less than 1.3%.

• However, if the value of r is increased to 0.55, the CN scheme
  continues to give solutions with similar errors, while the errors in
  the FTCS scheme increase many fold.

• The growth of the errors in FTCS becomes worse, and the solution is
  completely dominated by these errors, if the computations are
  continued to higher time levels.

• This is due to the unstable nature of the FTCS scheme when the value
  of r is greater than 0.5. Mathematically, this can be understood using
  the following analysis:
• Let £(u) = 0 be a linear difference scheme and let u_i^n be its
  numerical solution, U_i^n the exact solution and E_i^n the error at
  the nth time level at the nodal point x_i; then we have

  u_i^n = U_i^n + E_i^n                                          (4.5.10)
  and
  £(u_i^n) = £(U_i^n + E_i^n) = £(U_i^n) + £(E_i^n) = 0
      ⇒ £(E_i^n) = 0                                             (4.5.11)

• That is, the error also satisfies the same difference equation
  which the numerical solution satisfies.

• Therefore, one can study the behavior of the error by studying
  the numerical solution itself.
• Further, if the numerical solution is assumed to be periodic
  (achieved by reflecting the solution in the region (0, L) into
  (−L, L)) and expressible in terms of a finite Fourier series (since
  the domain is of finite length and is discretized with a finite step
  length), then it can be written as

  u_i^n = Σ_{j=−N}^{N} K_j^n e^{I k_j x_i} = Σ_{j=−N}^{N} K_j^n e^{I k_j iΔx}
        = Σ_{j=−N}^{N} K_j^n e^{I i φ}                            (4.5.12)

  where
  N is the number of points in the discretization (N = L/Δx),
  I = √(−1),
  K_j^n is the amplitude of the jth harmonic,
  k_j is the wave number,
  φ is the phase angle, given by k_j Δx.
Note: k_j varies from −N to N instead of −∞ to ∞ because the
maximum and minimum resolvable wavelengths (λ) are only 2L
and 2Δx, respectively, and the maximum wavelength 2L is
discretized with 2N+1 points, that is, from −N to N.

• In the actual computation, due to the linear nature of the
  difference scheme, it is enough to use one Fourier mode instead
  of (4.5.12) and to look at the growth or damping of the
  amplification factor G, defined as the ratio of the amplitudes at
  the n+1 and nth time levels, that is,

  G = K^{n+1} / K^n                                              (4.5.13)

  Any (linear) scheme is said to be stable if
  |G| ≤ 1                                                        (4.5.14)
  Otherwise it is said to be unstable.
• Numerical Example: Discuss the stability criteria of the
  weighted average scheme

  −θr u_{i−1}^{n+1} + (1 + 2θr) u_i^{n+1} − θr u_{i+1}^{n+1}
      = (1 − θ)r u_{i−1}^n + (1 − 2(1 − θ)r) u_i^n + (1 − θ)r u_{i+1}^n   (4.5.15)

  Substituting u_i^n = K^n e^{Iiφ} in Eq. (4.5.15) and simplifying for
  the amplification factor G gives

  G = K^{n+1}/K^n
    = [(1 − θ)r e^{−Iφ} + (1 − 2(1 − θ)r) + (1 − θ)r e^{Iφ}]
      / [−θr e^{−Iφ} + (1 + 2θr) − θr e^{Iφ}]
    = [1 − 4(1 − θ)r sin²(φ/2)] / [1 + 4θr sin²(φ/2)]             (4.5.16)

• For θ = 0, that is, for the FTCS scheme, |G| ≤ 1 implies
  |1 − 4r sin²(φ/2)| ≤ 1, which is true whenever r ≤ 1/2.
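Eq. (4.5.16) can also be explored numerically. The sketch below (assuming NumPy; the function name is illustrative) scans |G| over the phase angle and confirms that θ = 0 violates |G| ≤ 1 once r > 1/2, while θ ≥ 1/2 does not, even for large r:

```python
import numpy as np

def max_abs_G(theta, r, nphi=1001):
    """Maximum over phase angles of |G| from Eq. (4.5.16)."""
    phi = np.linspace(0.0, np.pi, nphi)
    s = np.sin(phi / 2.0) ** 2
    G = (1.0 - 4.0 * (1.0 - theta) * r * s) / (1.0 + 4.0 * theta * r * s)
    return np.max(np.abs(G))
```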
• Therefore, the FTCS scheme is only conditionally stable. However,
  for θ greater than or equal to 1/2, Eq. (4.5.16) gives |G| ≤ 1 for
  every r, so the scheme is unconditionally stable.

• Once again looking at Tables 4.5.2 and 4.5.3, it is clear that
  for r = 0.45 the FTCS scheme is able to produce accurate solutions,
  but the round-off errors dominate when the value of r is raised to
  0.55 because the FTCS scheme is not stable at r = 0.55.

• On the other hand, due to the unconditionally stable nature of the
  CN scheme for all values of r, the round-off errors are under
  control at both r = 0.45 and 0.55.
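The effect can be demonstrated directly: seed the FTCS iteration with a tiny sawtooth perturbation (the φ = π mode, whose amplification factor is 1 − 4r) and march. With r = 0.45 the perturbation decays; with r = 0.55 it grows by a factor |1 − 4r| = 1.2 per step and eventually swamps the solution. A sketch (assuming NumPy; the grid size and perturbation amplitude are illustrative):

```python
import numpy as np

def ftcs_run(r, steps, nx=10, eps=1e-8):
    """March FTCS from sin(pi x) plus an eps-sized sawtooth perturbation,
    and return the maximum |u| at the end."""
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.sin(np.pi * x)
    u[1:-1] += eps * (-1.0) ** np.arange(1, nx)   # phi = pi "round-off" mode
    for _ in range(steps):
        u[1:-1] = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]
    return np.max(np.abs(u))
```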
Difference schemes for hyperbolic equations

• Consider the one dimensional wave equation

  ∂²u/∂t² = c² ∂²u/∂x², x ∈ (0, L), t > 0                        (4.6.1)

  where c is a positive constant.

• Equation (4.6.1) requires two initial conditions at t = 0

  u(x,0) = f(x), u_t(x,0) = g(x)                                 (4.6.2)

  and two boundary conditions, at x = 0 and L, given by

  u(0,t) = u(L,t) = 0                                            (4.6.3)
Explicit Scheme
• Discretizing the second derivatives in Eq. (4.6.1) with central
  differences gives

  (u_i^{n+1} − 2u_i^n + u_i^{n−1})/Δt² + O(Δt²)
      = c² (u_{i+1}^n − 2u_i^n + u_{i−1}^n)/Δx² + O(Δx²)          (4.6.4)

  i = 1, 2, 3, ..., nx−1, n = 1, 2, 3, ...

  u_i^{n+1} = −u_i^{n−1} + 2u_i^n + σ² (u_{i+1}^n − 2u_i^n + u_{i−1}^n) + O(Δt², Δx²)
            = −u_i^{n−1} + σ² u_{i+1}^n + 2(1 − σ²) u_i^n + σ² u_{i−1}^n   (4.6.5)

  σ = c Δt/Δx, i = 1, 2, 3, ..., nx−1, n = 1, 2, 3, ...

• For n = 1, Eq. (4.6.5) requires the solution at t = −Δt, which can be
  obtained using the derivative initial condition at t = 0, Eq. (4.6.2).
• Discretizing the initial condition: At n = 1, discretizing the
  derivative initial condition at t = 0 with a central difference gives

  (u_i^1 − u_i^{−1})/(2Δt) + O(Δt²) = g(x_i)
      ⇒ u_i^{−1} = u_i^1 − 2Δt g_i + O(Δt²)                       (4.6.6)

• Eq. (4.6.5) with (4.6.6) at n = 1 completes the discretization and
  gives a second order solution in both space and time.
• Alternatively, using a Taylor series at n = 1, one can write

  u(x_i, t₁) = u(x_i, 0 + Δt)
      = u(x_i, 0) + Δt u_t(x_i, 0) + (Δt²/2!) u_tt(x_i, 0)
        + (Δt³/3!) u_ttt(x_i, 0) + ···
      = f_i + Δt g_i + (c² Δt²/2!) (f_{i+1} − 2f_i + f_{i−1})/Δx²
        + O(Δx², Δt³)                                             (4.6.7)

  using (4.6.2) and u_tt = c² u_xx.

• Therefore, Eq. (4.6.7) may be used to compute the solution at
  t = Δt and, as usual, Eq. (4.6.5) at all later time levels.

• Since, from the initial condition Eq. (4.6.2), the function f is
  known explicitly, the second partial derivative of f with respect
  to x in Eq. (4.6.7) can be replaced with the exact partial
  derivative.
Consistency and Stability of Eq. (4.6.5)

• The Taylor series expansion of Eq. (4.6.5) gives

  ∂²u/∂t² − c² ∂²u/∂x²
      = −(1/12) [ Δt² ∂⁴u/∂t⁴ − c² Δx² ∂⁴u/∂x⁴ ] + O(Δt⁴, Δx⁴)    (4.6.8)

• Equation (4.6.5) converges to the governing Eq. (4.6.1) as the
  step lengths Δx, Δt tend to zero; therefore the scheme Eq. (4.6.5)
  is consistent.

• Equation (4.6.8) also indicates that the scheme Eq. (4.6.5) is
  second order in both t and x.
• For verifying stability, substitute u_i^n = K^n e^{Iiφ} in Eq. (4.6.5)
  and simplify to get

  K^{n+1} e^{Iiφ} = −K^{n−1} e^{Iiφ} + σ² K^n e^{I(i+1)φ}
                  + 2(1 − σ²) K^n e^{Iiφ} + σ² K^n e^{I(i−1)φ}

• Dividing by K^n e^{Iiφ}, and writing K for the amplification factor
  K^{n+1}/K^n = K^n/K^{n−1}, gives

  K + K^{−1} = σ² (e^{Iφ} + e^{−Iφ}) + 2(1 − σ²) = 2σ² cos φ + 2(1 − σ²)
             = 2 (1 − 2σ² sin²(φ/2))

  K² − 2AK + 1 = 0 with A = 1 − 2σ² sin²(φ/2)                     (4.6.9)

• Computing K from Eq. (4.6.9) gives

  K = A ± √(A² − 1) = A ± I√(1 − A²), provided A² < 1             (4.6.10)
• Equation (4.6.10) guarantees |K| = 1, which is better than |K| > 1
  because in the former the error at worst persists in step with the
  solution instead of blowing up, as happens in the latter situation.
  However, Eq. (4.6.10) is valid only when

  A² ≤ 1 ⇔ −1 ≤ A ≤ 1 ⇔ −1 ≤ 1 − 2σ² sin²(φ/2) ≤ 1
       ⇔ σ² sin²(φ/2) ≤ 1                                         (4.6.11)

• Finally, Eq. (4.6.11) is true whenever

  σ = c Δt/Δx ≤ 1                                                 (4.6.12)

• Equation (4.6.12) is the required condition for the explicit
  scheme Eq. (4.6.5) for the wave Eq. (4.6.1) to be stable.
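The two roots of Eq. (4.6.9) can be checked numerically for any σ. The sketch below (assuming NumPy) confirms that |K| = 1 for all phase angles when σ ≤ 1, and that |K| > 1 for some angle when σ > 1:

```python
import numpy as np

def max_abs_root(sigma, nphi=721):
    """Largest |K| over phase angles, where K solves
    K^2 - 2AK + 1 = 0 with A = 1 - 2 sigma^2 sin^2(phi/2), Eq. (4.6.9)."""
    worst = 0.0
    for phi in np.linspace(0.0, np.pi, nphi):
        A = 1.0 - 2.0 * sigma**2 * np.sin(phi / 2.0) ** 2
        roots = np.roots([1.0, -2.0 * A, 1.0])
        worst = max(worst, float(np.max(np.abs(roots))))
    return worst
```

Since the product of the two roots is 1, whenever they are complex conjugates (A² < 1) both lie on the unit circle; once A < −1, one real root falls below −1 in magnitude's reciprocal and the other beyond it.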
CFL (Courant – Friedrichs – Lewy) Condition

• It was stated during the classification of the PDEs that
  hyperbolic equations have a domain of dependence.

• To illustrate this, consider the D'Alembert solution of the wave
  Eq. (4.6.1)-(4.6.2), with x ∈ (−∞, ∞), given by

  u(x,t) = (1/2) [ f(x − ct) + f(x + ct) ]
         + (1/2c) ∫_{x−ct}^{x+ct} g(v) dv                         (4.6.13)

• Equation (4.6.13) demonstrates that the solution of the wave
  equation at any point (x, t) depends only on the triangular region
  bounded by the left running characteristic, the right running
  characteristic and the x-axis.
• Therefore, any numerical scheme which is developed to solve
  Eq. (4.6.1) should also use all the data from this region.

• The condition which guarantees such a criterion is called the CFL
  condition.

• Since Eq. (4.6.13) uses f and g in the region (x_i − c t_n, x_i + c t_n)
  on the x-axis to compute the solution at any given point (x_i, t_n),
  the CFL condition (the numerical domain of dependence should contain
  the analytical domain of dependence) requires

  x_i − nΔx ≤ x_i − c t_n = x_i − c nΔt ⇒ Δx ≥ cΔt
      ⇒ σ = cΔt/Δx ≤ 1                                            (4.6.14)
• Condition Eq. (4.6.14) is equivalent to the stability condition
  σ ≤ 1 for the explicit scheme Eq. (4.6.5).

• Further, if c = 1 and the step lengths Δt = Δx, then from Eq. (4.6.8)
  the scheme Eq. (4.6.5) is fourth order accurate.

• Therefore, if the step lengths are fixed such that σ = 1, then the
  explicit scheme Eq. (4.6.5) gives the best solution (in this
  particular case, that is, for c = 1 and Δt = Δx, the entire right
  hand side of Eq. (4.6.8) becomes zero and Eq. (4.6.5) produces the
  exact solution).
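This exactness at σ = 1 can be verified directly. The sketch below (assuming NumPy) runs the explicit scheme Eq. (4.6.5) with c = 1 and Δt = Δx = 0.1, taking for illustration the initial data u(x,0) = sin πx, u_t(x,0) = 0, whose exact solution is sin(πx) cos(πt); the first level is built from the Taylor start-up Eq. (4.6.7):

```python
import numpy as np

nx, steps = 10, 10
dx = dt = 1.0 / nx                    # c = 1, so sigma = c*dt/dx = 1
x = np.linspace(0.0, 1.0, nx + 1)
u_prev = np.sin(np.pi * x)            # f(x); here g(x) = 0
u = u_prev.copy()                     # level 1 from Eq. (4.6.7)
u[1:-1] = u_prev[1:-1] + 0.5 * (dt / dx) ** 2 * (
    u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])
sigma2 = 1.0                          # sigma^2
for n in range(1, steps):             # Eq. (4.6.5)
    u_new = u.copy()
    u_new[1:-1] = (-u_prev[1:-1] + sigma2 * (u[2:] + u[:-2])
                   + 2 * (1 - sigma2) * u[1:-1])
    u_prev, u = u, u_new

t = steps * dt
exact = np.sin(np.pi * x) * np.cos(np.pi * t)
```

After ten steps (t = 1) the computed solution agrees with −sin πx to round-off, illustrating that with σ = 1 the right-hand side of Eq. (4.6.8) vanishes.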
Numerical Illustration

• Compute the solution of the following wave equation using the
  explicit scheme Eq. (4.6.5) for 10 time steps.

  ∂²u/∂t² = ∂²u/∂x², 0 < x < 1, t > 0
  u(x,0) = sin(πx), ∂u/∂t (x,0) = 0, 0 ≤ x ≤ 1                    (4.6.15)
  u(0,t) = u(1,t) = 0, t > 0

• The exact solution of the problem Eq. (4.6.15) is

  sin(πx) cos(πt)                                                 (4.6.16)
• Fixing Δx = 0.1 and Δt = 0.075 (so that σ = 0.75), the scheme Eq.
  (4.6.5), after 10 time steps (that is, at time t = 0.75), produces a
  numerical solution which is compared with the analytical solution
  in Table 4.6.1.

• The percentage error is 0.38%.

• The numerical solution in Table 4.6.1 is generated using the
  exact solution at the first time step Δt.

• In the computations, if Δt is changed to 0.1 (σ = 1), then, as
  discussed earlier, the explicit scheme produces the exact solution
  (except for some machine error of order 10⁻¹⁵).
              x=0.1     x=0.2     x=0.3     x=0.4     x=0.5
  Exact Sol. -0.2185   -0.4156   -0.5721   -0.6725   -0.7071
  Num. Sol.  -0.2177   -0.4140   -0.5699   -0.6699   -0.7044

Table 4.6.1 Comparison of exact and numerical solutions with σ = 0.75

• In the above computations, if Δt is fixed such that σ = 1.25, then
  after 30 time steps the explicit scheme produces a numerical
  solution with error more than 500%, because with σ = 1.25 the
  scheme violates both the stability and CFL conditions.
