Theme 2 Part I

The document outlines numerical methods for solving initial value problems for ordinary differential equations. It introduces one-step methods, such as the Taylor and Runge-Kutta methods, and then discusses multistep methods, providing details on the local truncation error and stability of these methods.


Outline

1. Source

2. Introduction

3. One-step methods

4. One-step methods: Taylor and Runge-Kutta methods

5. Multistep methods
References
RJL, Chapter 5.0
Notes
▶ Differential equations are often divided into two classes,
ODEs and PDEs, according to the number of independent
variables, and are usually studied separately.
▶ A more meaningful division, however, is between initial
value problems (IVP), which usually model time-dependent
phenomena, and boundary value problems (BVP), which
generally model steady-state systems.
▶ We begin with a simple IVP for a time-dependent ODE,

u′ = f(u, t), u(t0) = u0, t > t0

▶ The basic existence and uniqueness theorem¹ states that if
f is Lipschitz continuous over some region D then there
is a unique solution to the IVP.

¹See Burden et al., Numerical Analysis, 10E, Cengage Learning.
Notes
▶ The IVP differs from the BVP in that we are given all the
data at the initial time, t0 = 0 say, and we should march
forward in time, computing approximations at successive
times t1 , t2 , · · · .
▶ Given v0 = u0, we want to compute approximations v1, v2, · · ·
satisfying vn ≈ u(tn).
▶ We will use k to denote the time step, i.e., tn = nk for
n ≥ 0, and k = tn+1 − tn for a uniform grid.
▶ The notation ∆t is also commonly used for the time step.

Remark 1
Analysis of schemes for boundary value problems will be
considered in Theme 5, under elliptic partial differential
equations.
One-step method: general framework
▶ Any one-step method for approximating the solution, u(t),
of the initial-value problem

u′ = f(u, t), for a ≤ t ≤ b, with u(t0) = u0

▶ can be expressed in the form

vn+1 = vn + kϕ(tn, vn, k), for n = 0, 1, 2, · · · , N − 1

for some function ϕ.
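As a hedged illustration (not part of the notes), this framework translates directly into code; the names one_step_solve and phi below are hypothetical, and forward Euler is recovered with ϕ(tn, vn, k) = f(vn, tn). A minimal Python sketch:

# Minimal sketch of the general one-step framework vn+1 = vn + k*phi(tn, vn, k).
def one_step_solve(phi, t0, v0, k, N):
    """March N steps of size k from (t0, v0) with increment function phi."""
    ts, vs = [t0], [v0]
    for n in range(N):
        vs.append(vs[-1] + k * phi(ts[-1], vs[-1], k))
        ts.append(ts[-1] + k)
    return ts, vs

# Forward Euler as the simplest instance: phi(t, v, k) = f(v, t).
f = lambda v, t: -v                      # model problem u' = -u
ts, vs = one_step_solve(lambda t, v, k: f(v, t), 0.0, 1.0, 0.1, 10)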

Desirable property
An ideal difference-equation method would have the property
that, given a tolerance ε > 0, a minimal number of mesh points
could be used to ensure that the global error, |u(tn ) − vn |, did
not exceed ε for any n = 0, 1, · · · , N − 1.
Recall: u′ = f(u, t), u(t0) = u0, t > t0
1. Forward Euler scheme

   δ+ vn = (1/k)[vn+1 − vn] = f(vn, tn), n = 0, 1, ...

2. Backward Euler scheme

   δ− vn = (1/k)[vn − vn−1] = f(vn, tn), n = 0, 1, ...
▶ These methods are called one-step methods because the
approximation at the mesh point tn+1 involves information
from only one of the previous mesh points, tn.
▶ Although these methods might use function evaluation
information at points between tn and tn+1 , they do not
retain that information for direct use in future
approximations.
▶ All information used is obtained within the subinterval
over which the solution is being approximated.
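The backward Euler scheme above is implicit: the unknown vn appears inside f. A minimal sketch of one way to handle this, assuming k times the Lipschitz constant of f is small enough for fixed-point iteration to converge (for stiff problems Newton's method is preferred); the function name and tolerances are illustrative:

def backward_euler_step(f, v, t, k, max_iters=50, tol=1e-12):
    """One backward Euler step: solve w = v + k*f(w, t + k) for w by fixed-point iteration."""
    w = v + k * f(v, t)                  # forward Euler predictor as initial guess
    for _ in range(max_iters):
        w_new = v + k * f(w, t + k)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w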
Definition 1
The truncation error is the amount by which the solution of
the differential equation fails to satisfy the approximate
(finite difference) equation, in normalized form.²
Example 1
The local truncation error of the trapezoidal method is

τn = [u(tn+1) − u(tn)]/k − (1/2)[f(u(tn)) + f(u(tn+1))]
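A quick numerical check of this formula (an illustrative sketch; the choice f(u) = −u with exact solution u(t) = e^{−t} is an assumption made here for testing): evaluating τn along the exact solution should shrink like O(k²) as k is halved.

import math

def trapezoidal_lte(k, t=0.5):
    """Local truncation error of the trapezoidal method for u' = -u at u(t) = exp(-t)."""
    u = lambda s: math.exp(-s)
    f = lambda v: -v
    return (u(t + k) - u(t)) / k - 0.5 * (f(u(t)) + f(u(t + k)))

for k in (0.1, 0.05, 0.025):
    print(k, trapezoidal_lte(k))         # magnitude should drop by ~4 per halving of k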

Definition 2
An approximate (finite difference) method is consistent with
the differential equation if the truncation error goes to zero
as the step size k goes to zero.
Definition 3
An approximate (finite difference) method is stable if the error
goes to zero as the truncation error goes to zero.
²Normalized means in a form such that the terms in the difference
equation approximate the corresponding terms in the differential equation.
▶ Consider a one-step method: we now study the convergence
of the sequence {vn }0≤n≤N ⊂ Rn to the solution u(t),
t ∈ [a, b].
▶ The simplest method we consider is the one-step Euler
method, given by
vn+1 = vn + kf(vn, tn), v0 = u(t0), n = 0, 1, 2, · · ·

▶ To derive an error bound for the Euler method, we need the
following two computational lemmas.
Lemma 1
For all x ≥ −1 and any positive constant m, we have
0 ≤ (1 + x)^m ≤ e^{mx}.
Lemma 2
Suppose that there exist positive constants k, L and A such
that the members of the sequence e0, e1, · · · satisfy
en+1 ≤ en(1 + kL) + A, n = 0, 1, 2, · · · . Then

en ≤ e0 e^{nkL} + (A/(kL))[e^{nkL} − 1].
Proof of Lemma 2.
By recursively applying en+1 ≤ en(1 + kL) + A, we have

en ≤ (1 + kL)^n e0 + A[1 + (1 + kL) + (1 + kL)² + · · · + (1 + kL)^{n−1}]

   = (1 + kL)^n e0 + A Σ_{j=0}^{n−1} (1 + kL)^j

   = (1 + kL)^n e0 + A[(1 + kL)^n − 1]/(kL)

   ≤ e^{nkL} e0 + A[e^{nkL} − 1]/(kL)

We have used the fact that (1 + kL)^n ≤ e^{nkL} from Lemma 1.
Theorem 1
Suppose that f(u, t) is a continuous vector-valued function on

D = {(u, t) | t ∈ [a, b], u ∈ Rn}

and satisfies a Lipschitz condition in the variable u over D
with Lipschitz constant L. Suppose that u′′(t) exists and is
continuous for t ∈ [a, b] and that

max_{t∈[a,b]} ∥u′′(t)∥∞ = M,

where u(t) is the unique solution of the IVP. Let {vn}_{n=1}^{N} be the
discrete solution generated by the Euler method. Then we have
the following error estimate:

max_{0≤n≤N} ∥vn − u(tn)∥ ≤ k (M/(2L)) [e^{L(b−a)} − 1]

Proof.
Exercise: See Assignment 2.
Proof.
The discretisation error at step n is given by en = vn − u(tn ).
Notice that v0 = u(t0 ) so that e0 = 0. Taylor’s theorem gives

u(tn+1) = u(tn) + kf(u(tn), tn) + k² u′′(ξn)/2, ξn ∈ [tn, tn+1]

Subtracting the above equation from the Euler scheme,

en+1 = vn − u(tn) + k[f(vn, tn) − f(u(tn), tn)] − k² u′′(ξn)/2
     = en + k[f(vn, tn) − f(u(tn), tn)] − k² u′′(ξn)/2

Taking the norm of each side of the equation gives

∥en+1∥ ≤ ∥en∥ + k∥f(vn, tn) − f(u(tn), tn)∥ + (k²/2)M ≤ (1 + kL)∥en∥ + (k²/2)M

using the Lipschitz condition of f and ∥e0∥ = 0. The result
follows from Lemma 2.
Exercise 1
Consider the simple scalar linear initial value problem

u′ = λu + g(t), u(t0) = u0,

where λ is a nonnegative constant.

1. Show that the forward Euler scheme satisfies
vn+1 = (1 + kλ)vn + kg(tn).
2. Find the truncation error, τn .
3. Show that the global error satisfies en+1 = (1 + kλ)en − kτn .
4. Conclude that the method converges and is first-order
accurate.
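A numerical check of part 4 is sketched below (an illustration, not part of the exercise: it takes λ = 1 and g ≡ 0, so the exact solution is u(t) = e^t): for a first-order method, halving k should roughly halve the maximum error.

import math

def euler_max_error(k, lam=1.0, T=1.0):
    """Forward Euler for u' = lam*u, u(0) = 1 on [0, T]; returns max |vn - u(tn)|."""
    N = round(T / k)
    v, err = 1.0, 0.0
    for n in range(N):
        v = (1 + k * lam) * v            # vn+1 = (1 + k*lam) vn, since g = 0
        err = max(err, abs(v - math.exp((n + 1) * k * lam)))
    return err

for k in (0.1, 0.05, 0.025):
    print(k, euler_max_error(k))         # error should halve as k halves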
▶ The idea is to determine accurate techniques with minimal
computational effort; to that end, we focus on the local
truncation error of the method.
▶ The local truncation error is the measure of the amount by
which the exact solution of the DE fails to satisfy the
difference equation being used for the approximation.
▶ Even without knowledge of the exact solution, the local
truncation error serves as a useful measure of a method's accuracy.
▶ Consider the initial value problem:
u′ = f(u, t), u(t0) = u0, t > t0

Taylor method of order p

v0 = u0
vn+1 = vn + kT^(p)(tn, vn) for each n = 0, 1, 2, . . .

where T^(p)(tn, vn) = f(tn, vn) + (k/2)f′(tn, vn) + . . . + (k^{p−1}/p!)f^{(p−1)}(tn, vn)
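As an illustrative sketch of the order-2 case (the right-hand side f(t, u) = u − t² + 1 is borrowed from Example 3 later in these notes, and f′ = df/dt = f_t + f_u·f is worked out by hand):

def taylor2_step(t, v, k):
    """One step of the order-2 Taylor method for u' = u - t^2 + 1.
    T^(2) = f + (k/2) f', with f' = f_t + f_u*f = u - t^2 - 2t + 1 derived by hand."""
    fval = v - t**2 + 1
    fprime = v - t**2 - 2*t + 1
    return v + k * (fval + (k / 2) * fprime)

t, v, k = 0.0, 0.5, 0.2
for _ in range(5):                       # march from t = 0 to t = 1
    v = taylor2_step(t, v, k)
    t += k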
Notes
▶ Polynomial approximations can also be used to achieve high
accuracy instead of using higher order finite difference
approximations.
▶ This introduces the concept of multistage methods where
intermediate values of the solution and its derivative are
generated and used within a single time step.

Example 2
Consider the following 2-stage Runge-Kutta method:

v∗ = vn + (k/2)f(vn, tn),
vn+1 = vn + kf(v∗, tn + k/2)
▶ The disadvantage of Taylor methods is that they require the
computation of higher derivatives of f(t, u).
▶ Runge-Kutta methods retain a high-order local truncation
error while eliminating the need to compute derivatives of f.
▶ The Euler method, Taylor methods, and Runge-Kutta methods
are called one-step methods because approximation for the
mesh point tn+1 involves information from only one of the
previous mesh points tn .

Midpoint Method

v0 = u0
vn+1 = vn + kf(tn + k/2, vn + (k/2)f(tn, vn)) for each n = 0, 1, 2, . . .
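A minimal implementation of this method (equivalently, the 2-stage Runge-Kutta method of Example 2), assuming a right-hand side with argument order f(t, v):

def midpoint_step(f, t, v, k):
    """One step of the midpoint (2-stage Runge-Kutta) method."""
    v_star = v + (k / 2) * f(t, v)       # half-step Euler predictor
    return v + k * f(t + k / 2, v_star)  # full step using the midpoint slope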
Multistep methods
▶ The approximate solution is already available at each of the
mesh points t0, t1, · · · , tn before the approximation at tn+1 is
obtained, and the error |vn − u(tn)| tends to increase with n,
▶ so it seems reasonable to develop methods that use these
more accurate previous data when approximating the
solution at tn+1.
▶ Methods using the approximation at more than one previous
mesh point to determine the approximation at the next
point are called multistep methods.
▶ We will now give a precise definition of these methods,
together with the definition of the two types of multistep
methods.
▶ Higher order accuracy is obtained by using multistep
methods that involve other previous values.
▶ Examples (the first is implemented in the sketch after this list):

1. the midpoint method (leapfrog method)

   (1/(2k))[vn+1 − vn−1] = f(vn)

2. the second-order implicit 2-step method

   (1/(2k))[3vn+1 − 4vn + vn−1] = f(vn+1)

3. the trapezoidal method

   (1/k)[vn+1 − vn] = (1/2)[f(vn) + f(vn+1)]
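A sketch of the leapfrog method for an autonomous f(v); being a 2-step method, it needs a second starting value v1, generated here by one forward Euler step (an illustrative choice; in practice a starting method of matching order is used):

def leapfrog_solve(f, v0, k, N):
    """Leapfrog: vn+1 = vn-1 + 2k*f(vn). A 2-step method needing two starting values."""
    vs = [v0, v0 + k * f(v0)]            # v1 from one forward Euler step
    for n in range(1, N):
        vs.append(vs[n - 1] + 2 * k * f(vs[n]))
    return vs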

▶ What are the advantages and disadvantages of one-step and
multistep methods?
Definition 4 (m-step Method)
▶ An m-step multistep method for solving the initial-value
problem
y ′ = f (t, y ), a ≤ t ≤ b, y (a) = α

▶ has a difference equation for finding the approximation
wn+1 at the mesh point tn+1 represented by the following
equation, where m is an integer greater than 1:

wn+1 = am−1 wn + am−2 wn−1 + · · · + a0 wn+1−m
     + h[bm f(tn+1, wn+1) + bm−1 f(tn, wn) + · · · + b0 f(tn+1−m, wn+1−m)]

▶ for n = m − 1, m, · · · , N − 1, where h = (b − a)/N, the
a0, a1, · · · , am−1 and b0, b1, · · · , bm are constants, and the
starting values are

w0 = α, w1 = α1, w2 = α2, · · · , wm−1 = αm−1
Recall.

wn+1 = am−1 wn + am−2 wn−1 + · · · + a0 wn+1−m
     + h[bm f(tn+1, wn+1) + bm−1 f(tn, wn) + · · · + b0 f(tn+1−m, wn+1−m)]

Explicit and Implicit Methods


▶ When bm = 0, the method is called explicit, or open,
because the difference equation then gives wn+1 explicitly
in terms of previously determined values.
▶ When bm ̸= 0, the method is called implicit, or closed,
because wn+1 occurs on both sides of the difference
equation, so wn+1 is specified only implicitly.
The 4th-order Adams-Bashforth method

w0 = α, w1 = α1, w2 = α2, w3 = α3

wn+1 = wn + (h/24)[55f(tn, wn) − 59f(tn−1, wn−1) + 37f(tn−2, wn−2) − 9f(tn−3, wn−3)]

for each n = 3, 4, · · · , N − 1. The scheme has a LTE of O(h⁴).

The 4th-order Adams-Moulton method

w0 = α, w1 = α1, w2 = α2

wn+1 = wn + (h/24)[9f(tn+1, wn+1) + 19f(tn, wn) − 5f(tn−1, wn−1) + f(tn−2, wn−2)]

for each n = 2, 3, · · · , N − 1. The scheme has a LTE of O(h⁴).
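A minimal sketch of the explicit AB4 scheme as stated above (function and variable names are illustrative); it assumes the four starting values t0..t3 and w0..w3 are already available, e.g. from RK4, and a right-hand side f(t, w):

def ab4_solve(f, ts, ws, h, N):
    """4th-order Adams-Bashforth; ts and ws hold the starting values t0..t3, w0..w3."""
    fs = [f(t, w) for t, w in zip(ts, ws)]
    for n in range(3, N):
        w_next = ws[n] + (h / 24) * (55 * fs[n] - 59 * fs[n - 1]
                                     + 37 * fs[n - 2] - 9 * fs[n - 3])
        ws.append(w_next)
        ts.append(ts[n] + h)
        fs.append(f(ts[n + 1], ws[n + 1]))
    return ts, ws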
Starting Values & Accuracy
▶ The four (explicit AB4) or three (implicit AM4) starting
values must be specified, generally by assuming w0 = α and
generating the remaining values by either a Runge-Kutta or
Taylor method.
▶ The implicit methods are generally more accurate than the
explicit methods, but to apply an implicit method such as
the Adams-Moulton Method directly, we must solve the
implicit equation for wn+1 .
▶ This is not always possible, and even when it can be done
the solution for wn+1 may not be unique.
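One standard workaround, sketched here under the assumption that a single corrector pass suffices, is the Adams predictor-corrector idea: predict wn+1 with the explicit AB4 formula, then correct it once by inserting the prediction into the implicit AM4 right-hand side (names are illustrative):

def abm4_step(f, ts, ws, fs, n, h):
    """One Adams-Bashforth-Moulton step: AB4 predictor, one AM4 corrector pass."""
    w_pred = ws[n] + (h / 24) * (55 * fs[n] - 59 * fs[n - 1]
                                 + 37 * fs[n - 2] - 9 * fs[n - 3])
    t_next = ts[n] + h
    # Corrector: AM4 with the implicit f(t_{n+1}, w_{n+1}) evaluated at the prediction.
    w_corr = ws[n] + (h / 24) * (9 * f(t_next, w_pred) + 19 * fs[n]
                                 - 5 * fs[n - 1] + fs[n - 2])
    return t_next, w_corr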
Example 3
▶ Assume that we have used the Runge-Kutta method of order 4
with h = 0.2 to approximate the solutions to the initial
value problem

y′ = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5

▶ The first four approximations were found to be
y(0) = w0 = 0.5, y(0.2) ≈ w1 = 0.8292933,
y(0.4) ≈ w2 = 1.2140762, and y(0.6) ≈ w3 = 1.6489220.
▶ Use these as starting values for the 4th-order
Adams-Bashforth method to compute new approximations for
y (0.8) and y (1.0), and compare these new approximations to
those produced by the Runge-Kutta method of order 4.
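A hedged sketch of the requested computation, reusing the ab4_solve function from the earlier sketch; one can check that y(t) = (t + 1)² − 0.5e^t solves this IVP, which gives a reference for the comparison (printed values are not reproduced here):

import math

f = lambda t, y: y - t**2 + 1
ts = [0.0, 0.2, 0.4, 0.6]                          # mesh points of the starting values
ws = [0.5, 0.8292933, 1.2140762, 1.6489220]        # RK4 starting values from above
ts, ws = ab4_solve(f, ts, ws, h=0.2, N=5)          # advance to t = 0.8 and t = 1.0
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)   # exact solution of this IVP
for t, w in zip(ts, ws):
    print(t, w, abs(w - exact(t)))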
