Chapter 2
Numerical methods for solving initial value problems
MATH2052
Chengjie Cai
School of Mathematical Sciences
2022-23
Lecture 3 The forward Euler method
Last time, we:
learnt what numerical methods are, and why we use them;
learnt how to solve algebraic equations using the bisection method and
the Newton-Raphson method.
Today, we will:
learn how to approximate solutions to first-order ODEs using the
forward Euler method;
evaluate how good the forward Euler method is, and consider how to
make it more accurate.
Motivation and set-up
As stated in the previous lectures, some DEs can't be solved exactly;
the best we can do is approximate the solution using numerical
methods.
For numerical methods to work, we must specify initial data; we can
only find particular solutions.
A DE and IC coupled together are called an initial value problem
(IVP).
Over the next few lectures, we will look at how iterative numerical
methods for solving IVPs work...
...but for now, we will only look at examples where we can calculate
the exact solution, for the purpose of seeing how accurate the
methods are.
Iterative methods for IVPs
Suppose we are solving the IVP y' = f(x, y), with y(x_0) = y_0.
Set up equally spaced mesh points x_0 < x_1 < x_2 < ... with x_{n+1} = x_n + h, where the mesh size h > 0 is a fixed number, usually small.
A single-step (explicit) iterative method for an IVP finds approximations y_n to y(x_n), using a rule of the form
y_{n+1} = g(x_n, y_n), n = 0, 1, 2, ...
Hence, repeated application of g yields the approximations:
y_1 = g(x_0, y_0),
y_2 = g(x_1, y_1) = g(x_1, g(x_0, y_0)),
y_3 = g(x_2, y_2) = g(x_2, g(x_1, y_1)) = g(x_2, g(x_1, g(x_0, y_0))),
and so on.
But how do we pick g?
Taylor series methods
One family of iterative rules is based on expanding the unknown function y in the ODE y'(x) = f(x, y) as a Taylor series about a mesh point x, using the mesh size h; i.e.
y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + (h^3/6) y'''(x) + ...
A first-order Taylor series method
For h ≪ 1, the higher powers h^2, h^3, ... are very small, so they can be neglected, leading to the approximation
y(x + h) ≈ y(x) + h y'(x) = y(x) + h f(x, y),    (1)
where the RHS of the ODE has been used in the last step.
The forward Euler method
To obtain an iterative rule for the approximation (1), let x_n = x_0 + nh and let y_n ≈ y(x_n). Then
y_{n+1} = y_n + h f(x_n, y_n),  n = 0, 1, 2, ...    (2)
This method is called the (forward) Euler method or the Euler-Cauchy method.
The Euler method only uses the constant term and the term containing the first power of h in the Taylor series.
The omission of further terms causes an error, called the truncation error of the method.
The Euler method is the simplest numerical method for ODEs but is generally not accurate enough to be used in practice.
The forward Euler method: geometric interpretation
The solution y(x) is a curve that is everywhere parallel to the vector (1, f(x, y))^T.
[Figure: the exact solution curve plotted against x, with mesh points x_0, x_1, x_2, x_3, x_4 marked.]
The forward Euler method: geometric interpretation
The solution y(x) is a curve that is everywhere parallel to the vector (1, f(x, y))^T.
The forward Euler method approximates y(x_{n+1}) by moving along the vector h(1, f(x_n, y_n))^T from position (x_n, y_n).
See this in action with this GeoGebra file.
[Figure: the exact solution and the forward Euler approximation over the mesh points x_0, ..., x_4.]
Forward Euler method: example
Apply Euler's method to approximate the solution to the IVP
y' = x + y,  y(0) = 0,    (3)
on 0 ≤ x ≤ 1 with h = 0.2, working to 3 decimal places.
Compute the error at each step.
Solution:
From (3) we have f(x, y) = x + y; so, since h = 0.2, the iterative rule is y_{n+1} = y_n + 0.2(x_n + y_n).
Forward Euler method: example
It is often useful to tabulate the results and plot them on a graph.
y_{n+1} = y_n + 0.2(x_n + y_n):
n    x_n = x_0 + nh    y_n
0    0.0               0
1    0.2               0
2    0.4               0.040000
3    0.6               0.128000
4    0.8               0.273600
5    1.0               0.488320
Forward Euler method: example
The exact solution to (3) is y = e^x − x − 1 (check!). Let's compare our numerical approximation with the exact solution:
[Figure: the exact solution and the forward Euler approximation on 0 ≤ x ≤ 1.]
We can see that our approximation is quite poor, and also gets worse with each step.
Forward Euler method example: error analysis
As the exact solution y = e^x − x − 1 is known in this case, we can find the absolute errors |y(x_n) − y_n| for n = 0, ..., 5.
n    x_n    y_n       e^{x_n} − x_n − 1    |y_n − y(x_n)|
0    0.0    0         0                    0
1    0.2    0         0.021403             0.021403
2    0.4    0.040000  0.091825             0.051825
3    0.6    0.128000  0.222119             0.094119
4    0.8    0.273600  0.425541             0.151941
5    1.0    0.488320  0.718282             0.229962
The previous plot and the errors above suggest that the approximate solution given by the FE method with a mesh size of h = 0.2 is not particularly accurate. This can be improved by reducing h.
Of course, in practice we would not know the exact solution, but applying the method to a problem we do know the answer for allows us to better understand the method.
Forward Euler method: improving accuracy
Example (you try first!):
(a) Apply Euler's method to approximate the solution to the IVP
y' = e^{−2x} − y,  y(0) = 1,    (4)
on 0 ≤ x ≤ 0.5, with h = 0.25 and 0.125.
(b) Compare the results with the exact solution y = 2e^{−x} − e^{−2x}.
Solution:
From (4) we have f(x, y) = e^{−2x} − y; and thus y_{n+1} = y_n + h(e^{−2x_n} − y_n).
Forward Euler method: improving accuracy
Solution with h = 0.25:
n    x_n    y_n       y(x_n)    |y(x_n) − y_n|
0    0      1         1         0
1    0.25   1         0.951071  0.048929
2    0.50   0.901633  0.845182  0.056451
Solution with h = 0.125:
n    x_n    y_n       y(x_n)    |y(x_n) − y_n|
0    0      1         1         0
1    0.125  1         0.986193  0.013807
2    0.250  0.972350  0.951071  0.021279
3    0.375  0.926623  0.902212  0.024411
4    0.500  0.869841  0.845182  0.024659
Forward Euler method: improving accuracy
Solution with h = 0.0625:
n    x_n     y_n       y(x_n)    |y(x_n) − y_n|
0    0       1         1         0
1    0.0625  1         0.996329  0.003671
2    0.1250  0.992657  0.986193  0.006463
3    0.1875  0.979291  0.970769  0.008521
4    0.2500  0.961041  0.951071  0.009969
5    0.3125  0.938884  0.927970  0.010913
6    0.3750  0.913657  0.902212  0.011445
7    0.4375  0.886077  0.874435  0.011641
8    0.5000  0.856751  0.845182  0.011568
For each h, the values in blue and orange are directly comparable, as they are the errors at x = 0.25 and x = 0.5, respectively.
Let's compare these errors...
Forward Euler method: improving accuracy
h       Error at x = 0.25    Error at x = 0.5
0.25    0.048929             0.056451
0.125   0.021279             0.024659
0.0625  0.009969             0.011568
Halving h roughly halves the error. Check out this GeoGebra file.
[Figure: the exact solution and the forward Euler approximations for h = 0.25, 0.125 and 0.0625 on 0 ≤ x ≤ 0.5.]
Lecture 3 summary
A DE and IC coupled together are called an initial value problem
(IVP).
The forward Euler method is an iterative method for approximating
the solutions to first-order IVPs.
The method is given by the iterative formula
y_{n+1} = y_n + h f(x_n, y_n),  n = 0, 1, 2, ...,
where:
x_n = x_0 + nh are evenly spaced points that are h apart,
y_n is our approximation to y(x_n).
The method is simple, but inaccurate.
Accuracy can be improved by reducing h.
Doing the forward Euler method by hand is tedious and it is easy to
make mistakes. We shall soon learn how to write a program in Python
to execute the method.
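As a preview of that program, here is a minimal Python sketch of the forward Euler method; the function name and the use of plain lists are illustrative choices, and the example IVP is (3) from this lecture.

    # Forward Euler method for y' = f(x, y), y(x0) = y0 (illustrative sketch)
    def forward_euler(f, x0, y0, h, n_steps):
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            y0 = y0 + h * f(x0, y0)   # y_{n+1} = y_n + h f(x_n, y_n)
            x0 = x0 + h               # x_{n+1} = x_n + h
            xs.append(x0)
            ys.append(y0)
        return xs, ys

    # Example (3): y' = x + y, y(0) = 0, h = 0.2 on [0, 1]
    xs, ys = forward_euler(lambda x, y: x + y, 0.0, 0.0, 0.2, 5)
    print(ys[-1])   # approximation to y(1); about 0.488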
Lecture 3 optional feedback question
Consider the IVP
y' = x y^2,  y(0) = 1.
(a) Calculate the exact solution.
(b) Apply Euler's method to approximate the solution to the IVP on 0 ≤ x ≤ 1, with h = 0.25.
(c) Compute the error of the approximation at each mesh point and briefly comment on your findings.
(d) Comment on how we could make our approximation better.
Lecture 4 Analysing the errors of numerical methods
Last time, we:
learnt how to approximate solutions to first-order IVPs using the
forward Euler method;
evaluated how good the forward Euler method is, and considered how
to make it more accurate.
Today, we will:
learn how to describe and analyse the errors of numerical methods
(through `big O' notation);
consider the next steps in reducing the error of the forward Euler
method;
consider oscillating errors.
Big O notation
Last lecture, we saw the forward Euler method. We saw that the method is
not particularly accurate, but accuracy can be improved by reducing the
step size h.
We want a more formal way to describe the errors of numerical methods.
This is usually done using `big O' notation.
Big O notation
A function f(h) is said to be of order g(h) as h → 0, if there is some M > 0 and some δ > 0 such that
|f(h)| ≤ M |g(h)|  for 0 < h < δ.
We write f(h) = O(g(h)) as h → 0.
The 'as h → 0' is often omitted for convenience.
As h > 0 for our purposes, we only need to consider 0 < h < δ.
Big O notation: condensed version
f(h) = O(g(h)) if there is some M > 0 and some δ > 0 such that
|f(h)| ≤ M |g(h)|  for h < δ.
Example: f(h) = h + h^2 + h^3 (h > 0).
We can show that |f(h)| < 3h for h < 1:
|f(h)| = |h + h^2 + h^3| < h + h + h = 3h  for 0 < h < 1.
So we choose M = 3, δ = 1 and g(h) = h, giving f(h) = O(h).
Note: There are many ways to choose acceptable M and δ. For example, we could also show that |f(h)| < 2h for δ = (√5 − 1)/2 ≈ 0.62.
See this graphically here.
Big O notation: a few more details
Big O notation gives us a way to show that f(h) is bounded by g(h) for very small h.
For our purposes, f(h) will be a function of interest, and g(h) will be a single term that is a power of h, e.g. h, h^2, 1.
In the previous example, we could also show that |f(h)| < 3 for h < 1, giving f(h) = O(1). While this is true, it is less informative.
An analogy: Suppose the error in a measurement is less than an inch. While it is also true that the error is less than a mile, it is more informative to say that it is less than an inch.
Always pick the strictest order of magnitude you can, as this will be the most informative.
Big O notation: a short-cut
If h is very small, then h^2, h^3, ... are tiny.
The 'big O' picks out the largest term... therefore for our purposes, we do not have to go through the formal M-δ process; it is sufficient to simply identify the largest term as h → 0.
For functions that are not polynomials, we need to first write them out as Maclaurin series.
Also, we do not need to include any coefficients; we only need the power of h.
Example 1: e^h = 1 + h + h^2/2 + ... As h → 0, the largest term is 1; therefore, e^h = O(1).
Example 2: sin(3h) = 3h − (3h)^3/6 + ... As h → 0, the largest term is 3h; therefore, sin(3h) = O(h).
Big O notation: examples
Now you try! Find the order of the following functions as h → 0.
(a) f_1(h) = h^2 + h^3 + h^4
(b) f_2(h) = 4 cos(h)
(c) f_3(h) = 3h + 5h^2 − 2/h
Solution:
(a) The largest term is h^2 as h → 0; therefore, f_1(h) = O(h^2).
(b) 4 cos(h) = 4 − 2h^2 + ... The largest term is 4 as h → 0; therefore, f_2(h) = O(1).
(c) The largest term is −2/h as h → 0; therefore, f_3(h) = O(1/h).
Local and global error
The error due to truncating the Taylor series at each step is called the local error; whereas the accumulated error at the end of the calculation, at the final value x = X, is referred to as the global error.
Recall that, from the Taylor expansion,
y(x + h) = y(x) + h f(x, y) + (h^2/2) y''(x) + ...,
where the first two terms on the right make up the Euler method and the remaining terms are O(h^2).
The local error for Euler's method is therefore O(h^2).
Roughly speaking, the global error for Euler's method is therefore
O(h^2) × n = O(h^2) × X/h = O(h),
where n = X/h is the number of steps.
An actual proof of this is beyond the scope of this course.
Demonstrating the order of the global error graphically
An easy way to show the order of the global error is to use a log-log plot; i.e. use a logarithmic scale on both axes.
Relationships of the form y = a x^k appear as straight lines on a log-log plot with slope k.
To show this, let X = log x and Y = log y. Take the logarithm of both sides of the above equation:
Y = log y = log a + k log x = log a + kX.
This is the equation of a straight line.
Demonstrating the order of the global error graphically
So if we plot the global error, |y(X) − y_N|, against h on a log-log plot, and the result is a line of gradient k, then we can say that the global error is O(h^k).
Let's revisit this table from Lecture 3.
h       Error at x = 0.25    Error at x = 0.5
0.25    0.048929             0.056451
0.125   0.021279             0.024659
0.0625  0.009969             0.011568
Let's consider a log-log plot of the global errors at x = 0.5 (the values in orange), for the 3 different values of h; a quick numerical check is sketched below.
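The gradient of the log-log line can also be estimated directly from the tabulated errors; here is a small illustrative Python check (the values are the errors at x = 0.5 from the table above):

    import math
    hs     = [0.25, 0.125, 0.0625]           # step sizes from the table
    errors = [0.056451, 0.024659, 0.011568]  # global errors at x = 0.5

    # slope between successive points of the log-log plot
    for (h1, e1), (h2, e2) in zip(zip(hs, errors), zip(hs[1:], errors[1:])):
        slope = (math.log(e2) - math.log(e1)) / (math.log(h2) - math.log(h1))
        print(round(slope, 2))   # both come out a little above 1, consistent with O(h)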
Error convergence plots
As the graph has roughly a gradient of 1, we can say that the global
error is O(h).
We say that the Euler method has O(h) convergence, and that the
method is an O(h) method.
We call this type of plot an error convergence plot.
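For reference, a hedged sketch of how an error convergence plot like this could be produced in Python (assuming matplotlib is available; the data are again the forward Euler errors at x = 0.5):

    import matplotlib.pyplot as plt

    hs     = [0.25, 0.125, 0.0625]
    errors = [0.056451, 0.024659, 0.011568]   # global errors at x = 0.5

    plt.loglog(hs, errors, 'o-', label='Forward Euler')
    plt.loglog(hs, hs, '--', label='O(h) reference')   # reference line of slope 1
    plt.xlabel('h')
    plt.ylabel('global error at x = 0.5')
    plt.legend()
    plt.show()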
The next steps to improving accuracy
We've seen that the forward Euler method has a local error that is O(h^2). This is because we neglected terms of O(h^2) in our Taylor expansion.
So it follows that we could reduce the local error to O(h^3) if we include the h^2 term in the expansion.
This gives the second-order Taylor method:
y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + O(h^3),
where the first three terms on the right make up the second-order Taylor method.
The local error for this method is O(h^3), and we can expect the global error to be O(h^2).
We will study this method in detail next lecture.
Oscillating error
From the examples of the forward Euler method in the previous lecture, it
appears as though the error increases as we take more steps; however, as
the following example shows, this is not always the case.
Example: Consider the IVP
y' = −y + 2 cos(x),  y(0) = 1,    (5)
on the interval 0 ≤ x ≤ 2π, with h = π/10, π/20, π/40.
Solution:
From (5), we have f(x, y) = −y + 2 cos(x); and thus:
y_{n+1} = y_n + h f(x_n, y_n) = y_n + h(−y_n + 2 cos(x_n)).
In this case, the exact solution is given by y = cos(x) + sin(x).
Forward Euler method: oscillating error
This figure shows our approximation for each h considered. We can see that the error does not always increase as x increases; the error oscillates.
[Figure: the exact solution and the forward Euler approximations for h = π/10, π/20, π/40 on 0 ≤ x ≤ 2π.]
Forward Euler method: oscillating error
This figure shows the error in our approximation for each h considered. Although the errors are all bounded, we have a smaller error for smaller h.
[Figure: the errors y(x_n) − y_n for h = π/10, π/20, π/40 on 0 ≤ x ≤ 2π; the errors oscillate between roughly −0.2 and 0.2.]
Lecture 4 summary
The local error is the error at each step in the method. It is due to the truncation of the Taylor expansion.
The global error is the accumulated error at the end of the calculation, at the final value x = X.
The global error doesn't always increase with each step; it can also oscillate.
Big O notation gives us a way to show that f(h) is bounded by g(h) for very small h.
The forward Euler method has local error O(h^2) and global error O(h).
The global error can be decreased by:
Reducing h.
Taking the next term(s) in the Taylor expansion (the second-order Taylor method).
Lecture 4 optional feedback question
(a) Use big O notation to find the order of each of these functions:
(i) f(h) = 100h + 10 − 1/h.
(ii) g(h) = 2 sin(h^2).
(iii) p(h) = (h^2 + h)/(√h + h^3).
(b) Let f(x, y) = x y^2 + e^{−x} sin(4y). Calculate ∂f/∂x and ∂f/∂y.
(c) What will the error convergence plot look like for the second-order Taylor method?
Lecture 5 The second-order Taylor method
Last time, we:
learnt how to describe and analyse the errors of numerical methods
(through `big O' notation);
considered the next steps in reducing the error of the forward Euler
method;
considered oscillating errors.
Today, we will:
derive and apply the second-order Taylor method to solve first-order
IVPs;
compare this method to the forward Euler method.
Recap: Euler method vs. second-order Taylor method
Forward Euler method: y(x + h) = y(x) + h y'(x) + O(h^2).
Second-order Taylor method: y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + O(h^3).
The forward Euler method has a local error of O(h^2), whereas the second-order Taylor method has a local error of O(h^3); one power of h smaller.
Given that this method looks to be considerably more accurate than the Euler method, we should study it.
The second-order Taylor method
Suppose, as before, we are trying to solve the following IVP:
y' = f(x, y),  y(x_0) = y_0.    (6)
As in the forward Euler method, we obtain y'(x) directly from the ODE; as y'(x) = f(x, y).
To deal with the y''(x) term, we differentiate (6). However, as y is a function of x, we must use the chain rule:
y''(x) = d/dx [f(x, y(x))] = ∂f/∂x + (∂f/∂y) y'(x).
Thus:
y''(x) = ∂f/∂x + f(x, y) ∂f/∂y.
This leads us to the following iterative rule.
The second-order Taylor method
Second-order Taylor series method
Let x_n = x_0 + nh, n = 0, 1, 2, ..., with h > 0, and let y_n ≈ y(x_n). Then:
y_{n+1} = y_n + h f(x_n, y_n) + (h^2/2) [∂f/∂x (x_n, y_n) + f(x_n, y_n) ∂f/∂y (x_n, y_n)].    (7)
This method has O(h^3) local (truncation) error.
Therefore, roughly speaking, the global error is O(h^3) × X/h = O(h^2).
We say the method is an O(h^2) method; it has O(h^2) convergence.
(7) looks rather complicated, but simplifies nicely for certain problems, especially if f(x, y) is linear in x and y, as the next example shows.
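A minimal Python sketch of rule (7), assuming the partial derivatives are supplied as functions (the function names are illustrative, and the example is the running IVP y' = x + y, y(0) = 0):

    # Second-order Taylor method: y_{n+1} = y_n + h f + (h^2/2)(f_x + f f_y)
    def taylor2(f, fx, fy, x0, y0, h, n_steps):
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            y0 = y0 + h * f(x0, y0) + 0.5 * h**2 * (fx(x0, y0) + f(x0, y0) * fy(x0, y0))
            x0 = x0 + h
            xs.append(x0)
            ys.append(y0)
        return xs, ys

    # Running example: f = x + y, so f_x = 1 and f_y = 1
    xs, ys = taylor2(lambda x, y: x + y, lambda x, y: 1.0, lambda x, y: 1.0, 0.0, 0.0, 0.2, 5)
    print(ys[-1])   # about 0.7027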
Second-order Taylor method: example
Find an approximate solution to the first-order IVP
y' = x + y,  y(0) = 0,    (8)
on the interval 0 ≤ x ≤ 1, using a second-order Taylor series method with h = 0.2.
Solution:
In this case, f(x, y) = x + y; hence:
∂f/∂x = 1  and  ∂f/∂y = 1.
Thus, applying (7), the method becomes
y_{n+1} = y_n + h(x_n + y_n) + (h^2/2)(1 + x_n + y_n).
Second-order Taylor method: example
With h = 0.2, the first two iterations are
y_1 = 0 + 0.2(0 + 0) + 0.02(1 + 0 + 0) = 0.020000,
y_2 = 0.02 + 0.2(0.2 + 0.02) + 0.02(1 + 0.2 + 0.02) = 0.088400.
Second-order Taylor method example
As usual, it is helpful to tabulate the iterations:
n xn yn
0 0 0
1 0.2 0.020000
2 0.4 0.088400
3 0.6 0.215848
4 0.8 0.415335
5 1.0 0.702708
Second-order Taylor method: example
The exact solution in this case is y(x) = e^x − x − 1.
We can therefore compare the errors of the Euler (E) and second-order Taylor series (TS) methods:
n    x_n    y_n^TS    y(x_n)    |y_n^TS − y(x_n)|    |y_n^E − y(x_n)|
0    0      0         0         0                    0
1    0.2    0.020000  0.021403  0.001403             0.021403
2    0.4    0.088400  0.091825  0.003427             0.051825
3    0.6    0.215848  0.222119  0.006271             0.094119
4    0.8    0.415335  0.425541  0.010206             0.151941
5    1.0    0.702708  0.718282  0.015574             0.229962
The second-order Taylor series method is considerably more accurate than the Euler method!
Second-order Taylor method: example
This plot shows just how much better the second-order Taylor approximation is for the previous example.
[Figure: the exact solution, the second-order Taylor approximation and the Euler approximation on 0 ≤ x ≤ 1.]
See this difference interactively with this GeoGebra file.
Second-order TS method: error convergence plot
The second-order Taylor approximation is (roughly) a straight line with gradient 2 on a log-log plot, demonstrating that the global error is O(h^2).
[Figure: log10(|y(1) − y_N|) against log10(h) for the Euler and second-order TS methods, with O(h) and O(h^2) reference lines.]
Second-order Taylor method: now you try!
Find an approximate solution to the first-order IVP
y' = x^2 y,  y(0) = 1,    (9)
on the interval 0 ≤ x ≤ 1, using a second-order Taylor series method with h = 0.25. Calculate the error at each step.
Solution:
In this case, f(x, y) = x^2 y; hence:
∂f/∂x = 2xy  and  ∂f/∂y = x^2.
Thus, applying (7), the method is
y_{n+1} = y_n + h x_n^2 y_n + (h^2/2)(2 x_n y_n + x_n^4 y_n).
Solution
The exact solution in this case is y(x) = e^{x^3/3}.
n    x_n    y_n       y(x_n)    |y_n − y(x_n)|
0    0      1         1         0
1    0.25   1         1.005222  0.005222
2    0.50   1.031372  1.042547  0.011175
3    0.75   1.130078  1.150993  0.020915
4    1      1.353141  1.395612  0.042471
The global error at x = 1 is 0.042471. Compare this with the global error using the forward Euler method, which is 0.164762. See this solution graphically here.
Euler method vs. second-order TS method convergence
Below is a table showing how the global error (GE) at x_N = 1 decreases as h decreases, in the above example, for both the Euler method and the second-order TS method.
h       |y_N^E − y(x_N)|    |y_N^TS − y(x_N)|
0.25    0.164762            0.042471
0.125   0.092246            0.011868
0.0625  0.049039            0.003136
As h halves, the GE for the Euler method roughly halves, but the GE for the TS method roughly quarters.
The GE of the Euler method is O(h). So if we multiply h by 0.5, the GE is (roughly) multiplied by 0.5.
The GE of the second-order TS method is O(h^2). So if we multiply h by 0.5, the GE is (roughly) multiplied by 0.5^2, which is 0.25.
Review and higher order methods
The second-order Taylor series method is more accurate than the Euler method, but it requires the calculation of the partial derivatives of f(x, y). This can be difficult, depending on the form of f(x, y).
Although third, fourth, fifth, etc. order Taylor series methods can be derived, finding expressions for the higher derivatives is difficult and leads to cumbersome formulae.
This then also leads to higher order methods being more computationally intensive.
So can we derive more accurate methods that aren't as cumbersome?
Yes; a commonly used family of techniques are the Runge-Kutta methods, which can give higher orders of accuracy without requiring the partial derivatives of f(x, y). We will learn about these in the next two lectures.
Lecture 5 summary
The second-order Taylor method keeps the h^2 term:
y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + O(h^3).
The iterative rule for this method is given by:
y_{n+1} = y_n + h f(x_n, y_n) + (h^2/2) [∂f/∂x (x_n, y_n) + f(x_n, y_n) ∂f/∂y (x_n, y_n)].
Pros and cons (cf. the Euler method):
✓ Much more accurate.
✗ Requires calculation of the partial derivatives of f(x, y), which may be difficult.
✗ More computationally intensive.
Lecture 5 optional feedback question
Consider the IVP
y' = x y^2,  y(0) = 1.
(a) Write down the exact solution (we calculated this in the feedback Q from lecture 3).
(b) Apply the second-order Taylor method to approximate the solution to the IVP on 0 ≤ x ≤ 1, with h = 0.25.
(c) Compute the error of the approximation at each mesh point and compare them to those from the Euler method (feedback Q from lecture 3).
(d) The Euler method and second-order Taylor method are described as O(h) and O(h^2) methods respectively. Write down what this actually means mathematically.
Lecture 6 The modified Euler method
Last time, we:
derived and applied the second-order Taylor method to solve first-order IVPs;
compared this method to the forward Euler method. We found that the second-order Taylor method:
✓ is much more accurate;
✗ can be difficult to compute, due to the requirement of the partial derivatives of f(x, y).
Today, we will:
derive and apply the modified Euler method; an O(h^2) method that is (in many ways) superior to the second-order Taylor method.
Numerical methods for DEs: our current arsenal
In the last few lectures, we discussed the Euler and Taylor series methods:
The (forward) Euler method is explicit and uses just the first-order derivative, provided by the original ODE.
Higher order Taylor methods are explicit, but use higher order derivatives that must be derived from the original ODE.
Higher order Taylor methods are more accurate than the Euler method, but are also more computationally intensive and can be cumbersome.
We now seek methods that are O(h^2), but do not require the calculation of higher order derivatives. Two such methods are the (implicit) trapezoidal method and the (explicit) modified Euler method.
The trapezoidal method
The trapezoidal method is an example of an implicit method.
It can be thought of as the trapezium rule applied to ODEs.
If we integrate the ODE y' = f(x, y) between x_n and x_{n+1}, we have
y(x_{n+1}) − y(x_n) = ∫_{x_n}^{x_{n+1}} f(x, y(x)) dx.
The trapezoidal method: discussion
[Figure: the area under f(x, y(x)) between x_n and x_{n+1}, i.e. ∫_{x_n}^{x_{n+1}} f(x, y) dx, approximated by a trapezium.]
The trapezoidal method:
does not use higher order derivatives.
is a second-order (O(h^2)) (implicit) method (proof not given).
is widely used to solve PDEs on discrete space-time grids (not covered here).
The trapezoidal method: discussion
The trapezoidal method uses the average of f(x, y) at x_n and x_{n+1}. The iterative rule for this method is therefore:
y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, y_{n+1})].
We notice that y_{n+1} appears on both sides of the equation; the formula is implicit in y_{n+1}.
Suppose that f(x, y) is linear in y; for example, f(x, y) = x + y. Then the iterative formula becomes
y_{n+1} = y_n + (h/2) [(x_n + y_n) + (x_{n+1} + y_{n+1})],
which can be rearranged to make y_{n+1} the subject.
It is still relatively straightforward to find y_{n+1} given y_n.
The trapezoidal method: discussion
Suppose instead that f(x, y) is nonlinear in y; for example, f(x, y) = sin(y). Then, the iterative formula becomes
y_{n+1} = y_n + (h/2) [sin(y_n) + sin(y_{n+1})].
Finding y_{n+1} in this case is much harder!
Usually a root-finding algorithm is required, such as the Newton-Raphson method.
The trapezoidal method is an example of an implicit method. Although it can be tricky to implement, it is an O(h^2) method, so will be much more accurate than the Euler method.
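A sketch of how the implicit step might be handled in Python, assuming we solve for y_{n+1} with a few fixed-point iterations at each step (Newton-Raphson or a library root-finder could be used instead; all names here are illustrative):

    import math

    # Trapezoidal method for y' = f(x, y): solve
    # y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y_{n+1})] at each step.
    def trapezoidal(f, x0, y0, h, n_steps, inner_iters=20):
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            x1 = x0 + h
            y1 = y0 + h * f(x0, y0)          # predictor: Euler guess for y_{n+1}
            for _ in range(inner_iters):     # fixed-point iteration for y_{n+1}
                y1 = y0 + 0.5 * h * (f(x0, y0) + f(x1, y1))
            x0, y0 = x1, y1
            xs.append(x0)
            ys.append(y0)
        return xs, ys

    # Nonlinear example from this slide: f(x, y) = sin(y)
    xs, ys = trapezoidal(lambda x, y: math.sin(y), 0.0, 1.0, 0.1, 10)
    print(ys[-1])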
The modified Euler method
We would like to modify the trapezoidal method to produce a scheme that:
keeps the second-order convergence property;
is explicit.
Consider the modified scheme:
y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, y*_{n+1})],  n = 0, 1, ...
We should choose y*_{n+1} so that it is easy to compute directly from y_n and is an approximation to y(x_{n+1}).
A simple choice is to use the Euler method to find y*_{n+1}:
y*_{n+1} = y_n + h f(x_n, y_n).
This method is called the modified Euler method or Heun's method.
The modified Euler method
The modified Euler method can thus be written as the two stage method:
y*_{n+1} = y_n + h f(x_n, y_n),
y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, y*_{n+1})].
The geometric reasoning behind this method is:
Find y*_{n+1} with the forward Euler method.
Calculate f (the slope) at (x_{n+1}, y*_{n+1}).
To get y_{n+1}, travel the first half of the step using the slope from the Euler method, and the second half using the slope at (x_{n+1}, y*_{n+1}).
[Figure: the exact solution, the forward Euler step and the modified Euler step from x_0 to x_1, with the half-way point x_0 + h/2 marked.]
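A minimal Python sketch of the two-stage rule (function name illustrative; the example is the running IVP y' = x + y, y(0) = 0):

    # Modified Euler (Heun) method: predict with Euler, then correct
    def modified_euler(f, x0, y0, h, n_steps):
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            y_star = y0 + h * f(x0, y0)                              # predictor
            y0 = y0 + 0.5 * h * (f(x0, y0) + f(x0 + h, y_star))      # corrector
            x0 = x0 + h
            xs.append(x0)
            ys.append(y0)
        return xs, ys

    # Running example: y' = x + y, y(0) = 0, h = 0.2 on [0, 1]
    xs, ys = modified_euler(lambda x, y: x + y, 0.0, 0.0, 0.2, 5)
    print(ys[-1])   # about 0.7027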
Runge-Kutta methods
The modified Euler method is an example of a predictor-corrector method.
At each step we first predict (Euler method) a value, and then correct to a value that is more accurate.
The modified Euler method is an example from a class of predictor-corrector methods called Runge-Kutta methods.
We avoid the computation of the derivatives of f by replacing them with values of f evaluated at one or more particular pairs of values of (x, y).
These particular values of (x, y) are chosen to make the order (accuracy) of the method as high as possible.
Two such methods of practical importance are:
the modified Euler method (second-order Runge-Kutta method);
the RK4 method (fourth-order Runge-Kutta method; studied in detail later in the course).
Auxiliary values notation
It is common to write Runge-Kutta methods using the auxiliary values notation.
Auxiliary values are typically denoted by k. For the modified Euler method, the auxiliary values are
k_1 = h f(x_n, y_n),
k_2 = h f(x_{n+1}, y_n + k_1),
and the full method can then be written as
y_{n+1} = y_n + (1/2)(k_1 + k_2).
Runge-Kutta methods of higher order use similar notation. For example, the fourth-order Runge-Kutta method (RK4) uses auxiliary values k_1, k_2, k_3 and k_4.
The modified Euler method: example
Apply the modified Euler method to approximate the solution to the IVP
y' = x + y,  y(0) = 0,    (10)
over 0 ≤ x ≤ 1 with h = 0.2, using the auxiliary value notation.
Solution:
Write down k_1 and k_2 in terms of x_n and y_n:
k_1 = h f(x_n, y_n) = 0.2(x_n + y_n),
k_2 = h f(x_{n+1}, y_n + k_1) = 0.2[(x_n + 0.2) + (y_n + k_1)].
The modified Euler method: example
First iteration:
k_1 = 0.2(x_0 + y_0) = 0.2(0 + 0) = 0,
k_2 = 0.2[(x_0 + 0.2) + (y_0 + k_1)] = 0.2[0.2 + 0] = 0.04,
y_1 = y_0 + (1/2)(k_1 + k_2) = 0 + (1/2)(0 + 0.04) = 0.02.
Second iteration:
k_1 = 0.2(x_1 + y_1) = 0.2(0.2 + 0.02) = 0.044,
k_2 = 0.2[(x_1 + 0.2) + (y_1 + k_1)] = 0.2[0.4 + 0.064] = 0.0928,
y_2 = y_1 + (1/2)(k_1 + k_2) = 0.02 + (1/2)(0.1368) = 0.0884.
The modified Euler method: example
We would then keep going until we had obtained all of the iterations that
we wanted.
Further values can be tabulated (as usual).
n xn yn
0 0 0
1 0.2 0.0200
2 0.4 0.0884
3 0.6 0.2158
4 0.8 0.4153
5 1.0 0.7027
The modified Euler method: accuracy
This plot shows just how superior the modified Euler method is over the original Euler method.
[Figure: the exact solution, the Euler approximation and the modified Euler approximation on 0 ≤ x ≤ 1.]
See this interactively with this GeoGebra file.
The modified Euler method: accuracy
Using the exact solution y(x) = e^x − x − 1, we can make a comparison between the Euler and modified Euler methods.
n    x_n    y_n^ME    y(x_n)    |y_n^ME − y(x_n)|    |y_n^E − y(x_n)|
0    0      0         0         0                    0
1    0.2    0.0200    0.0214    0.0014               0.0214
2    0.4    0.0884    0.0918    0.0034               0.0518
3    0.6    0.2158    0.2221    0.0063               0.0941
4    0.8    0.4153    0.4255    0.0102               0.1519
5    1.0    0.7027    0.7183    0.0156               0.2300
Error comparison between the Euler (E) and modified Euler (ME) methods, over 0 ≤ x ≤ 1 with h = 0.2, for IVP (10).
The modified Euler method: error convergence plot
This error convergence plot shows that the modified Euler (ME) method does indeed achieve O(h^2) convergence.
[Figure: log10(|y(1) − y_N|) against log10(h) for the Euler and ME methods, with O(h) and O(h^2) reference lines.]
The modified Euler method: now you try!
Find an approximate solution to the first-order IVP
y' = x^2 y,  y(0) = 1,    (11)
on the interval 0 ≤ x ≤ 1, using the modified Euler method with h = 0.25. Calculate the error at each step and compare the global error to that of the ordinary Euler and second-order Taylor methods (from example 2 in Lecture 5).
Solution:
Here f(x, y) = x^2 y, so k_1 = 0.25 x_n^2 y_n and k_2 = 0.25 x_{n+1}^2 (y_n + k_1).
Solution
First iteration:
k_1 = 0.25 x_0^2 y_0 = 0.25(0)(1) = 0,
k_2 = 0.25 x_1^2 (y_0 + k_1) = 0.25(0.0625)(1) = 0.015625,
y_1 = y_0 + (1/2)(k_1 + k_2) = 1.007813.
Second iteration:
k_1 = 0.25 x_1^2 y_1 = 0.25(0.0625)(1.007813) = 0.015747,
k_2 = 0.25 x_2^2 (y_1 + k_1) = 0.25(0.25)(1.023560) = 0.063973,
y_2 = y_1 + (1/2)(k_1 + k_2) = 1.047673.
The modified Euler method: example 2
We worked out last lecture that the exact solution in this case is y(x) = e^{x^3/3}.
n    x_n    y_n^ME    y(x_n)    |y_n^ME − y(x_n)|
0    0      1         1         0
1    0.25   1.007813  1.005222  0.002591
2    0.50   1.047673  1.042547  0.005126
3    0.75   1.158681  1.150993  0.007688
4    1      1.405353  1.395612  0.009741
See this solution graphically here.
At x = 1, the global errors are:
0.164762 for the forward Euler method.
0.042471 for the second-order Taylor method.
0.009741 for the modified Euler method.
In this particular example, the modified Euler method is significantly more accurate than even the second-order Taylor method!
Lecture 6 summary
The modified Euler method can be written as the two stage iterative rule:
y*_{n+1} = y_n + h f(x_n, y_n),
y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, y*_{n+1})].
In auxiliary value notation, this is written
y_{n+1} = y_n + (1/2)(k_1 + k_2),
where k_1 = h f(x_n, y_n) and k_2 = h f(x_{n+1}, y_n + k_1).
The modified Euler method:
✓ is explicit (i.e. y_{n+1} is only on the LHS of the iterative rule);
✓ is an O(h^2) method;
✓ does not require the calculation of any derivatives of f(x, y).
Due to these desirable attributes, this method is our favourite (so far!).
Lecture 6 optional feedback question
Consider the IVP from the previous feedback Q:
y' = x y^2,  y(0) = 1.
(a) Apply the modified Euler method to approximate the solution to the IVP on 0 ≤ x ≤ 1, with h = 0.25.
(b) Compute the error of the approximation at each mesh point and compare them to those from the Euler and second-order Taylor methods (feedback Q from lecture 5).
Lecture 7 The RK4 method
Last time, we introduced the modified Euler (ME) method.
An explicit version of the implicit trapezoidal method.
Like the Euler method, the ME method only requires evaluation of f(x, y) at certain points (no higher derivatives).
An O(h^2) method; i.e. it is more accurate than the Euler method.
It is an example of a Runge-Kutta method.
This lecture is dedicated to the fourth-order Runge-Kutta (RK4) method.
We will:
define the method (using auxiliary values notation);
go through worked examples, highlighting improved accuracy over the Euler and ME methods;
review the method.
Runge-Kutta Methods
The Euler and modified Euler methods are Runge-Kutta methods of first- and second-order respectively.
Runge-Kutta methods are predictor-corrector methods.
Runge-Kutta methods do not require the computation of derivatives, unlike the second-order Taylor series method. Instead, they use evaluations of f(x, y).
When they agree with the Taylor series up to and including the term in h^p, the order of the Runge-Kutta method is then p.
One particular fourth-order Runge-Kutta method is very commonly used in practice and is often referred to as the Runge-Kutta method.
The fourth-order Runge-Kutta method (RK4)
The (fourth-order) Runge-Kutta method (RK4) for the first-order IVP
y' = f(x, y),  y(x_0) = y_0,
is given by the iterative rule
y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4),
with auxiliary values
k_1 = h f(x_n, y_n)                    (initial slope)
k_2 = h f(x_n + h/2, y_n + k_1/2)      (first estimate of middle slope)
k_3 = h f(x_n + h/2, y_n + k_2/2)      (second estimate of middle slope)
k_4 = h f(x_n + h, y_n + k_3)          (estimate of final slope)
Proving that this method is fourth-order is very tedious and is omitted.
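A minimal Python sketch of the RK4 rule (function name illustrative; the example is the IVP y' = x + y, y(0) = 0 used throughout these lectures):

    # Classical fourth-order Runge-Kutta (RK4) method
    def rk4(f, x0, y0, h, n_steps):
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            k1 = h * f(x0, y0)
            k2 = h * f(x0 + h / 2, y0 + k1 / 2)
            k3 = h * f(x0 + h / 2, y0 + k2 / 2)
            k4 = h * f(x0 + h, y0 + k3)
            y0 = y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6
            x0 = x0 + h
            xs.append(x0)
            ys.append(y0)
        return xs, ys

    # Example: y' = x + y, y(0) = 0, h = 0.2 on [0, 1]
    xs, ys = rk4(lambda x, y: x + y, 0.0, 0.0, 0.2, 5)
    print(ys[-1])   # about 0.718250, very close to the exact value e − 2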
RK4 for a first-order IVP: example
Use the RK4 method to find an approximate solution to the IVP
y' = x^2 + y^2,  y(1) = 1.5,    (12)
over the interval 1 ≤ x ≤ 1.2 with h = 0.1.
Solution:
For (12), we have f(x, y) = x^2 + y^2.
If x_n = x_0 + nh and y_n ≈ y(x_n), then x_0 = 1 and y_0 = 1.5.
The RK4 method then gives
y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4),
where k_1, k_2, k_3 and k_4 are calculated as follows (for the first iteration).
RK4 for a first-order IVP: example
k_1 = h f(x_0, y_0) =
k_2 = h f(x_0 + h/2, y_0 + k_1/2) =
k_3 = h f(x_0 + h/2, y_0 + k_2/2) =
k_4 = h f(x_0 + h, y_0 + k_3) =
RK4 for a first-order IVP: example
Putting everything together leads to
y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
Repeating the same process but using the newly updated values for the k_i and y_n gives
y_2 = y_1 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
    = 1.8954 + (1/6)(0.4802 + 1.1764 + 1.2232 + 0.7725)
    = 2.5041.
RK4 method: geometrical interpretation
The plot on the following slide shows how the RK4 trajectory is obtained graphically.
In this (different) example, x_0 = 0, y_0 = 0.01, h = 0.2, and:
k_1 = 0.02,  k_2 = 0.06,  k_3 = 0.10,  k_4 = 0.26.
This then gives y_1 = 0.01 + (1/6)(0.02 + 2(0.06 + 0.10) + 0.26) = 0.11.
The RK4 method uses these slope estimates (the k_i) to 'scout the nearby slope field' before deciding where to move to.
Deriving the specific form of the RK4 is beyond the scope of this course.
RK4 method: geometrical interpretation
[Figure: the RK4 slope estimates k_1 = 0.02, k_2 = 0.06, k_3 = 0.10, k_4 = 0.26 for x_0 = 0, h = 0.2, y_0 = 0.01, giving y_1 = 0.11.]
RK4 method: advantages and disadvantages
✓ Although quite laborious to do by hand, the method is easy to code and uses little memory.
✓ The method is very accurate for a sufficiently small step size.
✓ There is no need for a special starting procedure. (As is required for some higher order methods.)
✗ Frequent evaluations of f(x, y) (4 per step) can be time-consuming if f(x, y) is hard to compute.
✗ RK methods can be unstable; i.e. the numerical solution very rapidly diverges from the exact one and blows up if h is too large. (We will look into this in more detail in future lectures.)
The RK4 method: example 2
Use the RK4 method to approximate the solution to the IVP
y' = x + y,  y(0) = 0,    (13)
over the interval 0 ≤ x ≤ 1 with h = 0.2.
Solution:
For (13), we have f(x, y) = x + y.
The RK4 method is
y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4).
The RK4 method: example 2
The auxiliary values (for the first iteration) are:
k_1 = h f(x_0, y_0) = 0.2(0 + 0) = 0,
k_2 = h f(x_0 + h/2, y_0 + k_1/2) = 0.2(0.1 + 0) = 0.02,
k_3 = h f(x_0 + h/2, y_0 + k_2/2) = 0.2(0.1 + 0.01) = 0.022,
k_4 = h f(x_0 + h, y_0 + k_3) = 0.2(0.2 + 0.022) = 0.0444.
The RK4 method: example 2
This leads to
y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = (1/6)(0 + 0.04 + 0.044 + 0.0444) = 0.021400.
The iterations can be tabulated as usual:
n    x_n    y_n
0    0      0
1    0.2    0.021400
2    0.4    0.091818
3    0.6    0.222106
4    0.8    0.425520
5    1.0    0.718250
The RK4 method example 2
Again, using the exact solution y = e^{x_n} − x_n − 1, we can find the errors at each x_n.
n xn yn y (xn ) |yn − y (xn )|
0 0 0 0 0
1 0.2 0.021400 0.021403 0.000003
2 0.4 0.091818 0.091825 0.000007
3 0.6 0.222106 0.222119 0.000012
4 0.8 0.425520 0.425541 0.000020
5 1.0 0.718250 0.718282 0.000031
The RK4 method: example 2
As the table below shows, the RK4 approximation to the solution of (13) is clearly much closer to the exact solution than the approximations obtained using the Euler (E) and modified Euler (ME) methods.
n    x_n    y(x_n)    |y_n^E − y(x_n)|    |y_n^ME − y(x_n)|    |y_n^RK4 − y(x_n)|
0    0      0         0                   0                    0
1    0.2    0.021403  0.021               0.0014               0.000003
2    0.4    0.091825  0.052               0.0034               0.000007
3    0.6    0.222119  0.094               0.0063               0.000012
4    0.8    0.425541  0.152               0.0102               0.000020
5    1.0    0.718282  0.230               0.0156               0.000031
See this interactively here.
The RK4 method: error convergence plot
The following error convergence plot shows an error comparison between the Euler, modified Euler (ME) and RK4 methods for (13).
The fourth-order nature of the RK4 method is revealed; the gradient of the RK4 line is 4.
[Figure: log10(|y(1) − y_N|) against log10(h) for the Euler, ME and RK4 methods, with O(h), O(h^2) and O(h^4) reference lines.]
The RK4 method: now you try!
Find an approximate solution to the first-order IVP
y' = x^2 y,  y(0) = 1,    (14)
on the interval 0 ≤ x ≤ 0.5, using the RK4 method with h = 0.25. Calculate the error at each step and compare the global error to that of the ordinary Euler and ME methods (from example 2 in Lecture 6).
Solution:
k_1 = h f(x_n, y_n) =
k_2 = h f(x_n + h/2, y_n + k_1/2) =
k_3 = h f(x_n + h/2, y_n + k_2/2) =
k_4 = h f(x_n + h, y_n + k_3) =
The RK4 method: example 3
First iteration: k1 =
k2 =
k3 =
k4 =
y1 =
Second iteration: k1 =
k2 =
k3 =
k4 =
y2 =
The RK4 method: example 3
The exact solution in this case is y(x) = e^{x^3/3}.
x_n    y_n^RK4    y(x_n)    |y_n^RK4 − y(x_n)|
0
0.25
0.5
At x = 0.5, the global errors are:
0.02692 for the forward Euler method.
0.00513 for the ME method.
            for the RK4 method.
See this graphically here.
The RK4 method reigns supreme!
Lecture 7 summary
The (fourth-order) Runge-Kutta method (RK4) for the IVP
y' = f(x, y),  y(x_0) = y_0,
is given by the iterative rule y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4),
with auxiliary values
k_1 = h f(x_n, y_n),              k_2 = h f(x_n + h/2, y_n + k_1/2),
k_3 = h f(x_n + h/2, y_n + k_2/2),  k_4 = h f(x_n + h, y_n + k_3).
The method:
✓ is explicit.
✓ is an O(h^4) method.
✓ doesn't require any higher order derivatives of f(x, y).
✗ can be unstable if h is too large.
Overall, this method is superior to the forward Euler, Taylor series and modified Euler methods.
Lecture 7 optional feedback question
Consider the IVP from the previous homework:
y' = x y^2,  y(0) = 1.
(a) Apply the RK4 method to approximate the solution to the IVP on 0 ≤ x ≤ 0.5, with h = 0.25.
(b) Compute the error of the approximation at each mesh point and compare them to those from the Euler and ME methods (homework from lecture 6).
Lecture 8 Solving second-order ODEs and systems of first-order ODEs numerically
Last time, we:
defined the RK4 method (using auxiliary values notation);
used the RK4 method to solve IVPs, highlighting improved accuracy over the Euler and ME methods;
reviewed the method.
Today, we will:
learn how to solve systems of first-order IVPs numerically;
learn how to solve second-order IVPs numerically.
Motivation
So far, we have only learned how to solve scalar, first-order IVPs.
Many interesting problems in chemical engineering involve first-order IVPs for multiple coupled (dependent) variables, e.g. chemical reactions.
Other interesting problems involve second-order IVPs, e.g. control theory.
We therefore need to be able to solve systems of first-order IVPs, and second-order IVPs.
We start with the simpler case: systems of first-order IVPs.
A system of 1st-order ODEs
Suppose we have m coupled first-order IVPs as follows:
y_1'(x) = f_1(x, y_1, y_2, ..., y_m),   y_1(0) = y_{1,0},
y_2'(x) = f_2(x, y_1, y_2, ..., y_m),   y_2(0) = y_{2,0},
...
y_m'(x) = f_m(x, y_1, y_2, ..., y_m),   y_m(0) = y_{m,0}.
We can write this system in vector form
y'(x) = f(x, y),  y(0) = y_0,
where y = (y_1, y_2, ..., y_m)^T, f = (f_1, f_2, ..., f_m)^T and y_0 = (y_{1,0}, y_{2,0}, ..., y_{m,0})^T.
We essentially replace the scalar y with the vector y, and the scalar f with the vector f.
A system of 1st-order ODEs: Euler method
In vector form, the Euler method looks like
y_{n+1} = y_n + h f(x_n, y_n),
where y_n = (y_{1,n}, y_{2,n}, ..., y_{m,n})^T.
y_n is a vector containing our estimates for each variable y_1, y_2, ..., y_m evaluated at x_n; i.e.
y_n ≈ y(x_n).
This is the multi-variable version of y_n ≈ y(x_n).
Let's look at an example of the simplest case: a pair of ODEs.
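A sketch of the vector Euler method using NumPy arrays (names illustrative; the example system is the pair y' = z, z' = x^2 − 9y from the next example):

    import numpy as np

    # Forward Euler for a system y' = f(x, y), y(0) = y0, with y a vector
    def forward_euler_system(f, x0, y0, h, n_steps):
        xs, ys = [x0], [np.asarray(y0, dtype=float)]
        for _ in range(n_steps):
            ys.append(ys[-1] + h * f(x0, ys[-1]))   # y_{n+1} = y_n + h f(x_n, y_n)
            x0 = x0 + h
            xs.append(x0)
        return xs, ys

    # Example: y' = z, z' = x^2 - 9y, with y(0) = 0, z(0) = 1
    f = lambda x, Y: np.array([Y[1], x**2 - 9 * Y[0]])
    xs, ys = forward_euler_system(f, 0.0, [0.0, 1.0], 0.05, 6)
    print(ys[-1])   # approximations to (y(0.3), z(0.3))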
A system of 1st-order ODEs: Euler method
Consider the variables of interest y(x) and z(x).
Use Euler's method to solve the IVP
y' = z = f_1(x, y, z),
z' = x^2 − 9y = f_2(x, y, z),
with initial conditions y(0) = 0 and z(0) = 1, on the interval 0 ≤ x ≤ 0.3 with step size h = 0.05.
Solution:
Let y_n ≈ y(x_n) and z_n ≈ z(x_n). Write the system in vector form:
(y, z)'(x) = (z, x^2 − 9y)^T,  (y(0), z(0))^T = (0, 1)^T.
The Euler method becomes
(y_{n+1}, z_{n+1})^T = (y_n, z_n)^T + h (z_n, x_n^2 − 9y_n)^T.
A system of 1st-order ODEs: Euler method
Carrying out the first iteration gives
(y_1, z_1)^T = (y_0, z_0)^T + h (z_0, x_0^2 − 9y_0)^T = (0, 1)^T + 0.05 (1, 0)^T = (0.05, 1)^T.
Carrying out the second iteration gives
(y_2, z_2)^T = (y_1, z_1)^T + h (z_1, x_1^2 − 9y_1)^T = (0.05, 1)^T + 0.05 (1, −0.4475)^T = (0.1, 0.977625)^T.
A system of 1st-order ODEs: Euler method
As before, it is helpful to put the iterative results in a table.
n    x_n    y_n       z_n
0    0      0         1
1    0.05   0.05      1
2    0.10   0.10      0.977625
3    0.15   0.148881  0.933125
4    0.20   0.195537  0.867253
5    0.25   0.238900  0.781262
6    0.30   0.277963  0.676881
Doing this by hand is slow and tedious, but it is (fairly) easy to adapt our scalar IVP Python programs for y and f to deal with vectors y and f.
A system of 1st-order ODEs: Euler method
The exact solution for y(x) is
y(x) = (1/9) x^2 + (1/3) sin(3x) + (2/81) cos(3x) − 2/81.
For a variety of h, we can compute the error at x = 1 and observe how this error behaves, noticing that y(1) = 0.1090154966.
h        y_N            |y(1) − y_N|
1/128    0.1099709981   9.56 × 10^{-4}
1/256    0.1094419772   4.26 × 10^{-4}
1/512    0.1092162129   2.01 × 10^{-4}
1/1024   0.1091127603   9.73 × 10^{-5}
1/2048   0.1090633594   4.79 × 10^{-5}
1/4096   0.1090392363   2.37 × 10^{-5}
Here, N = 1/h, so that x_N = 1.
The error roughly halves when h is halved; we have O(h) convergence.
A system of 1st-order ODEs: Euler method
Below is an error convergence plot, showing that we indeed have O(h) convergence.
[Figure: log10(|y(1) − y_N|) against log10(h), with an O(h) reference line.]
A system of 1st-order ODEs: Euler method
Here is a plot of our approximations compared to the exact solutions for 0 ≤ x ≤ 0.3. The solution for z(x) can be obtained by differentiating y(x) (since y' = z).
[Figure: Euler approximations to y(x) and z(x) compared with the exact solutions on 0 ≤ x ≤ 0.3.]
As expected of an O(h) method, the approximations quickly become poor as x increases further, due to the accumulating local truncation errors.
A system of 1st-order ODEs: modified Euler method
The modified Euler method in vector form is
y_{n+1} = y_n + (1/2)(k_1 + k_2),
where
k_1 = h f(x_n, y_n)  and  k_2 = h f(x_{n+1}, y_n + k_1).
If y = (y, z)^T, then we can write k_1 and k_2 in component form as
k_1 = (k_{1y}, k_{1z})^T  and  k_2 = (k_{2y}, k_{2z})^T.
A system of 1st-order ODEs: modified Euler method
So for the pair of ODEs
y'(x) = f_1(x, y, z),
z'(x) = f_2(x, y, z),
we have
(k_{1y}, k_{1z})^T = (h f_1(x_n, y_n, z_n), h f_2(x_n, y_n, z_n))^T,
(k_{2y}, k_{2z})^T = (h f_1(x_{n+1}, y_n + k_{1y}, z_n + k_{1z}), h f_2(x_{n+1}, y_n + k_{1y}, z_n + k_{1z}))^T.
Modified Euler method: now you try!
Use the modified Euler method to solve the IVP from the previous example:
y' = z = f_1(x, y, z),   y(0) = 0,
z' = x^2 − 9y = f_2(x, y, z),   z(0) = 1,
on the interval 0 ≤ x ≤ 0.1 with step size h = 0.05.
Solution:
For this system of IVPs, we have
k_1 = (k_{1y}, k_{1z})^T = (h z_n, h(x_n^2 − 9y_n))^T,
k_2 = (k_{2y}, k_{2z})^T = (h(z_n + k_{1z}), h(x_{n+1}^2 − 9(y_n + k_{1y})))^T.
Solution
First iteration:
k_1 =
k_2 =
y_1 = y_0 + (1/2)(k_1 + k_2) =
Second iteration:
k_1 =
k_2 =
y_2 = y_1 + (1/2)(k_1 + k_2) =
Second-order Taylor method
Recap: For a scalar IVP, the second-order Taylor method is
y_{n+1} = y_n + h f(x_n, y_n) + (h^2/2) [∂f/∂x (x_n, y_n) + f(x_n, y_n) ∂f/∂y (x_n, y_n)].
The second-order Taylor method in vector form is
y_{n+1} = y_n + h f(x_n, y_n) + (h^2/2) [∂f/∂x (x_n, y_n) + J(x_n, y_n) f(x_n, y_n)],
where J is the Jacobian of f.
If y = (y, z)^T, then J is given by
J = [ ∂f_1/∂y  ∂f_1/∂z ;  ∂f_2/∂y  ∂f_2/∂z ].
Second-order Taylor method for a pair of IVPs
Suppose we have the following system of two IVPs:
y' = f_1(x, y, z),   y(x_0) = y_0,
z' = f_2(x, y, z),   z(x_0) = z_0.
The iterative rule for the second-order Taylor method becomes
y_{n+1} = y_n + h f_1 + (h^2/2) [∂f_1/∂x + f_1 ∂f_1/∂y + f_2 ∂f_1/∂z],
z_{n+1} = z_n + h f_2 + (h^2/2) [∂f_2/∂x + f_1 ∂f_2/∂y + f_2 ∂f_2/∂z],
where f_1, f_2 and their partial derivatives are evaluated at (x_n, y_n, z_n).
Second-order IVPs
We have seen how to solve systems of first-order IVPs. Now what about solving a second-order IVP?
Define a new variable equal to the derivative of the original dependent variable.
Re-write the original ODE in terms of the new variable where possible.
Example: y'' + 4y' + 5y = e^{−x}.
Define z = y', giving z' = y''. Substitute these into the ODE: z' + 4z + 5y = e^{−x}.
We now have two coupled first-order ODEs:
y' = z,
z' = e^{−x} − 4z − 5y.
We can then solve this system like in the previous example to find y(x); a short sketch is given below.
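A self-contained Python sketch of this reduction, solved with the vector Euler method; the initial conditions y(0) = 1, y'(0) = 0 are assumed purely for illustration, since the slide does not specify any:

    import numpy as np

    # y'' + 4y' + 5y = e^{-x}, rewritten as the first-order system y' = z, z' = e^{-x} - 4z - 5y
    def f(x, Y):
        y, z = Y
        return np.array([z, np.exp(-x) - 4.0 * z - 5.0 * y])

    # Forward Euler on the system (assumed initial conditions, for illustration only)
    x, Y, h = 0.0, np.array([1.0, 0.0]), 0.05
    for _ in range(20):                 # integrate up to x = 1
        Y = Y + h * f(x, Y)             # vector Euler step
        x = x + h
    print(Y[0])                         # approximation to y(1)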
Converting a 2nd-order IVP into a pair of 1st-order IVPs
You try first!
Convert the following second-order IVP into a pair of first-order IVPs:
y'' + y y' = cos(x).
Solution:
Define z = y', giving z' = y''. Substitute these into the ODE: z' + y z = cos(x).
We now have two coupled first-order ODEs:
y' = z,
z' = cos(x) − y z.
Lecture 8 summary
We can solve a system of first-order IVPs numerically using any of the numerical methods we have considered so far. We just replace the scalar y with the vector y, and the scalar f with the vector f.
To solve a second-order IVP, we must first convert it into a system of first-order IVPs:
Define a new variable equal to the derivative of the original dependent variable.
Re-write the original ODE in terms of the new variable where possible.
Lecture 8 optional feedback question
(a) Use the second-order Taylor method to solve this IVP from earlier in the lecture:
y' = z = f_1(x, y, z),   y(0) = 0,
z' = x^2 − 9y = f_2(x, y, z),   z(0) = 1,
on the interval 0 ≤ x ≤ 0.1 with h = 0.05.
(b) The exact solution for y(x) is
y(x) = (1/9) x^2 + (1/3) sin(3x) + (2/81) cos(3x) − 2/81.
Compute the absolute error in y(x) at each step, and compare them to those of the Euler method (that were obtained earlier in this lecture).