
CUST 2023-2024

Maths for Physics 2

Session 3

Numerical integration
Summary

Here we discuss some methods that can be used in order to obtain numerical approximations of integrals that we are not able to compute analytically. The methods
that are discussed here are all related to the familiar idea that integrals correspond
to the area under the curve representative of the function to integrate. We discuss
in particular the rectangle rule (closely related to the so-called Riemann sum and
Riemann integral) and the trapezoidal rule.

Contents

1 Approximation of integrals and the “big O” notation O
  1.1 General idea
  1.2 The big O notation O
  1.3 Illustration of the big O notation: Taylor series

2 General problem for numerical integration

3 The rectangle rule: Riemann sum and Riemann integral
  3.1 Riemann sum and Riemann integral
  3.2 Error of the rectangle rule

4 The trapezoidal rule
Chapter 1

Approximation of integrals and the “big O” notation O

1.1 General idea


Differentiation and integration routinely appear in the daily life of engineers and
scientists. Differentiation is easy: if we are given a function f (x), we can always, in
principle, use the basic rules of differentiation (product rule, chain rule) to get the
derivative f ′ (x). Even if the result is complicated, nothing forbids this in principle.
However, the situation is drastically different regarding integration: if we are given
a function f (x), it may be the case that we are actually not able to compute the
integral of f (for instance because we are not able to find an antiderivative of f ).
As a matter of fact, the vast majority of integrals that are encountered in real-life
problems are unfortunately impossible to compute analytically (i.e. to express in a
closed form involving elementary functions). A crucial question is thus: what can
we do? A possible alternative is to numerically compute such integrals.
Many methods of numerical integration rely on the intuitive idea that the integral
of a function f can be viewed as the area under the curve representative of f : the
idea is then to approximate this area by a sum of areas of simple geometrical figures,
whose areas are easy to compute. The methods that we consider here work with the
simplest geometrical figures that we can think of:

• rectangles, which leads to the so-called rectangle rule and the closely related
Riemann integral, which hence approximates an integral by a sum of areas of
rectangles (such a sum being called a Riemann sum);

• trapezoids, which leads to the so-called trapezoidal rule, which hence approximates an integral by a sum of areas of trapezoids.

We could of course also think about other, and potentially more precise, geometrical figures, but here we’ll restrict ourselves to rectangles and trapezoids, just to see the general idea.
A common feature of the various methods that can be used to numerically compute an integral is that any such method is an approximation of the actual value of the integral: this hence means that any such method necessarily comes with a certain error. In view of obtaining a quantitative estimate of such an error, a particular notation will prove to be particularly useful: the so-called big O notation O, which we discuss now.

1.2 The big O notation O


Quite generally, the big O notation O is adequately used whenever we want to
explicitly write down the limiting behavior of a function f (x) as the variable x takes
on values close to some value x0 .
Mathematically, the big O notation is defined as follows. Let’s take two functions
g1 (x) and g2 (x), with x ∈ R. Let’s now consider a particular value x0 ∈ R, such that
both functions g1 and g2 have a limit as x → x0 , i.e.

lim_{x→x0} g1(x) = g1(x0) and lim_{x→x0} g2(x) = g2(x0) exist . (1.1)

We then say that g1 is asymptotically bounded by g2 as x → x0, and we denote this by

g1 (x) = O [g2 (x)] as x → x0 , (1.2)

if there exist two constants δ, K > 0 such that, for any x such that 0 < |x − x0 | < δ,
we have

|g1 (x)| ⩽ K |g2 (x)| . (1.3)

Alternatively, this definition (1.3) can also, in practice, be merely written as

lim_{x→x0} |g1(x)/g2(x)| ⩽ K , (1.4)

assuming of course that the ratio g1 /g2 is well defined in the neighborhood of x0 .

REMARK ON TERMINOLOGY AND NOTATION: instead of saying “g1 is asymptotically bounded by g2 as x → x0”, very often, we’ll rather say, more informally, that “g1 is a big O of g2 as x → x0” or that “g1 scales as g2 as x → x0”.
Furthermore, very often, when it is clear from the context, we’ll merely write (1.2)
as g1 (x) = O [g2 (x)], without specifying “as x → x0 ”. Typically we work in the limit
cases x0 = 0 or x0 = ∞.

A trivial example of functions g1 and g2 that satisfy the above is of course when
they are merely proportional, i.e. g1 (x) = αg2 (x) for any x, with some constant α.
Then (1.3) is trivially satisfied for K = |α| and for any x.
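The definition (1.4) can be checked numerically: as a minimal sketch, take the illustrative (hypothetical) example functions g1(x) = x² + 5x³ and g2(x) = x² near x0 = 0, and watch the ratio |g1/g2| stay bounded.

```python
# Numerical sanity check of the big-O definition (1.4):
# g1(x) = x^2 + 5x^3 should be O(x^2) as x -> 0, i.e. the ratio
# |g1(x)/g2(x)| should stay bounded as x approaches x0 = 0.

def g1(x):
    return x**2 + 5 * x**3

def g2(x):
    return x**2

# Approach x0 = 0 from the right and watch the ratio.
for k in range(1, 6):
    x = 10.0**(-k)
    ratio = abs(g1(x) / g2(x))
    print(f"x = {x:.0e}, |g1/g2| = {ratio:.6f}")
# The ratio tends to 1, so any constant K > 1 works in (1.3),
# and hence g1(x) = O[g2(x)] = O[x^2] as x -> 0.
```

Any choice of pairs g1, g2 satisfying (1.3) would behave the same way: the ratio settles below some finite K as x → x0.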
The big O notation is in particular very useful whenever we deal with approximations of functions around certain values: the error made by making the approximation can typically be written as a big O term, as we now discuss on the particular example of the Taylor series.

1.3 Illustration of the big O notation: Taylor series


A typical example of an approximation of the value of a function around a certain
value is provided by the Taylor series (which we’ll also use later in connection with
our numerical approximations of integrals). Indeed, let’s write the Taylor series of
a function f (x) around a point x0 , for some x0 ∈ R: we have

f(x) = f(x0) + Σ_{n=1}^∞ [f^(n)(x0)/n!] (x − x0)^n , (1.5)

where f (n) (x0 ) denotes the n-th derivative of f evaluated at the point x = x0 , i.e.

f^(n)(x0) ≡ d^n f/dx^n |_{x=x0} . (1.6)

Now, let’s truncate this series at, say, the first order: by doing this we get an
approximate value of f (x), that is we write

f (x) ≈ f (x0 ) + f ′ (x0 )(x − x0 ) . (1.7)

While the full series (1.5) yields an exact value for f (x), the truncated series (1.7)
only gives an approximate value of f (x): by truncating the original series (1.5), we
make a certain error when we write (1.7). The question at this point is thus: what
is the error that we made in this case? Well, the error is here readily obtained, and
it is merely given by all the terms (x − x0)^n for n ⩾ 2 in (1.5). Therefore, if we call
g1 (x) the error that we make when we write (1.7), we have

g1(x) = Σ_{n=2}^∞ [f^(n)(x0)/n!] (x − x0)^n , (1.8)

so that the exact expression (1.5) of f reads

f (x) = f (x0 ) + f ′ (x0 )(x − x0 ) + g1 (x) . (1.9)

Therefore, let’s now ask the question: can we write the error g1 as a big O of
some other function g2 (x)? The answer is actually yes: setting the function g2 to be

g2(x) = (x − x0)^2 , (1.10)

we can then show that we have

g1(x) = O[g2(x)] = O[(x − x0)^2] . (1.11)

Note that it is implicitly understood that (1.11) is valid as x → x0, which is why we didn’t specify this explicitly.

Proof of (1.11): To show that we indeed have (1.11), we must check that the
two functions g1 (x) and g2 (x) defined by (1.8) and (1.10), respectively, satisfy the
definition (1.4) of the big O notation.
Therefore, let’s first compute the ratio g1/g2 here: in view of (1.8) and (1.10) we get

g1(x)/g2(x) = [1/(x − x0)^2] Σ_{n=2}^∞ [f^(n)(x0)/n!] (x − x0)^n = Σ_{n=2}^∞ [f^(n)(x0)/n!] (x − x0)^{n−2} ,

that is, with a change of index n → n′ = n − 2 in the series in the right-hand side,
g1(x)/g2(x) = Σ_{n′=0}^∞ [f^(n′+2)(x0)/(n′ + 2)!] (x − x0)^{n′} ,

and thus, relabeling the dummy index n′ merely as n and isolating the first term of
the series,

g1(x)/g2(x) = f^(2)(x0)/2 + Σ_{n=1}^∞ [f^(n+2)(x0)/(n + 2)!] (x − x0)^n . (1.12)

Using now the triangle inequality |α + β| ⩽ |α| + |β|, we get upon taking the
absolute value of (1.12)


|g1(x)/g2(x)| ⩽ |f^(2)(x0)/2| + | Σ_{n=1}^∞ [f^(n+2)(x0)/(n + 2)!] (x − x0)^n | . (1.13)

We can now take the limit x → x0 in (1.13), and we get


lim_{x→x0} |g1(x)/g2(x)| ⩽ lim_{x→x0} |f^(2)(x0)/2| + lim_{x→x0} | Σ_{n=1}^∞ [f^(n+2)(x0)/(n + 2)!] (x − x0)^n | . (1.14)

Let’s separately compute the two limits in the right-hand side of (1.14): since
f (2) (x0 ) is by construction a constant, we first have

lim_{x→x0} |f^(2)(x0)/2| = |f^(2)(x0)/2| ,

and then we have for the second limit in the right-hand side of (1.14)


lim_{x→x0} | Σ_{n=1}^∞ [f^(n+2)(x0)/(n + 2)!] (x − x0)^n | = 0 ,

since each term in the series vanishes for x = x0 . We hence get for (1.14)

lim_{x→x0} |g1(x)/g2(x)| ⩽ |f^(2)(x0)/2| , (1.15)

which hence readily shows that (1.4) is indeed satisfied if we merely take the positive
constant K to be

K = |f^(2)(x0)/2| . (1.16)

Therefore, by definition of the big O notation, the result (1.15) indeed ensures that
we can write (1.11). ■

Therefore, (1.11) shows that the general Taylor series (1.9) can be written as

f(x) = f(x0) + f′(x0)(x − x0) + O[(x − x0)^2] . (1.17)

Actually, more generally, it is easy to show along the same lines as above that the
Taylor series of f can be written as

f(x) = Σ_{n=0}^{N} [f^(n)(x0)/n!] (x − x0)^n + O[(x − x0)^{N+1}] , (1.18)

for an arbitrary integer N ⩾ 1.
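The scaling (1.18) can be observed numerically. As a minimal sketch (the choice f(x) = e^x around x0 = 0 with N = 1 is purely illustrative), the error of the first-order truncation should shrink like (x − x0)²: halving x should divide the error by roughly 4.

```python
import math

# Check that the error of the first-order Taylor approximation of
# f(x) = exp(x) around x0 = 0, namely exp(x) ~ 1 + x, is O(x^2):
# halving x should divide the error by roughly a factor of 4.

def taylor1_error(x):
    """Error of the first-order truncation of exp around x0 = 0."""
    return abs(math.exp(x) - (1.0 + x))

x = 0.1
while x > 1e-3:
    e1, e2 = taylor1_error(x), taylor1_error(x / 2)
    print(f"x = {x:.5f}, error = {e1:.3e}, error(x)/error(x/2) = {e1 / e2:.3f}")
    x /= 2
# The printed ratios approach 4, consistent with an O(x^2) error term.
```

Truncating at order N instead would give ratios approaching 2^(N+1), in line with the O[(x − x0)^{N+1}] remainder in (1.18).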


Comparing the first-order approximation (1.7) of f with its expression (1.17), we hence say that the error that we make when we truncate the Taylor series of f at the first order “is a big O of (x − x0)^2”, “is of order (x − x0)^2”, “scales as (x − x0)^2” or “goes like (x − x0)^2”.
In regard to Taylor series, let’s also state here some useful properties of the
big O notation O. We can for instance write

(x − x0)^n O[(x − x0)^{n′}] = O[(x − x0)^{n+n′}] , n, n′ ∈ Z . (1.19)

Furthermore, we can also typically write (somewhat abusively though)

O[(x − x0)^n] + O[(x − x0)^n] = O[(x − x0)^n] , (1.20)

and

α O[(x − x0)^n] = O[(x − x0)^n] , (1.21)

for a constant α.

From the above example of the Taylor series of f, and how the error term can be expressed as a big O of something, it should be clear that the argument of our notation O when we write

g1(x) = O[g2(x)]

gives us the x-dependence of the leading order term of g1 as x → x0 [namely (x − x0)^2 for (1.8)]. That is, it tells us how exactly g1 behaves if x is sufficiently close to x0.
The big O notation O is used whenever approximations of some kind are needed, since it typically allows us to keep quantitative track of the error that we make with our approximations. And we do a lot of approximations in physics and engineering1 !

1 Which is also typically what makes mathematicians sometimes/often angry about our way of doing maths!
Chapter 2

General problem for numerical integration

The general problem that we are interested in here is to numerically compute an integral: therefore, in this chapter we set the notations that we use in the subsequent chapters.
Our general aim is to compute the integral of a function f (x) over a finite interval
x ∈ [a, b], with a < b some finite real numbers. To this end, we introduce a so-called
partition P of the interval [a, b]. Such a partition is a set of N + 1 discrete points (with N ∈ N), which we denote by x0, x1, . . . , xN, i.e.

P ≡ {x0 , x1 , . . . , xN } ≡ {xj}_{0⩽j⩽N} , (2.1)

where the xj are required to satisfy

x0 < x1 < . . . < xN , x0 = a and xN = b . (2.2)

Furthermore, we assume for simplicity that the partition P is regular, which merely
means that the points xj are regularly spaced: that is, we have

xj+1 = xj + ∆x , ∀j , (2.3)

where ∆x is a constant independent of j.


We can thus write for such a regular partition P

x1 = x0 + ∆x ,

x2 = x1 + ∆x = (x0 + ∆x) + ∆x = x0 + 2∆x ,


x3 = x2 + ∆x = (x0 + 2∆x) + ∆x = x0 + 3∆x ,

and so on, so that we have in general

xj = x0 + j∆x , ∀j . (2.4)

This allows us to express ∆x in terms of a, b and N: indeed, writing (2.4) for j = N yields

xN = x0 + N ∆x ,

that is, since in view of (2.2) we have x0 = a and xN = b,

b = a + N ∆x ,

that is

N ∆x = b − a ,

and thus

∆x = (b − a)/N . (2.5)
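The construction (2.1)–(2.5) of a regular partition translates directly into code. A minimal sketch (the interval [0, 1] and N = 4 are arbitrary illustrative choices):

```python
def regular_partition(a, b, N):
    """Return the points x_j = a + j*dx, j = 0, ..., N, of a regular
    partition of [a, b] into N subintervals, with dx = (b - a)/N."""
    dx = (b - a) / N                              # equation (2.5)
    return [a + j * dx for j in range(N + 1)]     # equation (2.4)

points = regular_partition(0.0, 1.0, 4)
print(points)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Note that the partition has N + 1 points but only N subintervals, and that by construction x0 = a and xN = b, as required by (2.2).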
The idea is now to approximate the integral ∫_a^b dx f(x) by a discrete sum of the form

∫_a^b dx f(x) ≈ Σ_{j=0}^{N−1} fj ∆x , (2.6)

or, if we also explicitly write the error E that we make when we do such an approximation,

∫_a^b dx f(x) = Σ_{j=0}^{N−1} fj ∆x + E , (2.7)

where the coefficients fj in both (2.6) and (2.7) must be determined. The precise
expression of these coefficients fj , as well as the expression of the error E, will depend
on the particular method that we use in order to write the approximation (2.6). In
the next chapters we discuss two different possible methods.
Chapter 3

The rectangle rule: Riemann sum and Riemann integral

Here we approximate the integral ∫_a^b dx f(x), i.e. the area under the representative curve of f, by a sum of areas of rectangles.

3.1 Riemann sum and Riemann integral


Considering the regular partition P introduced in the previous chapter, here we’ll consider rectangles with base xj+1 − xj, j = 0, 1, . . . , N − 1. In view of (2.3), each of these rectangles hence has the same base ∆x.
Now, while the base of these rectangles is fixed and common to all rectangles, we
have some freedom regarding their heights. Indeed, let’s consider for concreteness
the rectangle that spans the interval between the two points xj and xj+1 . The
possible choices to fix the height of this rectangle are

• f(xj), i.e. the value of f at the left endpoint of the subinterval [xj, xj+1];

• f(xj+1), i.e. the value of f at the right endpoint of the subinterval [xj, xj+1];

• f(ξ), with ξ any point of the subinterval [xj, xj+1].

For concreteness, here we stick to one definite choice, and we take the height of our
rectangle to be the value f(xj) of f at the left endpoint of the interval. We do this for each
subinterval [xj , xj+1 ].
Therefore, let’s now write down the area Aj of our rectangle on the subinterval
[xj , xj+1 ]: we have of course

Aj = base × height = (xj+1 − xj) f(xj) ,


that is in view of (2.3)

Aj = f (xj )∆x , ∀j . (3.1)


b
In this case, the integral a dxf (x) is thus approximated by the sum of the areas Aj
of all these rectangles, and we have
 b N
X −1 N
X −1
dx f (x) ≈ Aj = f (xj )∆x . (3.2)
a j=0 j=0

We would get similar approximations with any of the other possible choices for the height of the rectangles.
The discrete sum that approximates the integral in (3.2) is called a Riemann
sum. A sum of this type (i.e. a sum of areas of rectangles) was indeed used by
Bernhard Riemann in order to build the first rigorous definition of the integral of a
function on an interval [a, b]: this approach to integration hence defines the so-called
Riemann integral.
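The left-point rectangle rule (3.2) translates almost line by line into code. A minimal sketch, tested on the arbitrary illustrative example ∫_0^1 x² dx = 1/3:

```python
def rectangle_rule(f, a, b, N):
    """Approximate the integral of f over [a, b] by the Riemann sum (3.2),
    using N rectangles of width dx with left-endpoint heights f(x_j)."""
    dx = (b - a) / N
    total = 0.0
    for j in range(N):            # j = 0, ..., N-1
        x_j = a + j * dx          # left endpoint of [x_j, x_{j+1}]
        total += f(x_j) * dx      # area A_j = f(x_j) * dx, equation (3.1)
    return total

# Example: integral of x^2 over [0, 1], exact value 1/3.
approx = rectangle_rule(lambda x: x**2, 0.0, 1.0, 1000)
print(approx)  # close to 0.3333..., with an O(dx) error
```

Using the right endpoint f(xj+1), or any point ξ in [xj, xj+1], would only change the line computing `x_j`; all such choices define valid Riemann sums.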
Since (3.2) is an approximation, we hence now ask the question: what is the
error that we make when we write (3.2)?

3.2 Error of the rectangle rule


Let’s now determine the error E that we make when we make the approximation (3.2). We can do this by means of the Taylor series: indeed, note first that we can write, since in view of (2.2) we have x0 = a and xN = b,

∫_a^b dx f(x) = ∫_{x0}^{x1} dx f(x) + ∫_{x1}^{x2} dx f(x) + . . . + ∫_{xN−1}^{xN} dx f(x) ,

that is

∫_a^b dx f(x) = Σ_{j=0}^{N−1} ∫_{xj}^{xj+1} dx f(x) , (3.3)

which is exact so far. Now, since we expect the approximation (3.2) to be better the smaller ∆x = xj+1 − xj is, this suggests writing down a Taylor series of f around xj for each integral in the right-hand side of (3.3). We have

f(x) = f(xj) + Σ_{n=1}^∞ [f^(n)(xj)/n!] (x − xj)^n ,

so that we get, upon integrating term by term,

∫_{xj}^{xj+1} dx f(x) = ∫_{xj}^{xj+1} dx [ f(xj) + Σ_{n=1}^∞ [f^(n)(xj)/n!] (x − xj)^n ]

   = ∫_{xj}^{xj+1} dx f(xj) + Σ_{n=1}^∞ [f^(n)(xj)/n!] ∫_{xj}^{xj+1} dx (x − xj)^n

   = f(xj) [x]_{xj}^{xj+1} + Σ_{n=1}^∞ [f^(n)(xj)/n!] [(x − xj)^{n+1}/(n + 1)]_{xj}^{xj+1}

   = f(xj)(xj+1 − xj) + Σ_{n=1}^∞ [f^(n)(xj)/n!] [1/(n + 1)] [(xj+1 − xj)^{n+1} − (xj − xj)^{n+1}] ,

that is, since we have by definition xj+1 − xj = ∆x,

∫_{xj}^{xj+1} dx f(x) = f(xj) ∆x + Σ_{n=1}^∞ [f^(n)(xj)/(n + 1)!] ∆x^{n+1} . (3.4)

Substituting now (3.4) into (3.3) hence yields

∫_a^b dx f(x) = Σ_{j=0}^{N−1} [ f(xj) ∆x + Σ_{n=1}^∞ [f^(n)(xj)/(n + 1)!] ∆x^{n+1} ] ,

that is

∫_a^b dx f(x) = Σ_{j=0}^{N−1} f(xj) ∆x + E , (3.5)

where E is defined by

E ≡ Σ_{j=0}^{N−1} Σ_{n=1}^∞ [f^(n)(xj)/(n + 1)!] ∆x^{n+1} . (3.6)

Comparing the approximation (3.2) of ∫_a^b dx f(x) with its exact expression (3.5) hence readily shows that the quantity E defined by (3.6) is nothing but the exact expression of the error that we make when we write the approximation (3.2).
While (3.6) gives the exact expression of the error, let’s now see if we can get a rough idea of what it looks like, namely how it scales with ∆x, that is, in other words, what the leading order in ∆x of E is. To do this, let’s use the big O notation O that we introduced in chapter 1 above. First, in view of what we did in section 1.3 regarding how we can write the remainder of a Taylor series as a O of the leading order term [see in particular (1.8) and (1.11)], here we can readily write something similar for the series (with respect to the index n) in the right-hand side of (3.6). Indeed, since this series has ∆x^2 as its leading order, we can write

Σ_{n=1}^∞ [f^(n)(xj)/(n + 1)!] ∆x^{n+1} = O(∆x^2) . (3.7)

Therefore, the leading order of each individual term in the sum over j in (3.6) is something proportional to ∆x^2: combining (3.6) with (3.7) hence allows us to write [by treating O(∆x^2) as something independent of the summation index j]

E = Σ_{j=0}^{N−1} Σ_{n=1}^∞ [f^(n)(xj)/(n + 1)!] ∆x^{n+1} = Σ_{j=0}^{N−1} O(∆x^2) = O(∆x^2) Σ_{j=0}^{N−1} 1 = N O(∆x^2) ,

and here we must be careful not to naively apply the rule (1.21), because here the factor N actually depends on ∆x in view of (2.5), namely N = (b − a)/∆x, so that we get

E = [(b − a)/∆x] O(∆x^2) ,

where we can now apply the rules (1.19) and (1.21) to get

E = [(b − a)/∆x] O(∆x^2) = (b − a) ∆x^{−1} O(∆x^2) = (b − a) O(∆x) = O(∆x) ,

and thus

E = O (∆x) . (3.8)

Therefore, the error E that we make when we make the approximation (3.2) scales as ∆x: this means that the smaller we choose ∆x, the better our approximation (3.2) becomes.
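The scaling (3.8) can be checked numerically: halving ∆x (i.e. doubling N) should roughly halve the error. A minimal sketch, using the arbitrary test integral ∫_0^1 x² dx = 1/3 with a left-endpoint Riemann sum:

```python
def rectangle_error(N):
    """Error of the left-endpoint rectangle rule for the integral of
    x^2 over [0, 1] (exact value 1/3), using N subintervals."""
    dx = 1.0 / N
    riemann_sum = sum((j * dx) ** 2 * dx for j in range(N))
    return abs(riemann_sum - 1.0 / 3.0)

for N in (10, 20, 40, 80):
    print(f"N = {N:3d}, dx = {1.0/N:.5f}, error = {rectangle_error(N):.6f}")
# Doubling N (halving dx) roughly halves the error, consistent
# with E = O(dx) in (3.8).
```

The error ratios are close to 2 but not exactly 2: the subleading O(∆x^2) terms of (3.6) are still visible at these modest values of N.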

REMARK: we cannot, however, make ∆x arbitrarily small in practice: indeed, computers always have their own limitations in the precision with which they can represent numbers. In particular, every computer arithmetic has a machine precision ϵ > 0, which means, roughly speaking, that 1 + δ is treated by the computer as equal to 1 for any |δ| < ϵ. In addition, we must always be careful about how the computer handles small and/or large numbers in algebraic operations such as multiplication and division: so-called round-off errors can quickly build up and propagate.
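This machine precision is easy to exhibit. A minimal sketch (in Python, floats are IEEE double precision, for which ϵ is about 2.2 × 10⁻¹⁶):

```python
import sys

# The machine precision eps of double-precision floats: roughly the
# smallest eps such that 1.0 + eps is distinguishable from 1.0.
eps = sys.float_info.epsilon
print(eps)  # about 2.22e-16

# Adding something much smaller than eps to 1.0 is simply lost:
print(1.0 + 1e-20 == 1.0)  # True: the tiny increment is rounded away
```

This is why, past a certain point, shrinking ∆x stops improving the result: the truncation error O(∆x) keeps decreasing, but the accumulated round-off error grows.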

Here we studied a first way of approximating an integral: by a sum of areas of rectangles. But of course rectangles are probably the simplest geometrical figures
that we could think of: can we get better approximations if we take other geometrical
figures?
Chapter 4

The trapezoidal rule

Let’s now approximate ∫_a^b dx f(x) not as a sum of areas of rectangles as we did in
the previous chapter, but rather as a sum of areas of trapezoids.
Let’s focus on the interval [xj, xj+1], for some j = 0, 1, . . . , N − 1. We then take this interval as the base of a trapezoid, whose upper side then connects the points (xj, f(xj)) and (xj+1, f(xj+1)), say for definiteness with f(xj) < f(xj+1) [though what we say below of course also applies to the case f(xj) > f(xj+1)]. The area Aj of this trapezoid is easily obtained by noting that it is the mere sum of the areas of a rectangle of height f(xj) and of a right triangle of height f(xj+1) − f(xj), that is

Aj = area(rectangle) + area(right triangle)
   = f(xj) ∆x + (1/2)[f(xj+1) − f(xj)] ∆x
   = [f(xj) + (1/2) f(xj+1) − (1/2) f(xj)] ∆x ,

that is

Aj = (1/2)[f(xj) + f(xj+1)] ∆x , ∀j . (4.1)
Therefore, the integral ∫_a^b dx f(x) is here approximated by the sum of the areas Aj of all these trapezoids, and we have

∫_a^b dx f(x) ≈ Σ_{j=0}^{N−1} Aj = Σ_{j=0}^{N−1} (1/2)[f(xj) + f(xj+1)] ∆x . (4.2)

This approximation (4.2) of the integral ∫_a^b dx f(x) is called the trapezoidal rule.
We can show that the error E that we make when we make the approximation (4.2) scales as ∆x^2, that is

E = O(∆x^2) . (4.3)
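The trapezoidal rule (4.2) can be sketched as follows (a minimal sketch; the test integral ∫_0^1 x² dx = 1/3 is an arbitrary illustrative choice):

```python
def trapezoidal_rule(f, a, b, N):
    """Approximate the integral of f over [a, b] by the sum (4.2) of
    trapezoid areas A_j = (1/2)*(f(x_j) + f(x_{j+1}))*dx."""
    dx = (b - a) / N
    total = 0.0
    for j in range(N):                  # j = 0, ..., N-1
        x_j = a + j * dx
        x_j1 = a + (j + 1) * dx
        total += 0.5 * (f(x_j) + f(x_j1)) * dx   # area A_j of (4.1)
    return total

approx = trapezoidal_rule(lambda x: x**2, 0.0, 1.0, 1000)
print(approx)  # close to 0.3333..., with an O(dx^2) error
```

Note that, compared to the rectangle rule, each interior point f(xj) is now evaluated twice (once for each neighboring trapezoid); a common optimization is to evaluate f once per point and weight the endpoints by 1/2.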

Comparing (4.3) with (3.8) hence readily shows that the error of the trapezoidal rule is one order in ∆x better than that of the rectangle rule: namely, the error is of order ∆x^2 for the trapezoidal rule rather than of order ∆x for the rectangle rule.
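The different error orders, O(∆x) of (3.8) versus O(∆x²) of (4.3), show up clearly in a side-by-side numerical comparison (a minimal sketch, again using the arbitrary test integral ∫_0^1 x² dx = 1/3):

```python
def compare_rules(N):
    """Return (rectangle-rule error, trapezoidal-rule error) for the
    integral of x^2 over [0, 1] (exact value 1/3), N subintervals."""
    def f(x):
        return x * x
    dx = 1.0 / N
    xs = [j * dx for j in range(N + 1)]
    rect = sum(f(xs[j]) * dx for j in range(N))
    trap = sum(0.5 * (f(xs[j]) + f(xs[j + 1])) * dx for j in range(N))
    exact = 1.0 / 3.0
    return abs(rect - exact), abs(trap - exact)

for N in (10, 100, 1000):
    e_rect, e_trap = compare_rules(N)
    print(f"N = {N:4d}: rectangle error = {e_rect:.2e}, trapezoid error = {e_trap:.2e}")
# Dividing dx by 10 divides the rectangle error by ~10 (O(dx)),
# but the trapezoid error by ~100 (O(dx^2)).
```

For the same number of function evaluations, the trapezoidal rule is thus dramatically more accurate, which is why it, rather than the rectangle rule, is the usual starting point in practice.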
