Numerical I Module-1

CHAPTER ONE

BASIC CONCEPTS IN ERROR ESTIMATION

1.1. Introduction

Numerical Analysis is the area of mathematics that creates, analyzes, and implements algorithms
for solving numerically the problems of continuous mathematics. Such problems generally
originate from real-world applications of algebra, geometry, and calculus, and they involve
variables that vary continuously; such problems occur throughout the natural sciences,
social sciences, engineering, medicine, and business.
During the past half century, the growth in power and availability of digital computers has led to
an increasing use of realistic mathematical models in science and engineering, and numerical
analysis of increasing sophistication has been needed to solve these more detailed models of
real-world problems.
Thus, to solve any real-life problem using numerical methods, the following three steps are
usually taken:
 Convert the real-life (or physical) problem into a mathematical model.
 Apply an appropriate numerical method that can solve the mathematical model, and
develop an algorithm for the method.
 Finally, implement the algorithm on computational tools (most commonly on
computers) to compute the required result.
Each of the above steps is usually exposed to error due to different assumptions and
limitations. Most numerical methods therefore give answers that are only approximations to the
desired true solution, and it is important to understand and, if possible, to estimate or bound the
resulting error. The study of errors is thus a central concern of numerical analysis.
This chapter examines the various sources and types of errors that may occur in a problem. The
representation of numbers in computers is examined, along with the error in computer arithmetic.
General results on the propagation of errors in a calculation are also considered. Finally, the
concepts of stability of algorithms and conditioning of problems are introduced and
illustrated.

1.2. Errors

One of the most important aspects of numerical analysis is the study of errors, because errors
can occur at any stage of the process of solving a problem numerically. By the error we simply
mean the difference between the true (or exact) value and the approximate value. Therefore,

Error = True value − Approximate value

Whenever we solve a problem using numerical analysis, errors will arise during the
calculations. To be able to deal with the issue of errors, we need to

 identify where the error is coming from,
 quantify the error, and
 minimize the error as per our needs.

1.2.1. Sources of Errors

Errors in the solution of a problem are due to the following sources:

i. To solve a physical problem, a mathematical model is formulated to describe
it; but the model does not represent the problem exactly, because different
assumptions and inexact initial data are used in the development of the model,
and as a result errors are induced.
ii. The numerical methods used to solve the mathematical models are often not
exact, because these methods replace an infinite process with a finite one; as a
consequence, errors arise.
iii. Computation of the final result is done using computational tools such as tables,
calculators, and computers (but mostly by implementing the algorithms of the
numerical methods in different computer packages). These tools have limited
storage; because of this, only a limited part of each result is retained, and thus
errors are induced.

1.2.2. Measuring Errors

Suppose x is the exact value and x̃ is the approximate value of x. The error incurred by
such an approximation can be measured in one of the following ways, depending on the accuracy
required.
a) True Error
The true error, denoted by E_t, is the difference between the true value (also called the exact value)
x and the approximate value x̃:

True Error = True value − Approximate value
E_t = x − x̃

Example 1.1: The derivative of a function f(x) at a particular value of x can be approximately
calculated by

f'(x) ≈ (f(x + h) − f(x)) / h

For f(x) = 7e^(0.5x) and h = 0.3, find
a) the approximate value of f'(2)
b) the true value of f'(2)
c) the true error for part (a)

Solution: a) f'(x) ≈ (f(x + h) − f(x)) / h
For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) − f(2)) / 0.3 = (f(2.3) − f(2)) / 0.3 = (7e^(0.5(2.3)) − 7e^(0.5(2))) / 0.3
      = (22.107 − 19.028) / 0.3 ≈ 10.265

b) The exact value of f'(2) can be calculated by using our knowledge of differential
calculus:
f(x) = 7e^(0.5x)
f'(x) = 7 × 0.5 × e^(0.5x) = 3.5e^(0.5x)
So the true value of f'(2) is
f'(2) = 3.5e^(0.5(2)) = 9.5140

c) The true error is calculated as
E_t = True value − Approximate value
    = 9.5140 − 10.265 = −0.75061

b) Absolute True Error
The absolute true error, denoted by E_A, is the absolute value of the true error; that is,
Absolute True Error = |True value − Approximate value|
E_A = |E_t| = |x − x̃|
Since, when we talk about error, the main focus lies on the magnitude of the error rather than
its sign, absolute errors are used in place of actual errors.
In Example 1.1 above, the absolute true error becomes
E_A = |−0.75061| = 0.75061

The magnitude of the true error does not show how bad the error is. An absolute true error of
E_A = 0.75061 may seem to be small, but if the function given in Example 1.1 were
f(x) = 7 × 10⁻⁶ e^(0.5x), the absolute true error in calculating f'(2) with h = 0.3 would be
E_A = 0.75061 × 10⁻⁶. This value of absolute true error is smaller, even though the two problems
are similar in that they use the same value of the function argument, x = 2, and the step size,
h = 0.3. This brings us to the definition of relative true error.
c) Relative True Error
The relative true error, denoted by ε_t, is defined as the ratio between the true error and the
true value:

ε_t = True Error / True Value = (x − x̃) / x
Example 1.2
For the problem in Example 1.1 above, find the relative true error at x = 2.
Solution
E_t = True value − Approximate value
    = 9.5140 − 10.265 = −0.75061
The relative true error is calculated as
ε_t = True Error / True Value = −0.75061 / 9.5140 = −0.078895
Relative true errors are also presented as percentages. For this example,
ε_t = −0.078895 × 100% = −7.8895%
Absolute relative true errors may also need to be calculated. In such cases,
|ε_t| = |−0.078895| = 0.078895 = 7.8895%

d) Approximate Error
In the previous section, we discussed how to calculate true errors. Such errors are calculated
only if true values are known. This is useful, for example, when one is checking whether a
program is in working order on examples where the true error is known. But mostly we do not
have the luxury of knowing true values, for why would one want to find approximate values if the
true values were known? So when we solve a problem numerically, we only have access to
approximate values, and we need to know how to quantify the error in such cases.
The approximate error, denoted by E_a, is defined as the difference between the present
approximation and the previous approximation:
Approximate Error = Present Approximation − Previous Approximation
Example 1.3
For the problem in Example 1.1 above, find the following:
a) f'(2) using h = 0.3
b) f'(2) using h = 0.15
c) the approximate error for the value of f'(2) for part (b)
Solution
a) The approximate expression for the derivative of a function is
f'(x) ≈ (f(x + h) − f(x)) / h.
For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) − f(2)) / 0.3 = (f(2.3) − f(2)) / 0.3 = (7e^(0.5(2.3)) − 7e^(0.5(2))) / 0.3
      = (22.107 − 19.028) / 0.3 ≈ 10.265

b) Repeat the procedure of part (a) with h = 0.15:
f'(x) ≈ (f(x + h) − f(x)) / h
For x = 2 and h = 0.15,
f'(2) ≈ (f(2 + 0.15) − f(2)) / 0.15 = (f(2.15) − f(2)) / 0.15 = (7e^(0.5(2.15)) − 7e^(0.5(2))) / 0.15
      = (20.50 − 19.028) / 0.15 ≈ 9.8799
c) So the approximate error E_a is
E_a = Present Approximation − Previous Approximation
    = 9.8799 − 10.265 = −0.38474


The magnitude of the approximate error does not show how bad the error is. An approximate error
of E_a = −0.38474 may seem to be small; but for f(x) = 7 × 10⁻⁶ e^(0.5x), the approximate error in
calculating f'(2) with h = 0.15 would be E_a = −0.38474 × 10⁻⁶. This value of approximate error
is smaller, even though the two problems are similar in that they use the same value of the
function argument, x = 2, and the step sizes h = 0.15 and h = 0.3. This brings us to the definition
of relative approximate error.

e) Relative Approximate Error
The relative approximate error, denoted by ε_a, is defined as the ratio between the approximate
error and the present approximation:
Relative Approximate Error = Approximate Error / Present Approximation
Example 1.4
For the problem in Example 1.1 above, find the relative approximate error in calculating f'(2)
using values from h = 0.3 and h = 0.15.
Solution
From Example 1.3, the approximate value of f'(2) is 10.265 using h = 0.3 and 9.8799
using h = 0.15.
E_a = Present Approximation − Previous Approximation
    = 9.8799 − 10.265 = −0.38474
The relative approximate error is calculated as
ε_a = Approximate Error / Present Approximation = −0.38474 / 9.8799 = −0.038942
Relative approximate errors are also presented as percentages. For this example,
ε_a = −0.038942 × 100% = −3.8942%
Absolute relative approximate errors may also need to be calculated. In this example,
|ε_a| = |−0.038942| = 0.038942, or 3.8942%

f) The Limiting Errors
In a numerical method that uses iteration, a user can calculate the relative approximate
error ε_a at the end of each iteration. The user may pre-specify a minimum acceptable tolerance,
called the pre-specified tolerance ε_s. If the absolute relative approximate error |ε_a| is less than
or equal to the pre-specified tolerance ε_s, that is, |ε_a| ≤ ε_s, then the acceptable error has been
reached and no more iterations are required.
Alternatively, one may pre-specify how many significant digits should be correct in
the answer. In that case, if one wants at least m significant digits to be correct in the answer,
then the absolute relative approximate error must satisfy |ε_a| ≤ (0.5 × 10^(2−m))%.

Example 1.5
If one chooses 6 terms of the Maclaurin series for e^x to calculate e^0.7, how many significant
digits can be trusted in the solution? Find the answer without knowing or using the exact
answer.
Solution: e^x = 1 + x + x²/2! + ...
Using 6 terms, we get the current approximation as
e^0.7 ≈ 1 + 0.7 + 0.7²/2! + 0.7³/3! + 0.7⁴/4! + 0.7⁵/5! = 2.0136
Using 5 terms, we get the previous approximation as
e^0.7 ≈ 1 + 0.7 + 0.7²/2! + 0.7³/3! + 0.7⁴/4! = 2.0122
The percentage absolute relative approximate error is
|ε_a| = |(2.0136 − 2.0122) / 2.0136| × 100% = 0.069527%
Since |ε_a| ≤ (0.5 × 10^(2−2))%, at least 2 significant digits are correct in the answer
e^0.7 ≈ 2.0136
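The reasoning of Example 1.5 can be automated; a Python sketch (not from the text; the helper name `maclaurin_exp` is illustrative):

```python
import math

def maclaurin_exp(x, n_terms):
    """Partial sum of the Maclaurin series for e^x using n_terms terms."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

present = maclaurin_exp(0.7, 6)    # about 2.0136
previous = maclaurin_exp(0.7, 5)   # about 2.0122
eps_a = abs((present - previous) / present) * 100   # percent, about 0.0695 %

# largest m with |eps_a| <= 0.5 * 10^(2-m) %
m = 0
while eps_a <= 0.5 * 10 ** (2 - (m + 1)):
    m += 1
print(m)   # at least 2 significant digits can be trusted
```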
Significant digits
Significant digits are important in showing the confidence one has in a reported number:
they signify how many of its digits are correct and meaningful.
Example 1.6
Give some examples of showing the number of significant digits.
Solution
a) 0.0459 has three significant digits
b) 4.590 has four significant digits
c) 4008 has four significant digits
d) 4008.0 has five significant digits
e) 1.079 × 10³ has four significant digits
f) 1.0790 × 10³ has five significant digits
g) 1.07900 × 10³ has six significant digits
Exercise 1.1:
1. Let x be the exact value of a number and x̌ its approximate value,
then find
a) the true error
b) the absolute true error
c) the relative true error
d) both the percentage absolute and relative errors

2. For the question in Example 1.1 above, compute the approximate value of f'(2) for a
smaller step size h, and then find
a) the approximate absolute error
b) the approximate relative error
c) and compare these errors with the corresponding errors in Examples 1.3 and 1.4
3. Let x and y be exact measurements, and let x̌ and
y̌ be their respective approximations; then compare the significance of the two
errors by using
a) the absolute true error
b) the absolute relative error
c) the percentage absolute error and percentage relative error

1.2.3. Classification of Errors

The errors induced by the different sources mentioned above are broadly classified into the
following three types:
1) Inherent Errors: errors which occur in the development of the mathematical model
for a given physical problem. These types of errors are mostly unavoidable, and they are
caused by:
 the approximate values of the initial data,
 the different assumptions made in the model,
 the limitations of the computing aids.
Even though such errors are beyond the control of the numerical analyst, they can be
minimized by selecting:
 better initial data,
 a better mathematical model to represent the problem,
 computing aids of higher precision.
2) Truncation (or Numerical) Errors
Truncation error is defined as the error caused by truncating (or cutting off) an infinite
mathematical procedure to a finite one. For example, the Maclaurin series for e^x is given as
e^x = 1 + x + x²/2! + x³/3! + ...
This series has an infinite number of terms, but when using this series to calculate e^x, only a
finite number of terms can be used. For example, if one uses three terms to calculate e^x, then
e^x ≈ 1 + x + x²/2!.
The truncation error for such an approximation is
Truncation error = e^x − (1 + x + x²/2!)
                 = x³/3! + x⁴/4! + ...
But how can truncation error be controlled in this example? We can use the concept of relative
approximate error to see how many terms need to be considered. Assume that one is calculating
e^1.2 using the Maclaurin series; then

e^1.2 = 1 + 1.2 + 1.2²/2! + 1.2³/3! + ...

Let us assume one wants the absolute relative approximate error to be less than 1%. Table 1
shows the value of e^1.2, the approximate error, and the absolute relative approximate error as a
function of the number of terms, n; using 6 terms of the series yields |ε_a| < 1%.
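The table referred to above (whose printed values did not survive reproduction) can be regenerated with a short script; a Python sketch, assuming the stopping criterion |ε_a| < 1%:

```python
import math

x = 1.2
partial = 0.0
previous = None
n = 0
while True:
    partial += x ** n / math.factorial(n)   # add the next series term
    n += 1                                  # n = number of terms used so far
    if previous is not None:
        eps_a = abs((partial - previous) / partial) * 100
        print(n, round(partial, 5), round(eps_a, 4))
        if eps_a < 1.0:                     # stop once |eps_a| < 1 %
            break
    previous = partial
print("terms needed:", n)   # 6 terms give |eps_a| < 1 %
```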

3) Rounding Error
A computer can only represent a number approximately. For example, a number like 1/3 may be
represented as 0.333333 on a PC. The round-off error in this case is
1/3 − 0.333333 = 0.00000033...
Then there are other numbers that cannot be represented exactly at all. For example, π and √2 are
numbers that need to be approximated in computer calculations.
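These round-off effects are easy to observe directly; a small Python sketch (not from the text):

```python
import math

# 1/3 stored to six decimal digits, as in the text
round_off = 1 / 3 - 0.333333
print(round_off)            # about 3.33e-07

# binary floating point cannot store 0.1, 0.2, or 0.3 exactly either,
# so each operand already carries a tiny rounding error
print(0.1 + 0.2 == 0.3)     # False

# irrational numbers such as pi and sqrt(2) are likewise stored approximately
print(math.pi, math.sqrt(2))
```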

1.3. Propagation of Errors

If a calculation is made with numbers that are not exact, then the calculation itself will have an
error. How do the errors in each individual number propagate through the calculation? Let us
look at the concept via an example.
Example 1.7
Find the bounds for the propagated error in adding two numbers. For example, suppose one is
calculating X + Y where
X = 1.5 ± 0.05,
Y = 3.4 ± 0.04.
Solution
By looking at the numbers, the maximum possible values of X and Y are
X = 1.55 and Y = 3.44.
Hence
X + Y = 1.55 + 3.44 = 4.99
is the maximum value of X + Y.
The minimum possible values of X and Y are
X = 1.45 and Y = 3.36.
Hence
X + Y = 1.45 + 3.36 = 4.81
is the minimum value of X + Y.
Hence
4.81 ≤ X + Y ≤ 4.99.
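The interval reasoning of Example 1.7 can be written out as code; a Python sketch (not from the text):

```python
# Bounds for X + Y when X = 1.5 +/- 0.05 and Y = 3.4 +/- 0.04
x_lo, x_hi = 1.5 - 0.05, 1.5 + 0.05
y_lo, y_hi = 3.4 - 0.04, 3.4 + 0.04

sum_lo = x_lo + y_lo    # 1.45 + 3.36 = 4.81
sum_hi = x_hi + y_hi    # 1.55 + 3.44 = 4.99
print(sum_lo, sum_hi)   # 4.81 <= X + Y <= 4.99
```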
1.4. Stability of Algorithms and Condition Numbers
Since we must live with errors in our numerical computations, the next natural question concerns
the appraisal of a given computed solution: in view of the fact that both the problem and the
numerical algorithm introduce errors, can we trust the numerical solution of a nearby problem
(or of the same problem with slightly different data) to differ by only a little from our computed
solution? A negative answer could make our computed solution meaningless.

This question can be complicated to answer in general, and it leads to notions such as problem
sensitivity and algorithm stability. A problem is said to be too sensitive, or ill-conditioned, if
even a small perturbation in the data produces a large difference in the result.

1.4.1. Stability of Algorithms

Definition: An algorithm is an unambiguous and precise description of operations executed on


input data in a finite number of steps, transforming it to the desired output.
Definition: A propagated error is an error that occurs during the implementation of the steps of
an algorithm. Suppose a given algorithm has an initial error :
i) The propagated error generated will have a linear growth if the final error induced after
applying the algorithm is a constant multiple of the initial error, i.e.,

ii) The growth is exponential if


, where n is the number of steps in the algorithm.
Linear growth is usually avoidable and when & are small the results are generally
acceptable, as a consequence, an algorithm that exhibits linear growth is called a stable
algorithm.
Exponential growth of error should be avoided, since the term becomes large even for
relatively small values of n, this leads to unacceptable result regardless of . An algorithm
which exhibits such an error is called unstable algorithm.

Figure 1.2 graphs of stable and unstable algorithms.
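Exponential error growth can be demonstrated with a standard illustration of the definitions above (not taken from the text): the sequence x_n = (1/3)^n satisfies the recurrence x_{n+1} = (13/3)x_n − (4/3)x_{n−1}, whose other characteristic root is 4, so any rounding error in the starting values is amplified by roughly a factor of 4 at every step.

```python
# exact values (1/3)^n for comparison
exact = [(1 / 3) ** n for n in range(16)]

# the same sequence generated by the recurrence; the stored value of 1/3
# already carries a small rounding error epsilon_0
x = [1.0, 1 / 3]
for n in range(1, 15):
    x.append((13 / 3) * x[n] - (4 / 3) * x[n - 1])

err = [abs(a - b) for a, b in zip(x, exact)]
print(err[5], err[15])   # the error grows roughly like 4^n: unstable
```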

1.4.2. Conditioning or Condition of a Problem

The words condition and conditioning are used informally to indicate how sensitive the solution
of a problem may be to small relative changes in the input data. The condition of a numerical
problem is a qualitative or quantitative statement about how hard it is to solve, irrespective of
the algorithm used to solve it.
As a qualitative example, consider the solution of two simultaneous linear equations. The
problem may be described graphically by the pair of straight lines representing each equation:
the solution is then the point of intersection of the lines.

Figure 1.3: graphs of the solution of two simultaneous linear equations:
well-conditioned (left) and ill-conditioned (right).
The left-hand problem is easier to solve than the right hand one, irrespective of the graphical
algorithm used. For example, a better (or worse) algorithm is to use a sharper (or blunter) pencil:
but in any case it should be possible to measure the coordinates of the solution more exactly in
the left-hand case than the right.
Quantitatively, the condition number K of a problem is a measure of the sensitivity of the
problem to a small perturbation or change. If this number is large, it indicates that the problem
is ill-conditioned; in contrast, if the number is modest, the problem is recognized as well-
conditioned.
For example, consider the problem of evaluating a differentiable function f at a point x. Let x̃ be
a point close to x. In this case K is defined as the relative change in f(x) caused by a unit
relative change in x. That is,

K(x̃) = |[f(x) − f(x̃)] / f(x)| / |(x − x̃) / x| ≈ |x f'(x) / f(x)|.

Example 1.14
Suppose f(x) = √x. We get
K(x) = |x f'(x) / f(x)| = |x · (1 / (2√x)) / √x| = 1/2.
So K is a constant, which implies that taking square roots is equally well conditioned for all
non-negative x, and that the relative error is reduced by half in the process.
Example 1.15
Suppose f(x) = 1 − x. In this case we get
K(x) = |x f'(x) / f(x)| = |x · (−1) / (1 − x)| = |x| / |1 − x|.
So K can get arbitrarily large for values of x close to 1, and can be used to measure the
growth of the relative error in f(x) for such values; e.g., if x = 0.99 then the relative error
will increase by a factor of about 100.
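Both condition numbers can be checked numerically by comparing relative changes directly; a Python sketch (the helper `cond` is illustrative, not from the text):

```python
import math

def cond(f, x, dx=1e-8):
    """Estimate K(x) = |relative change in f| / |relative change in x|."""
    rel_change_f = abs((f(x + dx) - f(x)) / f(x))
    rel_change_x = abs(dx / x)
    return rel_change_f / rel_change_x

k_sqrt = cond(math.sqrt, 4.0)              # about 0.5 for any x > 0
k_near_one = cond(lambda x: 1 - x, 0.99)   # about 99: ill-conditioned near x = 1
print(k_sqrt, k_near_one)
```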
Review Exercise

1. Round-off the following numbers to four decimal places.


a) 0.235082 b) 0.0022218 c) 4.50089 d) 2.36425 e) 1.3456
2. The following numbers are correct to their last digit; find the sums.
i) 2.56, 4.5627, 1.253, 1.0534
ii) 1.3526, 2.00462, 1.532, 28.201, 31.0012
3. Find the relative error in the computation of x + y for values x and y having
absolute errors Δx and Δy.
4. Find the relative error in the computation of x − y for values x and y having
absolute errors Δx and Δy, respectively.
5. If , find the percentage error in at , if the error is .

6. If be represented approximately by , find both relative error and percentage

error.
7. If , find the relative percentage error in for , if the error
is .
8. Determine the number of correct digits in the number given its relative error .
i) , .

ii) , .
iii) , .
9. Evaluate √ √ correct to three significant digits.

10. If is approximated to . Find

i) absolute error
ii) relative error and
iii) percentage error

11. If and error in be , compute the relative error

in . Where .
12. If the true value of a number is and is its approximate value; find the
absolute error, relative error and the percentage error in the number.
13. If , , , and
then find the maximum value of the absolute error in
i)
ii)
iii)
14. If
where the coefficients are rounded off find the relative and absolute error in when
.
15. If and , where denote the length and breadth of a
rectangular plate, measured accurate up to , find the error in computing the area.

CHAPTER TWO
SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

2.1. Introduction:
Consider the equation
f(x) = 0    (2.1)
where f(x) may be given explicitly as a polynomial of degree n in x, or may be defined
implicitly as a transcendental function. An equation which contains polynomials, exponential
functions, logarithmic functions, trigonometric functions, etc. is called a transcendental equation.
Finding one or more roots of Eq. (2.1) is one of the more commonly occurring problems of
applied mathematics. Since there is no general formula for the solution of polynomial equations,
no general formula will exist for the solution of an arbitrary nonlinear equation of the form Eq.
(2.1), where f is a continuous real-valued function.
Definition 2.1: A number α is a solution of f(x) = 0 if f(α) = 0. Such a solution α is a root
or a zero of f(x) = 0. Geometrically, a root of Eq. (2.1) is a value
of x at which the graph of y = f(x) intersects the x-axis.
Definition 2.2: (Simple root) A number α is a simple root of f(x) = 0 if f(α) = 0 and
f'(α) ≠ 0. Then we can write f(x) as f(x) = (x − α) g(x), g(α) ≠ 0.

For example, since (x − 1) is a factor of f(x) = x³ + x − 2 = 0, we can write

f(x) = (x − 1)(x² + x + 2) = (x − 1) g(x), g(1) ≠ 0.

Alternatively, we find f(1) = 0, f'(x) = 3x² + 1, f'(1) = 4 ≠ 0. Hence, x = 1 is a simple root of
f(x) = x³ + x − 2 = 0.
Definition 2.3: (Multiple root) A number α is a multiple root, of multiplicity m, of f(x) = 0 if

f(α) = 0, f'(α) = 0, ..., f^(m−1)(α) = 0, and f^(m)(α) ≠ 0. Then we can write f(x) as

f(x) = (x − α)^m g(x), g(α) ≠ 0.

For example, consider the equation f(x) = x³ − 3x² + 4 = 0. We find

f(2) = 8 − 12 + 4 = 0, f'(x) = 3x² − 6x, f'(2) = 12 − 12 = 0, f''(x) = 6x − 6, f''(2) = 6 ≠ 0.

Hence, x = 2 is a multiple root of multiplicity 2 (a double root) of f(x) = x³ − 3x² + 4 = 0.

We can write f(x) = (x − 2)²(x + 1) = (x − 2)² g(x), g(2) = 3 ≠ 0.

2.2. Solution of Non-linear Equations:

2.2.1. Graphical Methods:

One way to obtain an approximate solution is to plot the function and determine where its graph
crosses the x-axis. If the equation f(x) = 0 can be conveniently written in the form f₁(x) = f₂(x),

then the point of intersection of the graphs of y = f₁(x) and y = f₂(x) gives a root of f(x) = 0.
Example 2.1: Determine an approximate solution of f(x) = e^(−x) cos x − x = 0.

Solution: Let f₁(x) = e^(−x) cos x and f₂(x) = x, so that f(x) = f₁(x) − f₂(x).

Draw the graphs of f₁(x) and f₂(x) on the same coordinate axes.

Figure 2.1: graphs of f₁(x) = e^(−x) cos x and f₂(x) = x.
The graphs intersect near x ≈ 0.5, so 0.5 is an approximate solution of f(x) = 0.

Remark: Graphical techniques are of limited practical value because they are not precise.
However, graphical methods can be utilized to obtain rough estimates of the roots. These
estimates can be employed as starting guesses for numerical methods which will be discussed in
the next sections.
2.2.2 Bisection method:

This method is based on the repeated application of the Intermediate Value Theorem.
Suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b)
of opposite sign. The Intermediate Value Theorem implies that a number m exists in (a, b)
with f(m) = 0. Although the procedure will work when there is more than one root in the
interval (a, b), we assume for simplicity that the root in this interval is unique. The method
calls for repeated halving (or bisecting) of subintervals of [a, b] and, at each step, locating
the half containing m.

To begin, set a₁ = a and b₁ = b, and let m₁ be the midpoint of [a, b]; that is, m₁ = (a₁ + b₁) / 2.
• If f(m₁) = 0, then m = m₁, and we are done.
• If f(m₁) ≠ 0, then f(m₁) has the same sign as either f(a₁) or f(b₁).
• If f(m₁) and f(a₁) have the same sign, then m ∈ (m₁, b₁). Set a₂ = m₁ and b₂ = b₁.
• If f(m₁) and f(a₁) have opposite signs, then m ∈ (a₁, m₁). Set a₂ = a₁ and b₂ = m₁.
Then reapply the process to the interval [a₂, b₂]. After repeating the bisection process a number
of times, we either find the root or find a subinterval which contains the root. We take the
midpoint of the last subinterval as an approximation to the root.
The method is shown graphically in Fig. 2.2.

Figure 2.2: graphical representation of the bisection method.
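The procedure above can be implemented in a few lines; a Python sketch (not from the text; the tolerance handling shown is one common choice):

```python
def bisect(f, a, b, tol=1e-4, max_iter=100):
    """Bisection method: f must be continuous with f(a) * f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        if f(m) == 0 or (b - a) / 2 < tol:
            return m
        if f(a) * f(m) < 0:      # root lies in (a, m)
            b = m
        else:                    # root lies in (m, b)
            a = m
    return (a + b) / 2

# Example 2.3 below: x^3 + 4x^2 - 10 = 0 on [1, 2]; true root 1.365230013
root = bisect(lambda x: x ** 3 + 4 * x ** 2 - 10, 1, 2, tol=1e-4)
print(root)   # about 1.3652
```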

Example 2.2: Find the interval in which the smallest positive root of the equation
f(x) = x³ − x − 4 = 0 lies. Determine the root correct to two decimal places using the bisection
method.
Solution:
For f(x) = x³ − x − 4, we find f(0) = −4, f(1) = −4, f(2) = 2.

Therefore, the root lies in the interval (1, 2). The sequence of intervals is given in Table 2.1.
n    a_n        b_n        m_n        sign of f(m_n) f(a_n)

1    1          2          1.5        > 0
2    1.5        2          1.75       > 0
3    1.75       2          1.875      < 0
4    1.75       1.875      1.8125     > 0
5    1.75       1.8125     1.78125    > 0
6    1.78125    1.8125     1.796875   < 0
7    1.78125    1.796875   1.7890625  > 0
8    1.7890625  1.796875   1.792969   > 0
9    1.792969   1.796875   1.794922   > 0
10   1.794922   1.796875   1.795898   > 0
After 10 iterations, we find that the root lies in the interval (1.795898, 1.796875). Therefore,
the approximate root is m = 1.796387. The root correct to two decimal places is 1.80.
Example 2.3: Show that f(x) = x³ + 4x² − 10 = 0 has a root in [1, 2], and use the bisection
method to determine an approximation to the root that is accurate to at least within 10⁻⁴.
Solution: Because f(1) = −5 and f(2) = 14, the Intermediate Value Theorem ensures that this
continuous function has a root in [1, 2].
For the first iteration of the bisection method we use the fact that at the midpoint of
[1, 2] we have f(1.5) = 2.375 > 0. This indicates that we should select the interval [1, 1.5]
for our second iteration. Then, again taking the midpoint, of [1, 1.5] this time, we find that
f(1.25) = −1.796875 < 0, so our new interval becomes [1.25, 1.5], whose midpoint is 1.375.
Continuing in this manner gives the values in Table 2.4.
n    a_n          b_n          m_n          f(m_n)

1    1.0          2.0          1.5          2.375
2    1.0          1.5          1.25         −1.79687
3    1.25         1.5          1.375        0.16211
4    1.25         1.375        1.3125       −0.84839
5    1.3125       1.375        1.34375      −0.35098
6    1.34375      1.375        1.359375     −0.09641
7    1.359375     1.375        1.3671875    0.03236
8    1.359375     1.3671875    1.36328125   −0.03215
9    1.36328125   1.3671875    1.365234375  0.000072
10   1.36328125   1.365234375  1.364257813  −0.01605
11   1.364257813  1.365234375  1.364746094  −0.00799
12   1.364746094  1.365234375  1.364990235  −0.00396
13   1.364990235  1.365234375  1.365112305  −0.00194
After 13 iterations, m₁₃ = 1.365112305 approximates the root m with an error

|m − m₁₃| ≤ |b₁₄ − a₁₄| = |1.365234375 − 1.365112305| = 0.000122070.

Since |a₁₄| < |m|, we have

|m − m₁₃| / |m| < |b₁₄ − a₁₄| / |a₁₄| ≤ 9.0 × 10⁻⁵,

so the approximation is correct to at least within 10⁻⁴. The correct value of m to nine decimal
places is m = 1.365230013. Note that m₉ is closer to m than the final approximation m₁₃ is.
The bisection method, though conceptually clear, has significant drawbacks. It is relatively
slow to converge (that is, n may become quite large before |m − m_n| is sufficiently small), and a
good intermediate approximation might be inadvertently discarded. However, the method has the
important property that it always converges to a solution.
Theorem 2.1: Let f be continuous on [a, b] with f(a) f(b) < 0. Then the bisection method
generates a sequence {m_n}, n ≥ 1, approximating the root α with the property

|m_n − α| ≤ (b − a) / 2ⁿ,  for n ≥ 1.

Proof: For each n ≥ 1, we have

b_n − a_n = (b − a) / 2^(n−1)  and  α ∈ (a_n, b_n).    (2.14)

Since m_n = (a_n + b_n) / 2 for all n ≥ 1,

it follows that |m_n − α| ≤ (b_n − a_n) / 2 = (b − a) / 2ⁿ.
Example 2.4: Determine approximately how many iterations are necessary to solve
f(x) = x³ + 4x² − 10 = 0 with an accuracy of ε = 10⁻⁵ for a = 1 and b = 2.

Solution: This requires finding an integer n that satisfies

|m_n − α| ≤ (b − a) / 2ⁿ = (2 − 1) / 2ⁿ = 2⁻ⁿ ≤ 10⁻⁵.

To determine n we use logarithms to base 10.

Since 2⁻ⁿ ≤ 10⁻⁵ gives −n log₁₀ 2 ≤ −5, we get

n ≥ 5 / log₁₀ 2 ≈ 16.6.

It would therefore appear to require 17 iterations to obtain an approximation accurate to 10⁻⁵.
REMARK: If an error tolerance ε is prescribed, then the approximate number of iterations
required may be determined from the relation
n ≥ [log(b − a) − log ε] / log 2.
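The iteration-count formula in the remark is easy to evaluate; a Python sketch (not from the text; the helper name is illustrative):

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n with (b - a) / 2^n <= eps."""
    return math.ceil((math.log10(b - a) - math.log10(eps)) / math.log10(2))

print(iterations_needed(1, 2, 1e-5))   # 17, as found in Example 2.4
```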
Exercise:
1. Determine the number of iterations necessary to solve f(x) = x³ + 4x² − 10 = 0 with
accuracy 10⁻³ using a = 1 and b = 2.

2. Perform five iterations of the bisection method to obtain the smallest positive root of the
following equations:
i) x⁵ − 4x + 2 = 0  ii) cos x = 3x − 1  iii) x³ + 2x² − 1 = 0  iv) 5x³ − 20x² + 3 = 0

3. Find the root of the equation sin x = 1 + x³, which lies in the interval (−2, −1), correct to
three decimal places.
4. Use the bisection method to find solutions accurate to within 10⁻⁵ for the following equations:
i) 3x − eˣ = 0 for 1 ≤ x ≤ 2  ii) 2x + 3 cos x − eˣ = 0 for 0 ≤ x ≤ 1

2.2.3 Method of False Position:

The method is also called the linear interpolation method, chord method, or regula falsi method.
At the start of each iteration of the method, we require the interval in which the root lies. Let the
root of the equation f(x) = 0 lie in the interval (x_{k−1}, x_k); that is, f_{k−1} f_k < 0, where
f(x_{k−1}) = f_{k−1} and f(x_k) = f_k. Then P(x_{k−1}, f_{k−1}) and Q(x_k, f_k) are points on the curve
y = f(x). Draw the straight line joining the points P and Q (see Fig. 2.3). The line PQ is taken as
an approximation of the curve in the interval [x_{k−1}, x_k]. The equation of the line PQ is given by

(y − f_k) / (f_{k−1} − f_k) = (x − x_k) / (x_{k−1} − x_k)
The point of intersection of this line PQ with the x-axis is taken as the next approximation
to the root. Setting y = 0 and solving for x, we get

x = x_k − [(x_{k−1} − x_k) / (f_{k−1} − f_k)] f_k = x_k − [(x_k − x_{k−1}) / (f_k − f_{k−1})] f_k

The next approximation to the root is taken as

x_{k+1} = x_k − [(x_k − x_{k−1}) / (f_k − f_{k−1})] f_k

Simplifying, we can also write the approximation as

x_{k+1} = (x_{k−1} f_k − x_k f_{k−1}) / (f_k − f_{k−1}),  k = 1, 2, 3, ...    (2.15)

Therefore, starting with the initial interval (x₀, x₁) in which the root lies, we compute

x₂ = (x₀ f₁ − x₁ f₀) / (f₁ − f₀)

Now, if f(x₀) f(x₂) < 0, then the root lies in the interval (x₀, x₂); otherwise, the root lies in
the interval (x₂, x₁). The iteration is continued using the interval in which the root lies, until
the required accuracy criterion is satisfied.
The method is shown graphically in Fig. 2.3.

Figure 2.3: graphical representation of the method of false position.

Remark: i) At the start of each iteration, the required root lies in an interval whose length is
decreasing. Hence, the method always converges.
ii) The method of false position has a disadvantage: if the root lies initially in the
interval (x₀, x₁), then one of the end points may remain fixed for all iterations.
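The iteration of Eq. (2.15) can be implemented directly; a Python sketch (not from the text; stopping when successive iterates agree to within `tol` is one common choice):

```python
def false_position(f, x0, x1, tol=5e-4, max_iter=100):
    """Method of false position on an interval with f(x0) * f(x1) < 0."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    prev = None
    for _ in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)    # Eq. (2.15)
        if prev is not None and abs(x2 - prev) < tol:
            return x2
        f2 = f(x2)
        if f0 * f2 < 0:          # root lies in (x0, x2)
            x1, f1 = x2, f2
        else:                    # root lies in (x2, x1)
            x0, f0 = x2, f2
        prev = x2
    return x2

# Example 2.5 below: smallest positive root of x^3 - 3x + 1 = 0 in (0, 1)
root = false_position(lambda x: x ** 3 - 3 * x + 1, 0, 1, tol=5e-4)
print(root)   # about 0.3473
```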
Example 2.5: Locate the intervals which contain the smallest positive real roots of the equation
             x^3 - 3x + 1 = 0. Obtain these roots correct to three decimal places, using the
             method of false position.
Solution: We form the following table of values for the function f(x).

    x      0     1     2     3
    f(x)   1    -1     3    19

                 Table 2.5

There is one positive real root in the interval (0, 1) and another in the interval (1, 2).
There is no real root for x > 2, as f(x) > 0 for all x > 2.
We find the root in (0, 1). We have

    x_0 = 0, x_1 = 1, f_0 = f(x_0) = f(0) = 1, f_1 = f(x_1) = f(1) = -1.

    x_2 = (x_0 f_1 - x_1 f_0)/(f_1 - f_0) = (0 - 1)/(-1 - 1) = 0.5,   f(x_2) = f(0.5) = -0.375.

Since f(0) f(0.5) < 0, the root lies in the interval (0, 0.5).

    x_3 = (x_0 f_2 - x_2 f_0)/(f_2 - f_0) = (0 - 0.5(1))/(-0.375 - 1) = 0.36364,   f(x_3) = f(0.36364) = -0.04283.

Since f(0) f(0.36364) < 0, the root lies in the interval (0, 0.36364).

    x_4 = (x_0 f_3 - x_3 f_0)/(f_3 - f_0) = (0 - 0.36364(1))/(-0.04283 - 1) = 0.34870,   f(x_4) = f(0.34870) = -0.00370.

Since f(0) f(0.34870) < 0, the root lies in the interval (0, 0.34870).

    x_5 = (x_0 f_4 - x_4 f_0)/(f_4 - f_0) = (0 - 0.3487(1))/(-0.00370 - 1) = 0.34741,   f(x_5) = f(0.34741) = -0.00030.

Since f(0) f(0.34741) < 0, the root lies in the interval (0, 0.34741).

    x_6 = (x_0 f_5 - x_5 f_0)/(f_5 - f_0) = (0 - 0.34741(1))/(-0.0003 - 1) = 0.347306.

Now, |x_6 - x_5| = |0.347306 - 0.34741| ≈ 0.0001 < 0.0005.

The root has been computed correct to three decimal places. The required root can be
taken as x ≈ x_6 = 0.347306. We may also give the result as 0.347, even though x_6 is more
accurate. Note that the left end point x = 0 is fixed for all iterations.
Example 2.6: Find the root correct to two decimal places of the equation cos x = x e^x,
             using the method of false position.
Solution: Define f(x) = cos x - x e^x. There is no negative root for the equation. We have

    f(0) = 1,   f(1) = cos 1 - e = -2.17798.

Since f(0) f(1) < 0, the root lies in the interval (0, 1).

    x_2 = (x_0 f_1 - x_1 f_0)/(f_1 - f_0) = (0 - 1(1))/(-2.17798 - 1) = 0.31467,   f(x_2) = f(0.31467) = 0.51986.

Since f(0.31467) f(1) < 0, the root lies in the interval (0.31467, 1).

    x_3 = (x_2 f_1 - x_1 f_2)/(f_1 - f_2) = (0.31467(-2.17798) - 1(0.51986))/(-2.17798 - 0.51986) = 0.44673,
    f(x_3) = f(0.44673) = 0.20354.

Since f(0.44673) f(1) < 0, the root lies in the interval (0.44673, 1).

    x_4 = (x_3 f_1 - x_1 f_3)/(f_1 - f_3) = (0.44673(-2.17798) - 1(0.20354))/(-2.17798 - 0.20354) = 0.49402,
    f(x_4) = f(0.49402) = 0.07079.

Since f(0.49402) f(1) < 0, the root lies in the interval (0.49402, 1).

    x_5 = (x_4 f_1 - x_1 f_4)/(f_1 - f_4) = (0.49402(-2.17798) - 1(0.07079))/(-2.17798 - 0.07079) = 0.50995,
    f(x_5) = f(0.50995) = 0.02360.

Since f(0.50995) f(1) < 0, the root lies in the interval (0.50995, 1).

    x_6 = (x_5 f_1 - x_1 f_5)/(f_1 - f_5) = (0.50995(-2.17798) - 1(0.02360))/(-2.17798 - 0.02360) = 0.51520,
    f(x_6) = f(0.51520) = 0.00776.

Since f(0.51520) f(1) < 0, the root lies in the interval (0.51520, 1).

    x_7 = (x_6 f_1 - x_1 f_6)/(f_1 - f_6) = (0.51520(-2.17798) - 1(0.00776))/(-2.17798 - 0.00776) = 0.51692.

Now, |x_7 - x_6| = |0.51692 - 0.51520| = 0.00172 < 0.005.

The root has been computed correct to two decimal places. The required root can be
taken as x ≈ x_7 = 0.51692.
Note that the right end point x_1 = 1 is fixed for all iterations.
Exercise
In the following problems, find the root as specified using the regula-falsi method.
1. Find the positive root of x^3 = 2x + 5. (Do only four iterations.)
2. Find an approximate root of , correct to three decimal places.
3. Solve the equation x tan x = -1, starting with a = 2.5 and b = 3, correct to three decimal places.
4. Find the smallest positive root of x - e^{-x} = 0, correct to three decimal places.
5. Find the smallest positive root of x^4 - x - 10 = 0, correct to three decimal places.

2.2.4. Newton-Raphson Method:

This method is also called Newton's method.
Let x_0 be an initial approximation to the root of f(x) = 0. Then P(x_0, f_0), where f_0 = f(x_0), is
a point on the curve. Draw the tangent to the curve at P (see Fig. 2.4). We approximate the
curve in the neighborhood of the root by the tangent to the curve at the point P. The point of
intersection of the tangent with the x-axis is taken as the next approximation to the root. The
process is repeated until the required accuracy is obtained. The equation of the tangent to the
curve y = f(x) at the point P(x_0, f_0) is given by

    y - f(x_0) = (x - x_0) f'(x_0)

where f'(x_0) is the slope of the tangent to the curve at P.

Setting y = 0 and solving for x, we get

    x = x_0 - f(x_0)/f'(x_0),   f'(x_0) ≠ 0

The next approximation to the root is given by

    x_1 = x_0 - f(x_0)/f'(x_0),   f'(x_0) ≠ 0

We repeat the procedure. The iteration method is defined as

    x_{k+1} = x_k - f(x_k)/f'(x_k),   f'(x_k) ≠ 0,   k = 0, 1, 2, ...     (2.16)

which is the Newton-Raphson formula.

Alternate derivation of the method

Let x_k be an approximation to the root of the equation f(x) = 0. Let Δx be an increment in
x such that x_k + Δx is the exact root, that is, f(x_k + Δx) = 0.

Expanding in Taylor's series about the point x_k, we get

    f(x_k) + Δx f'(x_k) + ((Δx)^2/2!) f''(x_k) + ... = 0.

Neglecting the second and higher powers of Δx, we obtain

    f(x_k) + Δx f'(x_k) ≈ 0,   or   Δx = - f(x_k)/f'(x_k).

Hence, we obtain the iteration method

    x_{k+1} = x_k + Δx = x_k - f(x_k)/f'(x_k),   f'(x_k) ≠ 0,

which is the same as the method derived earlier.
Geometrically, the method consists in replacing the part of the curve between the point
(x_0, f(x_0)) and the x-axis by the tangent to the curve at that point.

The method is shown graphically in Fig. 2.4.

[Figure 2.4: Newton-Raphson method. The tangent at P(x_0, f_0) cuts the x-axis at the next approximation x_1.]
Remark: 1. Convergence of Newton's method depends on the initial approximation to the
           root. If the approximation is far away from the exact root, the method may diverge.
           However, if a root lies in a small interval (a, b) and x_0 ∈ (a, b), then the method
           converges.
        2. The computational cost of the method is one evaluation of the function f(x) and one
           evaluation of the derivative f'(x) for each iteration.
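Formula (2.16) translates directly into code. The function name, tolerance, and iteration cap below are illustrative choices, not part of the text:

```python
def newton(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration, Eq. (2.16): x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x_k) vanished; choose another x0")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each pass costs one evaluation of f and one of f', matching the remark above.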
Example 2.7: Perform four iterations of Newton's method to find the smallest positive root
             of the equation f(x) = x^3 - 5x + 1 = 0.
Solution: We have f(0) = 1, f(1) = -3.
Since f(0) f(1) < 0, the smallest positive root lies in the interval (0, 1).

Let f(x) = x^3 - 5x + 1. Then f'(x) = 3x^2 - 5.

Applying Newton's method, we obtain

    x_{k+1} = x_k - (x_k^3 - 5x_k + 1)/(3x_k^2 - 5) = (2x_k^3 - 1)/(3x_k^2 - 5),   k = 0, 1, 2, ...

Let x_0 = 0.5. We have the following results.

    x_1 = (2x_0^3 - 1)/(3x_0^2 - 5) = (2(0.5)^3 - 1)/(3(0.5)^2 - 5) = 0.176471,
    x_2 = (2x_1^3 - 1)/(3x_1^2 - 5) = (2(0.176471)^3 - 1)/(3(0.176471)^2 - 5) = 0.201568,
    x_3 = (2x_2^3 - 1)/(3x_2^2 - 5) = (2(0.201568)^3 - 1)/(3(0.201568)^2 - 5) = 0.201640,
    x_4 = (2x_3^3 - 1)/(3x_3^2 - 5) = (2(0.201640)^3 - 1)/(3(0.201640)^2 - 5) = 0.201640.

Therefore, the root correct to six decimal places is x ≈ 0.201640.

Example 2.8: Derive Newton's method for finding the qth root of a positive number N,
             N^{1/q}, where N > 0, q > 0. Hence, compute 17^{1/3}, correct to four decimal places,
             assuming the initial approximation x_0 = 2.

Solution: Let x = N^{1/q}, or x^q = N. Define f(x) = x^q - N. Then f'(x) = q x^{q-1}.

Newton's method gives the iteration

    x_{k+1} = x_k - (x_k^q - N)/(q x_k^{q-1}) = (q x_k^q - x_k^q + N)/(q x_k^{q-1}) = ((q - 1) x_k^q + N)/(q x_k^{q-1})

For computing 17^{1/3}, we have q = 3 and N = 17. Hence, the method becomes

    x_{k+1} = (2x_k^3 + 17)/(3x_k^2),   k = 0, 1, 2, ...

With x_0 = 2, we obtain the following results.

    x_1 = (2x_0^3 + 17)/(3x_0^2) = (2(2)^3 + 17)/(3(2)^2) = 2.75,
    x_2 = (2x_1^3 + 17)/(3x_1^2) = (2(2.75)^3 + 17)/(3(2.75)^2) = 2.582645,
    x_3 = (2x_2^3 + 17)/(3x_2^2) = (2(2.582645)^3 + 17)/(3(2.582645)^2) = 2.571332,
    x_4 = (2x_3^3 + 17)/(3x_3^2) = (2(2.571332)^3 + 17)/(3(2.571332)^2) = 2.571282.

Now, |x_4 - x_3| = |2.571282 - 2.571332| = 0.00005.

We may take x ≈ 2.571282 as the required root correct to four decimal places.
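The iteration derived in Example 2.8 works for any q and N. A brief sketch (the function name and the iteration cap are illustrative):

```python
def qth_root(N, q, x0, tol=1e-6, max_iter=100):
    """Newton iteration for N**(1/q): x_{k+1} = ((q-1)*x_k**q + N) / (q*x_k**(q-1))."""
    x = x0
    for _ in range(max_iter):
        x_new = ((q - 1) * x**q + N) / (q * x**(q - 1))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With N = 17, q = 3, and x_0 = 2 it reproduces the run of Example 2.8.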
Exercise

1. Given the following equations:

   i) x^4 + x^2 - 80 = 0     ii) 2x e^{2x} = sin x     iii) cos x - x^2 - x = 0

   determine the initial approximations for finding the smallest positive root. Use these to
   find the root correct to three decimal places. Use the Newton-Raphson method.

2. Using the Newton-Raphson method, solve the given equation with x_0 = 10, root correct to four
   decimal places.

3. Use Newton's method to find solutions accurate to within 10^-4 for the following problems.

   i) x^3 - 2x^2 - 5 = 0, [1, 4]          iii) x^3 + 3x^2 - 1 = 0, [-3, -2]
   ii) x - cos x = 0, [0, π/2]            iv) x - 0.8 - 0.2 sin x = 0, [0, π/2]

2.2.5. The Secant Method:

We have seen that the Newton-Raphson method requires the evaluation of the derivative of the
function, and this is not always possible, particularly in the case of functions arising in practical
problems. In the secant method, the derivative at x_k is approximated by the formula

    f'(x_k) ≈ (f(x_k) - f(x_{k-1}))/(x_k - x_{k-1})     (2.17)

Hence, the Newton-Raphson formula (2.16) becomes

    x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})),   f(x_k) - f(x_{k-1}) ≠ 0,   k = 1, 2, 3, ...     (2.18)

which is the secant method formula.


Remark: This method requires two initial guesses, but unlike the bisection method, the two
        initial guesses do not need to bracket the root of the equation. The secant method is an
        open method and may or may not converge. When it does converge, it typically does so
        faster than the bisection method; but since the derivative is only approximated, as in
        Eq. (2.17), it typically converges more slowly than the Newton-Raphson method.
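Formula (2.18) needs only function values, no derivatives. A sketch with illustrative names and defaults:

```python
def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """Secant iteration, Eq. (2.18); the two guesses need not bracket the root."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x_k) - f(x_{k-1}) vanished")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # Eq. (2.18)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1        # shift the pair of iterates forward
        x1, f1 = x2, f(x2)
    return x1
```

Note that only one new function evaluation is needed per step, since f(x_k) is reused.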
Example 2.9: A root of the equation f(x) = x^3 - 5x + 1 = 0 lies in the interval (0, 1). Perform
             four iterations of the secant method to obtain this root.

Solution: We have x_0 = 0, x_1 = 1, f(x_0) = 1, and f(x_1) = -3.

Applying the secant method, we obtain

    x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)) = 1 - (-3)(1 - 0)/(-3 - 1) = 0.25,
    f(x_2) = -0.234375,

    x_3 = x_2 - f(x_2)(x_2 - x_1)/(f(x_2) - f(x_1)) = 0.25 - (-0.234375)(0.25 - 1)/(-0.234375 + 3) = 0.186441,
    f(x_3) = 0.074276,

    x_4 = x_3 - f(x_3)(x_3 - x_2)/(f(x_3) - f(x_2)) = 0.186441 - 0.074276(0.186441 - 0.25)/(0.074276 + 0.234375) = 0.201736,
    f(x_4) = -0.000470,

    x_5 = x_4 - f(x_4)(x_4 - x_3)/(f(x_4) - f(x_3)) = 0.201736 - (-0.000470)(0.201736 - 0.186441)/(-0.000470 - 0.074276) = 0.201640.

Example 2.10: Given f(x) = x^4 - x - 10 = 0, determine the initial approximations for finding the
              smallest positive root. Use these to find the root correct to three decimal
              places using the secant method.
Solution: For f(x) = x^4 - x - 10,
we find that f(0) = -10, f(1) = -10, and f(2) = 4.

Hence, the smallest positive root lies in the interval (1, 2).

The secant method gives the iteration scheme

    x_{i+1} = x_i - f(x_i)(x_i - x_{i-1})/(f(x_i) - f(x_{i-1})),   i = 1, 2, 3, ...

With x_0 = 1, x_1 = 2, we obtain the sequence of iterates

    x_2 = 1.7143,  x_3 = 1.8385,  x_4 = 1.8578,  x_5 = 1.8556,  x_6 = 1.8556.

The root correct to three decimal places is 1.856.

Exercise

1. Use the secant method to obtain the smallest positive root, correct to three decimal places, of the
   following equations:

   i) x^3 - 3x^2 + 3 = 0
   ii) x^3 - x^2 - x - 7 = 0
   iii) x - e^{-x} = 0

2. Use the secant method to find solutions, accurate to within 10^-5, for the following problems.

   i) x^2 - 4x + 4 - ln x = 0 for 1 <= x <= 2
   ii) x + 1 - 2 sin(πx) = 0 for 0 <= x <= 0.5

2.2.6. Iteration Method:

The method is also called the method of successive approximations or fixed point iteration method.

The first step in this method is to rewrite the given equation f(x) = 0 in an equivalent form as

    x = φ(x)     (2.19)

There are many ways of rewriting f(x) = 0 in this form.

For example, f(x) = x^3 - 5x + 1 = 0 can be rewritten in the following forms:

    x = (x^3 + 1)/5,   x = (5x - 1)^{1/3},   x = √((5x - 1)/x),   etc.     (2.20)

Now, finding a root of f(x) = 0 is the same as finding a number α such that α = φ(α), that is,
a fixed point of φ(x). A fixed point of a function φ is a point α such that α = φ(α).
Using Eq. (2.19), the iteration method is written as

    x_{k+1} = φ(x_k),   k = 0, 1, 2, ...     (2.21)

The function φ(x) is called the iteration function.

Starting with the initial approximation x_0, we compute the next approximations as

    x_1 = φ(x_0),  x_2 = φ(x_1),  x_3 = φ(x_2), ...     (2.22)

The stopping criterion is the same as used earlier.
Since there are many ways of writing f(x) = 0 as x = φ(x), it is important to know whether all
or at least one of these iteration methods converges.

Convergence of an iteration method x_{k+1} = φ(x_k), k = 0, 1, 2, ..., depends on the choice of the
iteration function φ(x) and a suitable initial approximation x_0 to the root.

Consider again the iteration methods given in Eq. (2.20) for finding a root of the equation
f(x) = x^3 - 5x + 1 = 0. The positive root lies in the interval (0, 1).

(i)  x_{k+1} = (x_k^3 + 1)/5,   k = 0, 1, 2, ...     (2.23)

With x_0 = 1, we get the sequence of approximations

    x_1 = 0.4,  x_2 = 0.2128,  x_3 = 0.20193,  x_4 = 0.20165,  x_5 = 0.20164.

The method converges, and x ≈ x_5 = 0.20164 is taken as the required approximation to the root.

(ii) x_{k+1} = (5x_k - 1)^{1/3},   k = 0, 1, 2, ...     (2.24)

With x_0 = 1, we get the sequence of approximations

    x_1 = 1.5874,  x_2 = 1.9072,  x_3 = 2.0437,  x_4 = 2.0968, ...

which does not converge to the root in (0, 1).

(iii) x_{k+1} = √((5x_k - 1)/x_k),   k = 0, 1, 2, ...     (2.25)

With x_0 = 1, we get the sequence of approximations

    x_1 = 2.0,  x_2 = 2.1213,  x_3 = 2.1280,  x_4 = 2.1284, ...

which does not converge to the root in (0, 1).
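The contrasting behavior of the rewritings (2.23) and (2.25) is easy to reproduce. A minimal sketch (function name and tolerances are illustrative; the second call drifts to the other positive root of the same equation, near 2.1284, rather than the one in (0, 1)):

```python
def fixed_point(phi, x0, tol=1e-6, max_iter=100):
    """Iterate x_{k+1} = phi(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Form (2.23) converges to the root of x^3 - 5x + 1 = 0 in (0, 1) ...
root_i = fixed_point(lambda x: (x**3 + 1) / 5, 1.0)

# ... while form (2.25), from the same x0, settles on the larger root instead.
root_iii = fixed_point(lambda x: ((5 * x - 1) / x) ** 0.5, 1.0)
```

This illustrates that the choice of iteration function, not just the starting point, decides which fixed point (if any) is reached.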
Now, we derive the condition that the iteration function  (x) should satisfy in order that
the method converges.
Condition of convergence

The iteration method for finding a root of f(x) = 0 is written as

    x_{k+1} = φ(x_k),   k = 0, 1, 2, ...     (2.26)

Let α be the exact root. That is,

    α = φ(α).     (2.27)

We define the error of approximation at the kth iterate as ε_k = x_k - α, k = 0, 1, 2, ...
Subtracting (2.27) from (2.26), we obtain

    x_{k+1} - α = φ(x_k) - φ(α)
                = (x_k - α) φ'(t_k)   (using the mean value theorem)     (2.28)

or  ε_{k+1} = φ'(t_k) ε_k,   where t_k lies between x_k and α.

Setting k = k - 1, we get ε_k = φ'(t_{k-1}) ε_{k-1}, where t_{k-1} lies between x_{k-1} and α.

Hence, ε_{k+1} = φ'(t_k) φ'(t_{k-1}) ε_{k-1}.

Using (2.28) recursively, we get

    ε_{k+1} = φ'(t_k) φ'(t_{k-1}) ... φ'(t_0) ε_0.

The initial error ε_0 is known and is a constant. We have

    |ε_{k+1}| = |φ'(t_k)| |φ'(t_{k-1})| ... |φ'(t_0)| |ε_0|.

Let |φ'(t_k)| ≤ c, k = 0, 1, 2, ...

Then,  |ε_{k+1}| ≤ c^{k+1} |ε_0|.     (2.29)

For convergence, we require that ε_{k+1} → 0 as k → ∞. This is possible if and only
if c < 1. Therefore, the iteration method (2.26) converges if and only if

    |φ'(x)| ≤ c < 1, for all x in the interval (a, b).     (2.30)

We can test this condition using x_0, the initial approximation, before the computations
are done.
Let us now check whether the methods (2.23), (2.24), (2.25) converge to a root in (0, 1) of
the equation f(x) = x^3 - 5x + 1 = 0.

(i)  We have φ(x) = (x^3 + 1)/5, φ'(x) = 3x^2/5, and |φ'(x)| = 3x^2/5 < 1
     for all x in 0 < x < 1. Hence, the method converges to a root in (0, 1).

(ii) We have φ(x) = (5x - 1)^{1/3},  φ'(x) = 5/(3(5x - 1)^{2/3}).

     Now |φ'(x)| < 1 when x is close to 1, and |φ'(x)| > 1 in the other part of the interval.
     Convergence is not guaranteed.

(iii) We have φ(x) = √((5x - 1)/x),  φ'(x) = 1/(2x^{3/2}(5x - 1)^{1/2}).

     Again, |φ'(x)| < 1 when x is close to 1, and |φ'(x)| > 1 in the other part of the interval.
     Convergence is not guaranteed.

Remark: Sometimes, it may not be possible to find a suitable iteration function φ(x) by
        manipulating the given function f(x). Then, we may use the following procedure.
        Write f(x) = 0 as x = x + α f(x) = φ(x), where α is a constant to be determined.
        Let x_0 be an initial approximation contained in the interval in which the root lies.
        For convergence, we require

            |φ'(x_0)| = |1 + α f'(x_0)| < 1.     (2.31)

        Simplifying, we find the interval in which α lies. We choose a value for α from
        this interval and compute the approximations. A judicious choice of a value in
        this interval may give faster convergence.
Example 2.13: Find the smallest positive root of the equation x^3 - x - 10 = 0, using the general
              iteration method.
Solution: We have f(x) = x^3 - x - 10, f(0) = -10, f(1) = -10,
          f(2) = 8 - 2 - 10 = -4, f(3) = 27 - 3 - 10 = 14.
Since f(2) f(3) < 0, the smallest positive root lies in the interval (2, 3).

Write x^3 = x + 10, and x = (x + 10)^{1/3} = φ(x). We define the iteration method as

    x_{k+1} = (x_k + 10)^{1/3}.   We obtain   φ'(x) = 1/(3(x + 10)^{2/3}).

We find |φ'(x)| < 1 for all x in the interval (2, 3). Hence, the iteration converges.

Let x_0 = 2.5. We obtain the following results.

    x_1 = (12.5)^{1/3} = 2.3208,   x_2 = (12.3208)^{1/3} = 2.3097,
    x_3 = (12.3097)^{1/3} = 2.3090,  x_4 = (12.3090)^{1/3} = 2.3089.

Since |x_4 - x_3| = |2.3089 - 2.3090| = 0.0001,

we take the required root as x ≈ 2.3089.


Example 2.14: Find the smallest negative root in magnitude of the equation
              3x^4 + x^3 + 12x + 4 = 0, using the method of successive approximations.
Solution: We have
          f(x) = 3x^4 + x^3 + 12x + 4,  f(0) = 4,  f(-1) = 3 - 1 - 12 + 4 = -6.
Since f(-1) f(0) < 0, the smallest negative root in magnitude lies in the interval (-1, 0).
Write the given equation as

    x(3x^3 + x^2 + 12) + 4 = 0,   and   x = -4/(3x^3 + x^2 + 12) = φ(x).

The iteration method is written as

    x_{k+1} = -4/(3x_k^3 + x_k^2 + 12)

We obtain

    φ'(x) = 4(9x^2 + 2x)/(3x^3 + x^2 + 12)^2

We find |φ'(x)| < 1 for all x in the interval (-1, 0). Hence, the iteration converges.

Let x_0 = -0.25. We obtain the following results.

    x_1 = -4/(3(-0.25)^3 + (-0.25)^2 + 12) = -0.33290,
    x_2 = -4/(3(-0.33290)^3 + (-0.33290)^2 + 12) = -0.33333,
    x_3 = -4/(3(-0.33333)^3 + (-0.33333)^2 + 12) = -0.33333.

The required approximation to the root is x ≈ -0.33333.


Exercise

1. In the following problems, find the smallest positive root as specified using the fixed point
   iteration method.

   i) x^2 - 5x + 1 = 0, correct to four decimal places.
   ii) x^5 - 64x + 30 = 0, correct to four decimal places.
   iii) x = e^{-x}, correct to two decimal places.

2. Find the smallest negative root in magnitude of 3x^3 - x + 1 = 0, correct to four decimal
   places. Use the fixed point iteration method.

2.3 Convergence of the Iteration Methods:

We now study the rate at which the iteration methods converge to the exact root, if the initial
approximation is sufficiently close to the desired root.
Define the error of approximation at the kth iterate as ε_k = x_k - α, k = 0, 1, 2, ...

Definition: An iterative method is said to be of order p, or to have the rate of convergence p, if p is
the largest positive real number for which there exists a finite constant c ≠ 0 such that

    |ε_{k+1}| ≤ c |ε_k|^p.     (2.32)

The constant c, which is independent of k, is called the asymptotic error constant, and it
depends on the derivatives of f(x) at x = α.
Let us now obtain the orders of the methods that were derived earlier.
Method of false position:
We have noted earlier that if the root lies initially in the interval (x_0, x_1), then one of the end
points is fixed for all iterations. If the left end point x_0 is fixed and the right end point moves
towards the required root, the method behaves like

    x_{k+1} = (x_0 f_k - x_k f_0)/(f_k - f_0)

Substituting x_k = ε_k + α, x_{k+1} = ε_{k+1} + α, x_0 = ε_0 + α, we expand each term in Taylor's
series and simplify using the fact that f(α) = 0. We obtain the error equation as

    ε_{k+1} = c ε_0 ε_k,   where c = f''(α)/(2f'(α)).

Since ε_0 is finite and fixed, the error equation becomes

    ε_{k+1} = c* ε_k,   where c* = c ε_0.     (2.33)

Hence, the method of false position has order 1, or a linear rate of convergence.
Method of successive approximations or fixed point iteration method:
We have x_{k+1} = φ(x_k), and α = φ(α).

Subtracting, we get

    x_{k+1} - α = φ(x_k) - φ(α) = φ(α + (x_k - α)) - φ(α)
                = [φ(α) + (x_k - α) φ'(α) + ...] - φ(α)

or  ε_{k+1} = ε_k φ'(α) + O(ε_k^2).

Therefore,  |ε_{k+1}| ≤ c |ε_k|,   with c = |φ'(α)|.     (2.34)

Hence, the fixed point iteration method has order 1, or a linear rate of convergence.

Newton-Raphson method:
The method is given by

    x_{k+1} = x_k - f(x_k)/f'(x_k),   f'(x_k) ≠ 0.

Substituting x_k = ε_k + α, x_{k+1} = ε_{k+1} + α, we obtain

    ε_{k+1} = ε_k - f(ε_k + α)/f'(ε_k + α)

Expanding the terms in Taylor's series and using the fact that f(α) = 0, we obtain

    ε_{k+1} = ε_k - [ε_k f'(α) + (1/2)ε_k^2 f''(α) + ...] / [f'(α) + ε_k f''(α) + ...]

Cancelling f'(α) from numerator and denominator,

    ε_{k+1} = ε_k - [ε_k + (f''(α)/(2f'(α))) ε_k^2 + ...][1 + ε_k (f''(α)/f'(α)) + ...]^{-1}

            = ε_k - [ε_k + (f''(α)/(2f'(α))) ε_k^2 + ...][1 - ε_k (f''(α)/f'(α)) + ...]

            = ε_k - [ε_k - (f''(α)/(2f'(α))) ε_k^2 + ...]

Neglecting the terms containing ε_k^3 and higher powers of ε_k, we get

    ε_{k+1} = c ε_k^2,   where c = f''(α)/(2f'(α)),

and  |ε_{k+1}| ≤ |c| |ε_k|^2.     (2.35)

Therefore, Newton's method is of order 2, or has a quadratic rate of convergence.
CHAPTER THREE

SOLVING SYSTEMS OF EQUATIONS

3.1. Direct Methods for Solving System of Linear Equations

In this chapter we consider direct methods for solving a system of n linear equations in n
unknowns. A direct method is one that gives the exact solution to the system, if it is assumed
that all calculations can be performed without round-off error effects. This assumption is
idealized. We will need to consider quite carefully the role of finite-digit arithmetic error in the
approximation to the solution to the system, and how to arrange the calculations to minimize
its effect.

Consider the following general system of n linear equations in n variables:

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n

which can be written in the matrix form AX = B, where

    A = [a_ij] is the n x n coefficient matrix,  X = (x_1, x_2, ..., x_n)^T,  B = (b_1, b_2, ..., b_n)^T.

NB: A linear system of n equations in n variables, with coefficient matrix A and constant
vector B, has a unique solution iff the determinant of A is nonzero, det A ≠ 0.
3.1.1 Gaussian Elimination Method

If you have studied linear algebra or matrix theory, you probably have been introduced
to Gaussian elimination, the most elementary method for systematically determining the solution

of a system of linear equations. Variables are eliminated from the equations until one equation
involves only one variable, a second equation involves only that variable and one other, a third
has only these two and one additional, and so on. The solution is found by solving for the
variable in the single equation, using this to reduce the second equation to one that now contains
a single variable, and so on, until values for all the variables are found.

Three operations are permitted on a system of equations.

Operations on a system of equations:

1. Equation E_i can be multiplied by any nonzero constant λ, with the resulting
   equation used in place of E_i. This operation is denoted (λE_i) → (E_i).
2. Equations E_i and E_j can be transposed in order. This operation is denoted (E_i) ↔ (E_j).
3. Equation E_j can be multiplied by any nonzero constant λ and added to equation E_i, with the
   resulting equation used in place of E_i. This operation is denoted (λE_j + E_i) → (E_i).

By a sequence of the operations just given, a linear system can be transformed into a more easily
solved linear system with the same solutions. The sequence of operations is illustrated in the
next example.

EXAMPLE 1: The four equations

    E1:   x_1 +  x_2        + 3x_4 =  4,
    E2:  2x_1 +  x_2 -  x_3 +  x_4 =  1,
    E3:  3x_1 -  x_2 -  x_3 + 2x_4 = -3,
    E4:  -x_1 + 2x_2 + 3x_3 -  x_4 =  4,

will be solved for x_1, x_2, x_3, and x_4. First use equation E1 to eliminate the unknown x_1
from E2, E3, and E4 by performing (E2 - 2E1) → (E2), (E3 - 3E1) → (E3), and (E4 + E1) → (E4).

The resulting system is

    E1:   x_1 +  x_2        + 3x_4 =   4,
    E2:       -  x_2 -  x_3 - 5x_4 =  -7,
    E3:       - 4x_2 -  x_3 - 7x_4 = -15,
    E4:         3x_2 + 3x_3 + 2x_4 =   8,

where, for simplicity, the new equations are again labeled E1, E2, E3, and E4.

In the new system, E2 is used to eliminate x_2 from E3 and E4 by
(E3 - 4E2) → (E3) and (E4 + 3E2) → (E4), resulting in

    E1:   x_1 + x_2        +  3x_4 =   4,
    E2:       - x_2 -  x_3 -  5x_4 =  -7,
    E3:               3x_3 + 13x_4 =  13,
    E4:                    - 13x_4 = -13.

The system of equations is now in triangular (or reduced) form and can be solved for the
unknowns by a backward-substitution process. Noting that E4 implies
x_4 = 1, we can solve E3 for x_3:

    x_3 = (13 - 13x_4)/3 = (13 - 13)/3 = 0.

Continuing, E2 gives

    x_2 = -(-7 + 5x_4 + x_3) = -(-7 + 5 + 0) = 2,

and E1 gives

    x_1 = 4 - 3x_4 - x_2 = 4 - 3 - 2 = -1.

The solution is, therefore, x_1 = -1, x_2 = 2, x_3 = 0, and x_4 = 1. It is easy to verify that these
values solve the original system of equations.

An n x (n + 1) matrix can be used to represent this linear system by first constructing

    A = [  1  1  0  3 ]        x = [ x_1 ]        b = [  4 ]
        [  2  1 -1  1 ]            [ x_2 ]            [  1 ]
        [  3 -1 -1  2 ]            [ x_3 ]            [ -3 ]
        [ -1  2  3 -1 ]            [ x_4 ]            [  4 ]

and then combining these matrices to form the augmented matrix:

    [A | b] = [  1  1  0  3 |  4 ]
              [  2  1 -1  1 |  1 ]
              [  3 -1 -1  2 | -3 ]
              [ -1  2  3 -1 |  4 ]

Performing the operations as described in Example 1 produces the matrices

    [ 1  1  0  3 |   4 ]         [ 1  1  0   3 |   4 ]
    [ 0 -1 -1 -5 |  -7 ]   and   [ 0 -1 -1  -5 |  -7 ]
    [ 0 -4 -1 -7 | -15 ]         [ 0  0  3  13 |  13 ]
    [ 0  3  3  2 |   8 ]         [ 0  0  0 -13 | -13 ]

The latter matrix can now be transformed into its corresponding linear system, and solutions
for x_1, x_2, x_3, and x_4 obtained. The procedure involved in this process is called Gaussian
Elimination with Backward Substitution.
The general Gaussian elimination procedure applied to the linear system AX = B
is handled in a similar manner. First form the augmented matrix [A | B]:

    [ a_11  a_12  ...  a_1n | b_1 ]
    [ a_21  a_22  ...  a_2n | b_2 ]
    [  ...                        ]
    [ a_n1  a_n2  ...  a_nn | b_n ]

Suppose that a_11 ≠ 0. To convert the entries in the first column, below a_11, to zero, we perform
the operations (E_k - m_k1 E_1) → (E_k) for each k = 2, 3, ..., n, for an appropriate multiplier m_k1.
We first designate the diagonal element in the column, a_11, as the pivot element. The multiplier
for the kth row is defined by m_k1 = a_k1/a_11. Performing the operations

    E_2 - m_21 E_1 → E_2,  E_3 - m_31 E_1 → E_3,  ...,  E_n - m_n1 E_1 → E_n

eliminates (that is, changes to zero) the coefficient of x_1 in each of these rows, which yields

    [ a_11  a_12  ...  a_1n | b_1 ]
    [ 0     a_22  ...  a_2n | b_2 ]
    [  ...                        ]
    [ 0     a_n2  ...  a_nn | b_n ]

Although the entries in rows 2, 3, ..., n are expected to change, for simplicity of notation, we
again denote the entry in the ith row and the jth column by a_ij.

If the pivot element a_22 ≠ 0, we form the multipliers m_k2 = a_k2/a_22 and perform the operations
(E_k - m_k2 E_2) → (E_k) for each k = 3, 4, ..., n:

    E_3 - m_32 E_2 → E_3,  ...,  E_n - m_n2 E_2 → E_n,

obtaining zeros below the diagonal in the second column as well. Proceeding in this way through
the remaining columns, the new linear system is triangular, and backward substitution can be
performed. Solving the nth equation for x_n gives

    x_n = b_n / a_nn.

Solving the (n - 1)st equation for x_{n-1} and using the known value for x_n yields

    x_{n-1} = (b_{n-1} - a_{n-1,n} x_n) / a_{n-1,n-1},

and continuing this process,

    x_i = (b_i - Σ_{j=i+1..n} a_ij x_j) / a_ii,   i = n - 1, n - 2, ..., 1.

Note:
Consider the system AX = B. Let us denote the original system by A^(1) X = B^(1):

    [ a_11^(1)  a_12^(1)  ...  a_1n^(1) ] [ x_1 ]   [ b_1^(1) ]
    [ a_21^(1)  a_22^(1)  ...  a_2n^(1) ] [ x_2 ] = [ b_2^(1) ]
    [   ...                             ] [ ... ]   [  ...    ]
    [ a_n1^(1)  a_n2^(1)  ...  a_nn^(1) ] [ x_n ]   [ b_n^(1) ]

Step 1: Assume a_11^(1) ≠ 0. Then define the row multipliers

    m_i1 = a_i1^(1)/a_11^(1),   i = 2, 3, ..., n.

Then eliminate x_1 from the last (n - 1) equations by subtracting the multiple m_i1 of the first
equation from the ith equation.
As a result, the first row of A and B is left unchanged, and the remaining rows are changed.
And so, we get a new system, denoted by A^(2) X = B^(2), where the new coefficients are given by

    a_ij^(2) = a_ij^(1) - m_i1 a_1j^(1),   b_i^(2) = b_i^(1) - m_i1 b_1^(1),   i, j = 2, 3, ..., n.

Step 2: If a_22^(2) ≠ 0, we can eliminate x_2 from the last (n - 2) equations by generating the
multipliers

    m_i2 = a_i2^(2)/a_22^(2),   i = 3, 4, ..., n.

Thus we get a new system, denoted by A^(3) X = B^(3). The coefficients are obtained by using

    a_ij^(3) = a_ij^(2) - m_i2 a_2j^(2),   b_i^(3) = b_i^(2) - m_i2 b_2^(2),   i, j = 3, 4, ..., n.

We continue to eliminate the unknowns, going on to columns 3, 4, and so on, and this is
expressed generally as:

Step k:
Assume that x_1, x_2, ..., x_{k-1} have been eliminated at the successive stages, so that A^(k) X = B^(k)
has zeros below the diagonal in its first k - 1 columns. Form the multipliers

    m_ik = a_ik^(k)/a_kk^(k),   i = k + 1, ..., n,

and use them to remove x_k from equations k + 1 through n:

    a_ij^(k+1) = a_ij^(k) - m_ik a_kj^(k),   b_i^(k+1) = b_i^(k) - m_ik b_k^(k),   i, j = k + 1, ..., n.

The earlier rows 1 through k are left undisturbed, and zeros are introduced into column k
below the diagonal element.
Continuing in this manner, after n - 1 steps we obtain the upper triangular system A^(n) X = B^(n).
Then, using the backward substitution formula

    x_n = b_n^(n)/a_nn^(n),   x_i = (b_i^(i) - Σ_{j=i+1..n} a_ij^(i) x_j)/a_ii^(i),   i = n - 1, ..., 1,

we solve for the x's.
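The elimination steps and the backward substitution formula above can be sketched directly. `gauss_eliminate` is an illustrative name, and the sketch assumes every pivot is nonzero when reached (no pivoting):

```python
def gauss_eliminate(A, b):
    """Naive Gaussian elimination with back substitution (no pivoting).
    Assumes every pivot A[k][k] is nonzero when reached."""
    n = len(b)
    A = [row[:] for row in A]             # work on copies
    b = b[:]
    for k in range(n - 1):                # elimination stage k
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]         # multiplier m_ik
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Applied to the system of Example 1, this returns (-1, 2, 0, 1), matching the hand computation.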


Example 2: Solve the linear system AX = B using Gaussian elimination.

Solution: Form the augmented matrix [A | B]. [The numerical entries of this example were not
recovered.]

Step 1: With a_11 ≠ 0, generate the multipliers m_i1 = a_i1/a_11 and eliminate x_1 from the rows
below the first.

Step 2: With a_22 ≠ 0, generate the multipliers m_i2 = a_i2/a_22 and eliminate x_2 from the
remaining rows; then solve the resulting triangular system by back substitution.
Exercises
1. Solve the linear system using Gaussian elimination

3.1.2. Gaussian Elimination with Partial Pivoting

The general rule to follow is: at each elimination stage, arrange the rows of the augmented
matrix so that the new pivot element is larger in absolute value than every element beneath it
in its column.
Example 3: Solve the following system using Gaussian elimination with partial pivoting.
[The numerical entries of this example were not recovered.]

Solution: Form the augmented matrix [A | B].

Interchange the 2nd and the 1st rows so that the pivot has the largest magnitude in its column;
the multipliers then have small magnitude.

Eliminate x_1 from the 2nd and 3rd rows by applying the corresponding row operations to the
augmented matrix; this gives the modified augmented matrix.

Apply the row operations to eliminate x_2 from the 3rd row of the augmented matrix.

Thus, by back-substitution, the required numerical solution is obtained.
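The pivoting rule amounts to a small change to plain elimination: before eliminating in column k, pick the row (from k downward) whose entry in column k is largest in absolute value, and interchange it with row k. A sketch under the same assumptions as before (the function name is illustrative):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting and back substitution."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # pick the row with the largest |entry| in column k, rows k..n-1
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]           # row interchange
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]         # |m| <= 1 by construction
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Because every multiplier has magnitude at most 1, round-off errors are not amplified by the elimination, and a zero in the a_11 position no longer stops the method.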
Exercise
Solve the following system using Gaussian elimination with partial pivoting

3.1.3. Gauss-Jordan Method

The Gauss-Jordan method consists of transforming the linear system AX = B into the
equivalent system

    IX = B*,  where I is the identity matrix of order n,  so that X = B*.

Description of the method
Consider the system AX = B, with A = [a_ij], X = (x_1, ..., x_n)^T, and B = (b_1, ..., b_n)^T.

Step 1
Assume that a_11 ≠ 0, and make a_11 = 1 by applying the elementary operation

    R_1 → R_1 / a_11.

Now, make the non-diagonal elements of the first column zero by applying

    R_i → R_i - a_i1 R_1,   i = 2, 3, ..., n.

Step 2
Assume that the new a_22 ≠ 0, and make a_22 = 1 by applying the elementary operation

    R_2 → R_2 / a_22.

Now, make the non-diagonal elements of the 2nd column zero by applying

    R_i → R_i - a_i2 R_2,   i ≠ 2.

Continue the process until the system takes the form [I | B*].

This implies that x_1 = b_1*, x_2 = b_2*, ..., x_n = b_n*.
Example 4: Solve the following system using the Gauss-Jordan method. [The numerical entries
of this example were not recovered.] The steps mirror Example 5 below: make a_11 = 1, zero the
remaining entries of column 1, make a_22 = 1, zero the remaining entries of column 2, and so on;
the solution then appears in the transformed right-hand side.

Example 5: Use the Gauss-Jordan elimination method to solve the linear system

    [  1  2  3 ] [ x_1 ]   [  3 ]
    [ -3  1  5 ] [ x_2 ] = [ -2 ]
    [  2  4 -1 ] [ x_3 ]   [ -1 ]

First form the augmented matrix [A | B]:

    M = [  1  2  3 |  3 ]
        [ -3  1  5 | -2 ]
        [  2  4 -1 | -1 ]

Then perform Gauss-Jordan elimination:

    [  1  2  3 |  3 ]     [ 1  2  3 |  3 ]     [ 1  0 -1 |  1 ]     [ 1  0  0 |  2 ]
    [ -3  1  5 | -2 ]  →  [ 0  7 14 |  7 ]  →  [ 0  1  2 |  1 ]  →  [ 0  1  0 | -1 ]
    [  2  4 -1 | -1 ]     [ 0  0 -7 | -7 ]     [ 0  0 -7 | -7 ]     [ 0  0  1 |  1 ]

Hence, the solution is

    X = (x_1, x_2, x_3)^T = (2, -1, 1)^T.
Exercises
1. Solve the following system using Gauss-Jordan Method

2. Solve the following system using Gauss-Jordan Method

3.1.4. Matrix Inversion Using Jordan Elimination

Let A be an n x n matrix with det A ≠ 0.

To find the inverse of A, use Jordan elimination to carry out the transformation

    [A | I]  →  [I | B],

where B is the inverse of A.

Example 6: Find the inverse of A by using Jordan elimination. [The numerical entries of this
example were not recovered.] The procedure: augment A with the identity matrix, make each
pivot 1, and zero the off-diagonal entries of each pivot column in turn; the right-hand block is
then A^{-1}.
50
3.1.5. LU Matrix Decomposition

Procedure:

Let AX = B be a given system of linear equations.

Step 1: Write A as A = LU, where L is lower triangular and U is upper triangular, so that

    LUX = B,  i.e.,  L(UX) = B.

Step 2: Set Y = UX, so that LY = B.

Step 3: Solve the equation LY = B for Y (forward substitution).

Step 4: Solve UX = Y for X (backward substitution). Then X is the solution of the system.

Remark: Make sure A is a non-singular matrix.
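Steps 1-4 can be sketched with a Doolittle-style factorization (unit diagonal in L). The names are illustrative, and no pivoting is performed:

```python
def lu_decompose(A):
    """Doolittle factorization A = LU with unit diagonal in L (no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Forward solve LY = B, then back solve UX = Y (Steps 3-4)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Once A is factorized, systems with new right-hand sides B reuse the same L and U, which is the practical advantage of this method over repeating the full elimination.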

Example 7: Solve the following system using LU decomposition. [The numerical entries of this
example were not recovered.]

Solution: First check that det A ≠ 0, so A is non-singular. Then factorize A as A = LU, solve
LY = B for Y by forward substitution, and finally solve UX = Y for X by backward substitution.
Exercise

Solve the following tri-diagonal system by LU decomposition. [The system's entries were not
recovered.]
3.1.6. Tri-Diagonal Matrix Method

Definition: Let A = (a_ij) be a square matrix of order n such that a_ij = 0 whenever |i - j| > 1;
then A is called a tri-diagonal matrix. That is, only the main diagonal and the diagonals
immediately above and below it may contain nonzero entries.

Exercise: Solve a tri-diagonal system using the LU decomposition method. [The system's
entries were not recovered.]

Hint: A = LU for a tri-diagonal matrix has a banded form: L is lower bi-diagonal and U is
upper bi-diagonal.

3.2. Indirect Methods for solving system of Linear Equations

An iterative method is a mathematical procedure that generates a sequence of improving
approximate solutions for a class of problems. A specific implementation of an iterative method,
including its termination criteria, is the algorithm of the iterative method. An iterative method is
said to be convergent if the corresponding sequence converges for some given initial
approximations.

Consider the system of linear equations of the form

    AX = B     (3.1)

which can also be written as

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n     (3.2)

where the a_ij are constant coefficients and the b_i are given real constants in the
system of n linear algebraic equations in the n unknown variables x_1, x_2, ..., x_n.

In this section we shall consider two iterative methods for solving system of linear equations
namely; the Jacobi method and the Gauss Seidel method.

3.2.1 The Jacobi Iterative Method

In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solution
of a system of linear equations in which each row is dominated by its diagonal element, i.e. the
diagonal element has the largest absolute value in its row. Each diagonal element is solved for,
approximate values are substituted, and the process is then iterated until it converges.

Let us consider the general system of linear equations given by (3.1) which can be written in the
form:

The following are the assumptions of the Jacobi method:

 The system given by (3.3) has a unique solution (consistent and independent).
 The coefficient matrix A has no zeros on its main diagonal.

If any of the diagonal entries is zero, then rows or columns must be
interchanged to obtain a coefficient matrix that has non-zero entries on the main diagonal.

In the Jacobi method, each equation of the system is solved for the component of the solution
vector associated with the diagonal element, that is xi, as follows:

Equation (3.4) also can be written as

( ∑ )

Setting the initial estimates ,

Then equation (3.4) becomes

( )

( )

( )
}

Equation (3.5) is called the first approximation. In the same way the second
approximation is obtained by substituting the first approximation (x-values) into
the right hand side of (3.5), and then we have

( )

( )

( )
}

By repeated iterations, a sequence of approximations that often converges to the actual solution
is formed.

In general, the Jacobi iterative method is given by

( ∑ )

The summation is taken over all j's except j = i. The initial estimate or vector can be chosen
to be zero unless we have a better a priori estimate for the true solution. This procedure is
repeated until some convergence criterion is satisfied.

Remark: The Jacobi method is said to be convergent if the matrix A is strictly diagonally
dominant, i.e.

|aii| > Σj≠i |aij| , for each i = 1, 2, ..., n.
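The general Jacobi iteration above can be written directly in code. The sketch below is our own illustration (names and tolerances are assumptions): every component of the new vector is computed from the previous vector only, and iteration stops when two successive approximations agree to the given tolerance.

```python
def jacobi(A, b, x0=None, tol=1e-6, max_iter=100):
    """Jacobi iteration: every component of the new vector uses only the previous vector."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        # x_new[i] = (b[i] - sum of off-diagonal terms) / a[i][i]
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # Stop when two successive approximations agree to the tolerance.
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system with exact solution [2, 1].
sol = jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0])
```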

Example: Use the Jacobi method to approximate the solution of the following system of linear
equations.

Continue the iterations until two successive approximations are identical when rounded to three
significant digits.

Solution: To begin write the system in the form

Since we don’t know the actual solution we choose the initial approximations as

So the first approximation is

Continuing this procedure, you obtain a sequence of approximations as shown in the following
table

n 0 1 2 3 4 5 6 7
0.000 -0.200 0.146 0.192 0.181 0.185 0.186 0.186
0.000 0.222 0.203 0.328 0.332 0.329 0.331 0.331
0.000 -0.429 -0.517 0.416 -0.421 -0.424 -0.423 -0.423

Because the last two columns are identical, you can conclude that to three significant digits the
solution is

Exercise: Solve the following system of linear equations by using Jacobi iterative method
correct to three decimal points.

a) b)

3.2.2. The Gauss Seidel Iterative Method

In numerical linear algebra the Gauss Seidel method is an iterative method used for solving a
system of linear equations. It is named after the German mathematicians Carl
Friedrich Gauss (1777-1855) and Philipp Ludwig von Seidel (1821-1896). This method can be
applied to any matrix with non-zero elements on the diagonal, but convergence is only guaranteed
if the matrix is either strictly diagonally dominant, or symmetric and positive definite.

The Gauss Seidel method for solving a set of linear equations can be thought of as just an
extension of the Jacobi method. Start out using an initial value of zero for each of the parameters.
Then solve for x1 as in the Jacobi method. When solving for x2, insert the just computed
value of x1. In other words, for each calculation the most current estimate of the parameter is
used.

In the Gauss Seidel method each equation of the system is solved for the component of the
solution vector associated with the diagonal element that is , as follows:

}

Equation (3.7) also can be written as

( ∑ )

Setting the initial estimates ,

Then equation (3.7) becomes

( )

( )

( )
}

Equation (3.8) is called the first approximation. In the same way the second approximation is
obtained by substituting the first approximation (x-values) into the right hand side of (3.8), and
then we have

( )

( )

( )
}

By repeated iterations, a sequence of approximations that often converges to the actual solution
is formed.

In general, the Gauss Seidel iterative method is given by

( ∑ ∑ )

The summation is taken over all j's except j = i. The initial estimate or vector can be chosen
to be zero unless we have a better a priori estimate for the true solution. This procedure is
repeated until some convergence criterion is satisfied. The Gauss Seidel method generally
converges faster than the Jacobi method.

Remark: The Gauss Seidel method is said to be convergent if the matrix A is strictly diagonally
dominant, i.e.

|aii| > Σj≠i |aij| , for each i = 1, 2, ..., n.
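The Gauss Seidel sweep differs from the Jacobi one in a single detail: each component is overwritten immediately and reused within the same sweep. The sketch below is our own illustration (names and tolerances are assumptions, not from the text).

```python
def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    """Gauss Seidel iteration: each component is updated immediately and reused
    within the same sweep, which typically converges faster than Jacobi."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            new_xi = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(new_xi - x[i]))
            x[i] = new_xi  # the fresh value is used for the remaining rows
        if diff < tol:
            break
    return x

# Same strictly diagonally dominant system as before; exact solution [2, 1].
sol = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0])
```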

Example: Use the Gauss Seidel method to approximate the solution of the following system of
linear equations.

Continue the iterations until two successive approximations are identical when rounded to three
significant digits.

Solution: To begin write the system in the form

Since we don’t know the actual solution we choose the initial approximations as

So the first approximation for is

Now that you have a new value for x1, use it to compute a new value for x2. That is,

Similarly, use x1 and x2 to compute a new value for x3. That is,

So the first approximation is , and .

Continuing this procedure, you obtain a sequence of approximations as shown in the following
table

n 0 1 2 3 4 5
0.000 -0.200 0.167 0.191 0.186 0.186
0.000 0.156 0.334 0.333 0.331 0.331
0.000 -0.508 -0.429 -0.422 -0.423 -0.423

Note that after only five iterations of the Gauss Seidel method, you achieve the same accuracy as
was obtained with seven iterations of the Jacobi method in the previous example.

Exercise: Solve the following system of linear equations by using Gauss Seidel iterative method
correct to three decimal points.

a) b)

UNIT FOUR

FINITE DIFFERENCES

4.1. Forward Difference Operator

Let y = f(x) be any function given by the values y0, y1, ..., yn which it takes for the
equidistant values x0, x1, ..., xn of the independent variable x. Then y1 − y0, y2 − y1, ...,
yn − yn−1 are called the first differences of the function y.
They are denoted by Δy0, Δy1, ... etc.
We have

Δy0 = y1 − y0, Δy1 = y2 − y1, ..., Δyn−1 = yn − yn−1.

The symbol Δ is called the difference operator. The differences of the first differences, denoted
by Δ²y0, Δ²y1, ..., are called second differences, where

Δ²y0 = Δy1 − Δy0, Δ²y1 = Δy2 − Δy1, ...

Δ² is called the second difference operator.

4.1.1. Difference Table
It is a convenient method for displaying the successive differences of a function. The following
table is an example to show how the differences are formed

X Y

The above table is called a diagonal difference table. The first term in the table, y0, is called
the leading term.

The differences Δy0, Δ²y0, Δ³y0, ..., are called the leading differences. The differences Δⁿy with
a fixed subscript are called forward differences. In forming such a difference table care must be
taken to maintain the correct sign.
Example: Construct a forward difference table for the following data

X 0 10 20 30

Y 0 0.174 0.347 0.518

Solution

X       Y        ΔY       Δ²Y      Δ³Y

0       0
                 0.174
10      0.174             -0.001
                 0.173             -0.001
20      0.347             -0.002
                 0.171
30      0.518
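Each column of such a table is just the first differences of the column before it, so the whole table can be generated programmatically. A small sketch of our own (the function name is an assumption):

```python
def forward_difference_table(y):
    """Return the columns [y, Δy, Δ²y, ...] of a forward difference table."""
    table = [list(y)]
    # Each new column holds the first differences of the previous one,
    # so it is one entry shorter; stop when a single entry remains.
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

cols = forward_difference_table([0, 0.174, 0.347, 0.518])
```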

Exercise

1. Construct a difference table for y = f(x) = x³ + 2x + 1 for x = 1, 2, 3, 4, 5


2. By constructing a difference table and taking the second order differences as constant find
the sixth term of the series 8, 12, 19, 29, 42.

4.1.2. The Shift Operator E

Let y = f(x) be a function of x, and let x, x + h, x + 2h, etc., be the consecutive values

of x. Then the operator E is defined as

E f(x) = f(x + h).

E is called the shift operator. It is also called the displacement operator.

E² f(x) means the operator E is applied twice on f(x), i.e.,

E² f(x) = E[E f(x)] = E f(x + h) = f(x + 2h).

Similarly Eⁿ f(x) = f(x + nh) and E⁻ⁿ f(x) = f(x − nh).

The operator E has the following properties:

1. E(f1 (x) + f2 (x) + … + fn (x)) = Ef 1(x) + Ef2 (x) + … + Efn (x)

2. E(cf(x)) = cEf(x) (where c is constant)

3. Eᵐ(Eⁿ f(x)) = Eⁿ(Eᵐ f(x)) = Eᵐ⁺ⁿ f(x), where m, n are positive integers

4. If n is a positive integer, Eⁿ[E⁻ⁿ f(x)] = f(x)

Alternative notation: If y0 , y1 , y2 , …, yn are consecutive values of the function


corresponding to equally spaced values x0 , x1 , x2 , …, xn of x then in alternative notation

Ey0 = y1, Ey1 = y2, ..., Eⁿy0 = yn

Relation between the Operator E and ∆

From the definition of Δ, we know that

Δf(x) = f(x + h) − f(x),

where h is the interval of differencing. Using the operator E we can write

Δf(x) = E f(x) − f(x) = (E − 1) f(x).

The above relation can be expressed as the identity Δ = E − 1, that is, E = 1 + Δ.

Example: Show that

Proof

Example: Evaluate (Δ²/E) x³.

Solution:

Let h be the interval of differencing. Then

(Δ²/E) x³ = (E − 1)² E⁻¹ x³

= (E − 2 + E⁻¹) x³

= (x + h)³ − 2x³ + (x − h)³

= 6xh².

4.2. Backward Differences

Let y = f(x) be a function given by the values y0, y1, ..., yn which it takes for the equally spaced

values x0, x1, ..., xn of the independent variable x. Then y1 − y0, y2 − y1, ..., yn − yn−1 are called
the first backward differences of y. They are denoted by ∇y1, ∇y2, ..., ∇yn,
respectively. Thus we have

y1 − y0 = ∇y1, y2 − y1 = ∇y2, ..., yn − yn−1 = ∇yn,

where ∇ is called the backward difference operator.

X Y

Note: In the above table the differences ∇ⁿy with a fixed subscript i lie along a diagonal
sloping upward.

Alternative notation: Let the function y = f(x) be given at equal spacings of the independent
variable x at x = x0, x0 + h, x0 + 2h, ...; then we define

∇f(x) = f(x) − f(x − h),

where ∇ is called the backward difference operator and h is called the interval of differencing.

In general we can define

∇ⁿ f(x) = ∇ⁿ⁻¹ f(x) − ∇ⁿ⁻¹ f(x − h).

We observe that

∇f(x + h) = f(x + h) − f(x) = Δf(x).

Similarly we get

∇² f(x + 2h) = ∇(∇f(x + 2h))

= ∇(Δf(x + h))

= Δ(Δf(x)) = Δ² f(x)

....

∇ⁿ f(x + nh) = Δⁿ f(x).

Relation between E and ∇ :

∇f(x) = f(x) − f(x − h) = f(x) − E⁻¹ f(x)

∇ = 1 − E⁻¹

or ∇ = (E − 1)/E.

Example : Prove the following

(a) (1 + Δ)(1 − ∇) = 1

(b) ∆∇ = ∆ − ∇

(c) ∇ = E −1∆.
Solution

(a) (1 + Δ)(1 − ∇) f(x) = E(E⁻¹ f(x))

= E f(x − h) = f(x) = 1·f(x), hence (1 + Δ)(1 − ∇) = 1.

(b) Δ∇ f(x) = (E − 1)(1 − E⁻¹) f(x)

= (E − 1)[f(x) − f(x − h)]

= E f(x) − f(x) − E f(x − h) + f(x − h)

= f(x + h) − f(x) − f(x) + f(x − h)

= [(E − 1) − (1 − E⁻¹)] f(x)

= (Δ − ∇) f(x)

Δ∇ f(x) = (Δ − ∇) f(x)

Δ∇ = Δ − ∇.

(c) ∇ f(x) = (1 − E⁻¹) f(x) = f(x) − f(x − h)

and E⁻¹ Δ f(x) = E⁻¹[f(x + h) − f(x)]

= f(x) − f(x − h) = ∇ f(x)

∇ = E⁻¹ Δ

4.3. Central Differences

The central difference operator δ is defined by the relation

δf(x) = f(x + h/2) − f(x − h/2), that is, δy1/2 = y1 − y0, δy3/2 = y2 − y1, ...

Similarly, higher-order central differences can be defined. With the values of x and y as in the
preceding two tables a central difference table can be formed.

X y

It is clear from the three tables that in a definite numerical case the same numbers occur in the
same positions whether we use forward, backward or central differences.

Thus we obtain

Exercise

1. Find the forward difference table corresponding to the data points (1, 3), (2, 5), (3, 7), and
(4, 10).
2. Find the forward table corresponding to the data points (1, 3), (2, 5), (3, 8), and (4, 10).
3. Find the backward difference table corresponding to the data points (1, 3), (3, 5), (5, 7),
and (7, 10).
4. Find the backward difference table corresponding to the data points (1, 3), (3, 5), (5, 8),
and (7, 10).

UNIT FIVE

5. INTERPOLATION
5.1. Introduction

In this chapter, we discuss the problem of approximating a given function by polynomials. There
are two main uses of these approximating polynomials. The first use is to reconstruct the
function f (x ) when it is not given explicitly and only values of f (x ) are given at a set of distinct
points called nodes or tabular points. The second use is to perform the required operations which
were intended for f (x ) , like determination of roots, differentiation and integration etc. can be
carried out using the approximating polynomial p (x) . The approximating polynomial p (x) can
be used to predict the value of f (x ) at a non-tabular point. The deviation of p (x) from f (x ) ,
that is f (x ) – p (x) , is called the error of approximation.
Let f(x) be a continuous function defined on some interval [a, b], and be prescribed at
n + 1 distinct tabular points x0, x1, x2, ..., xn such that a ≤ x0 < x1 < x2 < ... < xn ≤ b. The

distinct tabular points x0, x1, x2, ..., xn may be non-equispaced or equispaced, that is

xi − xi−1 = h, for i = 1, 2, 3, ..., n. The problem of polynomial approximation is to find a

polynomial pn(x), of degree ≤ n, which fits the given data exactly, that is,

pn(xi) = f(xi), i = 0, 1, 2, ..., n.        (5.1)

The polynomial pn(x) is called the interpolating polynomial. The conditions given in
(5.1) are called the interpolating conditions. The interpolating polynomial fitting a given data is
unique. We may express it in various forms, but it is otherwise the same polynomial.

5.2. Interpolation with Evenly Spaced Points

5.2.1 Newton’s Forward Difference Interpolation Formula

Given the set of n + 1 values (x0, y0), (x1, y1), (x2, y2), ..., (xn, yn) of x and y, it is required

to find yn(x), a polynomial of nth degree, such that y and yn(x) agree at the tabulated points.

Let the values of x be equidistant, that is,

xi − xi−1 = h, for i = 1, 2, 3, ..., n.

Therefore x1 = x0 + h, x2 = x0 + 2h, etc., and in general

xi = x0 + ih, for i = 1, 2, 3, ..., n.

Since yn(x) is a polynomial of nth degree, it may be written as

yn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + a3(x − x0)(x − x1)(x − x2) + ...
+ an(x − x0)(x − x1)(x − x2)...(x − xn−1).        (5.2)

The n + 1 unknowns a0, a1, a2, ..., an can be found as follows.

Put x = x0 in (5.2); we obtain

yn(x0) = y0 = a0, i.e. a0 = y0 (since the other terms in (5.2) vanish).

Again put x = x1 in (5.2); we obtain

yn(x1) = y1 = a0 + a1(x1 − x0) = y0 + a1(x1 − x0) (since the other terms in (5.2) vanish)

y1 − y0 = a1(x1 − x0); then solving for a1 we obtain

a1 = (y1 − y0)/(x1 − x0) = Δy0/h.

Similarly, putting x = xi, for i = 2, 3, ..., n, in (5.2) we obtain

a2 = Δ²y0/(2!h²), a3 = Δ³y0/(3!h³), ..., an = Δⁿy0/(n!hⁿ).

Now substituting a0 = y0, a1 = Δy0/h, a2 = Δ²y0/(2!h²), a3 = Δ³y0/(3!h³), ..., an = Δⁿy0/(n!hⁿ)
in (5.2), we obtain

yn(x) = y0 + (Δy0/h)(x − x0) + (Δ²y0/(2!h²))(x − x0)(x − x1) + (Δ³y0/(3!h³))(x − x0)(x − x1)(x − x2)
+ ... + (Δⁿy0/(n!hⁿ))(x − x0)(x − x1)(x − x2)...(x − xn−1).        (5.3)

Putting x = x0 + ph in (5.3), we get

yn(x) = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + ...
+ [p(p − 1)(p − 2)...(p − n + 1)/n!]Δⁿy0.        (5.4)

This is Newton's forward difference interpolation formula and is useful for interpolating near the
beginning of a set of tabular values.
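Formula (5.4) translates directly into code. The sketch below is our own illustration (the function name and test data are assumptions, not from the text): it first builds the leading differences Δᵏy0 from the ordinates, then accumulates the series in p term by term.

```python
def newton_forward(x0, h, y, x):
    """Evaluate Newton's forward difference interpolation polynomial at x,
    given equally spaced nodes starting at x0 with spacing h and ordinates y."""
    # Leading differences Δ^k y0, taken from the top of a difference table.
    diffs, col = [y[0]], list(y)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    p = (x - x0) / h
    total, term = y[0], 1.0
    for k in range(1, len(diffs)):
        term *= (p - (k - 1)) / k   # builds p(p-1)...(p-k+1)/k!
        total += term * diffs[k]
    return total

# Cubic through (0,1), (1,2), (2,1), (3,10), evaluated between the nodes.
val = newton_forward(0.0, 1.0, [1.0, 2.0, 1.0, 10.0], 1.5)  # → 1.0
```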
5.2.2 Newton’s Backward Difference Interpolation Formula
Given the set of n + 1 values (x0, y0), (x1, y1), (x2, y2), ..., (xn, yn) of x and y, it is

required to find yn(x), a polynomial of nth degree, such that y and yn(x) agree at the tabulated

points. Let the values of x be equidistant, that is,

xi − xi−1 = h, for i = 1, 2, 3, ..., n.

Therefore x1 = x0 + h, x2 = x0 + 2h, etc., and in general

xi = x0 + ih, for i = 1, 2, 3, ..., n.

Instead of assuming yn(x) as in (5.2), if we choose it in the form

yn(x) = a0 + a1(x − xn) + a2(x − xn)(x − xn−1) + a3(x − xn)(x − xn−1)(x − xn−2) + ...
+ an(x − xn)(x − xn−1)(x − xn−2)...(x − x1),        (5.5)

then the n + 1 unknowns a0, a1, a2, ..., an can be found as follows.

Put x = xn in (5.5); we obtain

yn(xn) = yn = a0, i.e. a0 = yn (since the other terms in (5.5) vanish).

Again put x = xn−1 in (5.5); we obtain

yn(xn−1) = yn−1 = a0 + a1(xn−1 − xn) = yn + a1(xn−1 − xn)

yn−1 − yn = a1(xn−1 − xn); then solving for a1 we obtain

a1 = (yn − yn−1)/(xn − xn−1) = ∇yn/h.

Similarly, putting x = xn−i, for i = 2, 3, ..., n, in (5.5) we obtain

a2 = ∇²yn/(2!h²), a3 = ∇³yn/(3!h³), ..., an = ∇ⁿyn/(n!hⁿ).

Now substituting a0 = yn, a1 = ∇yn/h, a2 = ∇²yn/(2!h²), a3 = ∇³yn/(3!h³), ..., an = ∇ⁿyn/(n!hⁿ)
in (5.5), we obtain

yn(x) = yn + (∇yn/h)(x − xn) + (∇²yn/(2!h²))(x − xn)(x − xn−1) + (∇³yn/(3!h³))(x − xn)(x − xn−1)(x − xn−2)
+ ... + (∇ⁿyn/(n!hⁿ))(x − xn)(x − xn−1)(x − xn−2)...(x − x1).        (5.6)

Putting x = xn + ph in (5.6), we get

yn(x) = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn + ...
+ [p(p + 1)(p + 2)...(p + n − 1)/n!]∇ⁿyn.        (5.7)

This is Newton's backward difference interpolation formula and is useful for interpolating near the
end of the tabular values.

Example 5.1: Using Newton's forward difference interpolation formula, find the form of the
function y(x) from the following table

x      0    1    2    3
f(x)   1    2    1    10

Solution: We have the following forward difference table for the data.
The forward differences can be written in a tabular form as in Table 5.1.

x    f(x)    Δ      Δ²     Δ³

0    1
             1
1    2              -2
             -1              12
2    1              10
             9
3    10

Table 5.1. Forward differences.

Since n = 3, the cubic Newton's forward difference interpolation polynomial becomes:

y3(x) = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0,

where p = (x − x0)/h = (x − 0)/1 = x.

y3(x) = 1 + x(1) + [x(x − 1)/2!](−2) + [x(x − 1)(x − 2)/3!](12)

y3(x) = 1 + x − (x² − x) + 2x(x − 1)(x − 2)

y3(x) = 1 + x − (x² − x) + 2x(x² − 3x + 2)

y3(x) = 2x³ − 7x² + 6x + 1.

Example 5.2: Find the interpolating polynomial corresponding to the data (1, 5), (2, 9), (3, 14) and
(4, 21), using Newton's backward difference interpolation polynomial.
Solution: We have the following backward difference table for the data.
The backward differences can be written in a tabular form as in Table 5.2.

x    f(x)    ∇      ∇²     ∇³

1    5
             4
2    9              1
             5               1
3    14             2
             7
4    21

Table 5.2. Backward differences.

Since n = 3, the cubic Newton's backward difference interpolation polynomial becomes:

y3(x) = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn,

where p = (x − x3)/h = (x − 4)/1 = x − 4.

y3(x) = 21 + (x − 4)(7) + [(x − 4)(x − 4 + 1)/2](2) + [(x − 4)(x − 4 + 1)(x − 4 + 2)/6](1)

y3(x) = 21 + (x − 4)(7) + [(x − 4)(x − 3)/2](2) + [(x − 4)(x − 3)(x − 2)/6](1)

y3(x) = x³/6 − x²/2 + 26x/6 + 1.
Example 5.3: The table below gives the values of tan x for 0.10 ≤ x ≤ 0.30:
x 0.10 0.15 0.20 0.25 0.30
f (x ) 0.1003 0.1511 0.2027 0.2553 0.3093
Find:
i ) tan 0.12 ii ) tan 0.26
Solution: We have the following forward difference table for the data.
The forward differences can be written in a tabular form as in Table 5.3.

x f (x )  2 3 4

0.10 0.1003

0.0508

0.15 0.1511 0.0008

0.0516 0.0002

0.20 0.2027 0.0010 0.0002

0.0526 0.0004

0.25 0.2553 0.0014

0.0540

0.30 0.3093

Table 5.3. Forward differences.

i) To find tan 0.12 we use Newton's forward difference interpolation polynomial.

We have x = 0.12, h = xi+1 − xi = 0.05 and p = (x − x0)/h = (0.12 − 0.10)/0.05 = 0.4.

Hence formula (5.4) gives

y4(x) = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + [p(p − 1)(p − 2)(p − 3)/4!]Δ⁴y0.

y4(0.12) = tan 0.12 = 0.1003 + 0.4(0.0508) + [0.4(0.4 − 1)/2](0.0008) + [0.4(0.4 − 1)(0.4 − 2)/6](0.0002)
+ [0.4(0.4 − 1)(0.4 − 2)(0.4 − 3)/24](0.0002)

= 0.1205.

ii) To find tan 0.26 we use Newton's backward difference interpolation polynomial.

We have x = 0.26, h = xi+1 − xi = 0.05 and p = (x − x4)/h = (0.26 − 0.30)/0.05 = −0.8.

Hence formula (5.7) gives

y4(x) = y4 + p∇y4 + [p(p + 1)/2!]∇²y4 + [p(p + 1)(p + 2)/3!]∇³y4 + [p(p + 1)(p + 2)(p + 3)/4!]∇⁴y4.

y4(0.26) = tan 0.26 = 0.3093 + (−0.8)(0.0540) + [−0.8(−0.8 + 1)/2](0.0014)
+ [−0.8(−0.8 + 1)(−0.8 + 2)/6](0.0004) + [−0.8(−0.8 + 1)(−0.8 + 2)(−0.8 + 3)/24](0.0002)

= 0.2662.
Example 5.4: Using Newton's forward difference formula, find the sum

Sn = 1³ + 2³ + 3³ + ... + n³.

Solution:

We have Sn+1 = 1³ + 2³ + 3³ + ... + n³ + (n + 1)³.

Hence Sn+1 − Sn = (n + 1)³,

or ΔSn = (n + 1)³.

It follows that Δ²Sn = ΔSn+1 − ΔSn = (n + 2)³ − (n + 1)³ = 3n² + 9n + 7.

Δ³Sn = 3(n + 1)² + 9(n + 1) + 7 − (3n² + 9n + 7) = 6n + 12.

Δ⁴Sn = 6(n + 1) + 12 − (6n + 12) = 6.

Since Δ⁵Sn = Δ⁶Sn = ... = 0, Sn is a fourth degree polynomial in n.

Further, S1 = 1, ΔS1 = 8, Δ²S1 = 19, Δ³S1 = 18, Δ⁴S1 = 6.

Hence formula (5.4) gives

Sn = 1 + (n − 1)(8) + [(n − 1)(n − 2)/2](19) + [(n − 1)(n − 2)(n − 3)/6](18)
+ [(n − 1)(n − 2)(n − 3)(n − 4)/24](6)

= (1/4)n⁴ + (1/2)n³ + (1/4)n²

= [n(n + 1)/2]².
Exercise:

1. Find f (x ) as a polynomial in x for the following data by Newton’s forward difference

Formula

x 3 4 5 6 7 8 9
f (x ) 13 21 31 43 57 73 91
Hence, interpolate at x = 3.5.

3. Given

x 0.20 0.22 0.24 0.26 0.28 0.3


f (x ) 1.6596 1.6698 1.6804 1.6912 1.7024 1.7139
Using Newton’s difference interpolation formula, find f (0.23) and f (0.29) .

5.3 Interpolation with Unevenly Spaced Points

5.3.1 Lagrange Interpolation

Let the data

x      x0      x1      x2      …      xn

f(x)   f(x0)   f(x1)   f(x2)   …      f(xn)

be given at distinct unevenly spaced points or non-uniform points x0, x1, x2, ..., xn. This data

may also be given at evenly spaced points. For this data, we can fit a unique polynomial of
degree ≤ n. Since the interpolating polynomial must use all the ordinates f(x0), f(x1), ..., f(xn),
it can be written as a linear combination of these ordinates. That is,
we can write the polynomial as

pn(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1) + ... + ℓn(x) f(xn)

     = ℓ0(x) f0 + ℓ1(x) f1 + ... + ℓn(x) fn

pn(x) = Σi=0..n ℓi(x) fi,        (5.8)

where f(xi) = fi and ℓi(x), i = 0, 1, 2, ..., n, are polynomials of degree n. This polynomial fits the
data given in (5.1) exactly.
At x = x0, we get

f(x0) = pn(x0) = ℓ0(x0) f(x0) + ℓ1(x0) f(x1) + ... + ℓn(x0) f(xn)

This equation is satisfied only when ℓ0(x0) = 1 and ℓi(x0) = 0, i ≠ 0.

At a general point x = xi, we get

f(xi) = pn(xi) = ℓ0(xi) f(x0) + ℓ1(xi) f(x1) + ... + ℓi(xi) f(xi) + ... + ℓn(xi) f(xn)

This equation is satisfied only when ℓi(xi) = 1 and ℓj(xi) = 0, j ≠ i.

Therefore ℓi(x), which are polynomials of degree n, satisfy the conditions

ℓi(xj) = { 1 if i = j; 0 if i ≠ j.        (5.9)

Since ℓi(x) = 0 at x = x0, x1, x2, ..., xi−1, xi+1, ..., xn, we know that

(x − x0), (x − x1), (x − x2), ..., (x − xi−1), (x − xi+1), ..., (x − xn)

are factors of ℓi(x). The product of these factors is a polynomial of degree n.

Therefore, we can write

ℓi(x) = c(x − x0)(x − x1)(x − x2)...(x − xi−1)(x − xi+1)...(x − xn),

where c is a constant.
Now, since ℓi(xi) = 1, we get

ℓi(xi) = 1 = c(xi − x0)(xi − x1)(xi − x2)...(xi − xi−1)(xi − xi+1)...(xi − xn).

Hence, c = 1/[(xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn)].

Therefore, ℓi(x) = [(x − x0)(x − x1)...(x − xi−1)(x − xi+1)...(x − xn)]
               / [(xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn)].        (5.10)

Note that the denominator on the right hand side of ℓi(x) is obtained by setting x = xi in
the numerator. The polynomial given in (5.8), where the ℓi(x) are defined by (5.10), i.e.

pn(x) = Σi=0..n ℓi(x) fi,        (5.11)

is called the Lagrange interpolating polynomial and the ℓi(x) are called the Lagrange fundamental
polynomials.
We can write the Lagrange fundamental polynomials ℓi(x) in a simple notation.

Denote w(x) = (x − x0)(x − x1)(x − x2)...(x − xn),

which is the product of all factors. Differentiating w(x) with respect to x and substituting x = xi,
we get

w'(xi) = (xi − x0)(xi − x1)(xi − x2)...(xi − xi−1)(xi − xi+1)...(xi − xn),

since all other terms vanish. Therefore, we can also write ℓi(x) as

ℓi(x) = w(x)/[(x − xi) w'(xi)],

so that (5.11) becomes

pn(x) = Σi=0..n [w(x)/((x − xi) w'(xi))] fi.        (5.12)

Let us derive the linear and quadratic interpolating polynomials.


Linear Interpolation
For n = 1, we have the data

x f (x )

x0 f ( x0 )

x1 f ( x1 )

The Lagrange fundamental polynomials are given by

ℓ0(x) = (x − x1)/(x0 − x1),    ℓ1(x) = (x − x0)/(x1 − x0).

The Lagrange linear interpolation polynomial is given by

p1(x) = Σi=0..1 ℓi(x) fi

p1(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1)

     = [(x − x1)/(x0 − x1)] f(x0) + [(x − x0)/(x1 − x0)] f(x1).

Quadratic Interpolation
For n = 2, we have the data
x f (x )

x0 f ( x0 )

x1 f ( x1 )

x2 f ( x2 )

The Lagrange fundamental polynomials are given by

ℓ0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)],  ℓ1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)],
ℓ2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)].

The Lagrange quadratic interpolation polynomial is given by

p2(x) = Σi=0..2 ℓi(x) fi

p2(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1) + ℓ2(x) f(x2)

     = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] f(x0) + (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] f(x1)
     + (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)] f(x2).

Example 5. 5: Determine the linear Lagrange interpolating polynomial that passes through the
points (2,4) and (5, 1).
Solution: In this case we have

ℓ0(x) = (x − x1)/(x0 − x1) = (x − 5)/(2 − 5) = −(1/3)(x − 5),
ℓ1(x) = (x − x0)/(x1 − x0) = (x − 2)/(5 − 2) = (1/3)(x − 2).

The Lagrange linear interpolation polynomial is given by

p1(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1)

     = −(1/3)(x − 5)(4) + (1/3)(x − 2)(1)

     = −x + 6.

Example 5.6: Given that f(0) = 1, f(1) = 3, f(3) = 55, find the unique polynomial of degree
2 or less which fits the given data.
Solution: We have x0 = 0, f0 = 1, x1 = 1, f1 = 3, x2 = 3, f2 = 55. The Lagrange fundamental
polynomials are given by

ℓ0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] = (x − 1)(x − 3)/[(−1)(−3)] = (1/3)(x² − 4x + 3).

ℓ1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] = (x − 0)(x − 3)/[(1)(−2)] = (1/2)(3x − x²).

ℓ2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)] = (x − 0)(x − 1)/[(3)(2)] = (1/6)(x² − x).

Hence, the Lagrange quadratic polynomial is given by

p2(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1) + ℓ2(x) f(x2)

     = (1/3)(x² − 4x + 3)(1) + (1/2)(3x − x²)(3) + (1/6)(x² − x)(55)

     = 8x² − 6x + 1.
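A result such as that of Example 5.6 is easy to check numerically. The sketch below is our own illustration (the function name is an assumption): it evaluates pn(x) = Σ ℓi(x) fi directly from the data.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)  # fundamental polynomial ℓ_i(x)
        total += li * yi
    return total

# Data of Example 5.6: (0, 1), (1, 3), (3, 55).
val = lagrange([0.0, 1.0, 3.0], [1.0, 3.0, 55.0], 2.0)  # → 21.0, the value of 8x² − 6x + 1 at x = 2
```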
Example 5.7: Using Lagrange interpolation formula, find the form of the function y(x) from the
following table
x 0 1 3 4

f (x ) -12 0 12 24

Solution: Since y = 0 when x = 1, it follows that (x − 1) is a factor. Let y = (x − 1)R(x).

Then R(x) = y/(x − 1). We now tabulate the values of x and R(x).

x      0    3    4

R(x)   12   6    8

Applying the Lagrange interpolation formula to the above table, we find

R(x) = [(x − 3)(x − 4)/((−3)(−4))](12) + [(x − 0)(x − 4)/((3)(−1))](6) + [(x − 0)(x − 3)/((4)(1))](8)

     = (x − 3)(x − 4) − 2x(x − 4) + 2x(x − 3)

     = x² − 5x + 12.

Hence the required polynomial approximation to y(x) is given by

y(x) = (x − 1)(x² − 5x + 12)

     = x³ − 6x² + 17x − 12.

Exercise:

1. Given the table of values

x 150 152 154 156

f ( x)  x 12.247 12.329 12.410 12.490

evaluate √ using Lagrange’s interpolation formula.

2. The following values of the function f(x) = sin x + cos x are given

x 10° 20° 30°


f (x ) 1.1585 1.2817 1.3660
Construct the quadratic Lagrange interpolating polynomial that fits the data. Hence, find


f(12°). Compare with the exact value.

3. Construct the Lagrange interpolating polynomials for the following functions

i ) f ( x)  sin(ln x) , x0 = 2.0, x1 = 2.4, x 2 = 2.6,

ii ) f ( x)  sin x  cos x , x0 = 0, x1 = 0.25, x 2 = 0.5, x3 = 1.0

4. Construct the Lagrange interpolating polynomials for the following functions


i ) f ( x)  e 2 x cos3x x0  0, x1  0.3, x2  0.6,

ii ) f ( x)  ln x x0  1, x1  1.1, x2  1.3, x3  1.4,

5.4. Numerical Differentiation

Numerical differentiation is the process of calculating the values of the derivative of a function at

some assigned values of x from a given set of values (xi, yi). To compute dy/dx, we first replace

the exact relation y = f(x) by the best interpolating polynomial and then differentiate
the latter as many times as we desire. The choice of the interpolation formula to be used will
depend on the assigned value of x at which dy/dx is desired, that means:

1. If the values of x are equi-spaced and dy/dx is required

i. Near the beginning of the table, we employ Newton's forward formula.


ii. Near the end of the table, we use Newton's backward formula.
2. If the values are not equi-spaced, we use Newton's divided difference formula to
represent the function.

Hence corresponding to each of the interpolation formulae we can derive the formula for finding
the derivative.

5.4.1 Formulae for derivatives

Consider the function y = f(x), which is tabulated for the equally spaced data points xi, that is,
xi = x0 + ih, for i = 0, 1, 2, ..., n, with their corresponding functional values yi = f(xi), for
i = 0, 1, 2, ..., n. Depending on these data points we can derive different numerical differentiation
formulae as follows.

1. Derivatives using forward difference formula

Newton's forward interpolation formula is

y = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + ..., where p = (x − x0)/h.

Differentiating both sides with respect to p, we have

dy/dp = Δy0 + [(2p − 1)/2!]Δ²y0 + [(3p² − 6p + 2)/3!]Δ³y0 + ...

Since p = (x − x0)/h, we have dp/dx = 1/h, therefore

dy/dx = (dy/dp)(dp/dx) = (1/h)[Δy0 + ((2p − 1)/2!)Δ²y0 + ((3p² − 6p + 2)/3!)Δ³y0 + ...]  .... (1)

Putting x = x0, i.e. p = 0, we obtain

(dy/dx) at x0 = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + ...]  .... (2)

Again differentiating (1) w.r.t. x, we get

d²y/dx² = (1/h²)[Δ²y0 + (p − 1)Δ³y0 + ((6p² − 18p + 11)/12)Δ⁴y0 + ...]

Putting p = 0, we obtain

(d²y/dx²) at x0 = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − ...]  .... (3)

Similarly,

(d³y/dx³) at x0 = (1/h³)[Δ³y0 − (3/2)Δ⁴y0 + ...]  .... (4)

2. Derivatives using Backward difference formula

Newton's backward interpolation formula is

y = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn + ..., where p = (x − xn)/h.

Differentiating both sides with respect to p, we have

dy/dp = ∇yn + [(2p + 1)/2!]∇²yn + [(3p² + 6p + 2)/3!]∇³yn + ...

Since p = (x − xn)/h, we have dp/dx = 1/h, therefore

dy/dx = (1/h)[∇yn + ((2p + 1)/2!)∇²yn + ((3p² + 6p + 2)/3!)∇³yn + ...]  .... (5)

Putting x = xn, i.e. p = 0, we obtain

(dy/dx) at xn = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + ...]  .... (6)

Again differentiating (5) w.r.t. x, we get

d²y/dx² = (1/h²)[∇²yn + (p + 1)∇³yn + ((6p² + 18p + 11)/12)∇⁴yn + ...]

Putting p = 0, we obtain

(d²y/dx²) at xn = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + ...]  .... (7)

Similarly,

(d³y/dx³) at xn = (1/h³)[∇³yn + (3/2)∇⁴yn + ...]  .... (8)
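Truncating the series for the derivative at x = x0 gives a simple numerical recipe: build the leading forward differences and sum them with the alternating coefficients 1, −1/2, 1/3, −1/4, .... The sketch below is our own illustration (the function name and test values are assumptions, not from the text).

```python
def derivative_at_start(y, h):
    """Approximate f'(x0) from equally spaced values y using the forward
    difference series (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - ...]."""
    diffs, col = [], list(y)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])        # leading difference Δ^k y0
    return sum((-1) ** k * d / (k + 1) for k, d in enumerate(diffs)) / h

# f(x) = x² tabulated at x = 1.0, 1.1, 1.2, 1.3; the exact derivative at 1.0 is 2.
slope = derivative_at_start([1.0, 1.21, 1.44, 1.69], 0.1)
```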

Example 5.8: Given that

1.0 1.1 1.2 1.3 1.4 1.5 1.6


7.989 8.403 8.781 9.129 9.451 9.750 10.031

Then find and at

Solution: (a) The difference table is

X Y

1.0 7.989

0.414

1.1 8.403 -0.036

0.378 0.006

1.2 8.781 -0.030 -0.002

0.348 0.004 0.002

1.3 9.129 -0.026 0.000 -0.003

0.322 0.004 -0.001

1.4 9.451 -0.023 -0.001

0.299 0.005

1.5 9.750 -0.018

0.281

1.6 10.031

We have

( ) [ ] ... (i) and

( ) * + ... (ii)

Here , , , etc. Substituting these values in


equations (i) and (ii), we get

( ) * +

( ) * +

(b) We use the above difference table and the backward difference operator ∇ instead of .

( ) [∇ ∇ ∇ ∇ ∇ ∇ ] ... (i) and

( ) *∇ ∇ ∇ ∇ ∇ + ... (ii)

Here , ,∇ ,∇ , etc. putting these values in (i) and (ii)


we get

( ) * +

( ) * +

Exercise:
1. Find and from the following table:

2. Given the following table of values of and

Find and at i) . ii)

3. The population of a certain town (as obtained from census data) is shown in the following
table:

( in thousands)
Estimate the population in the years 1966 and 1993. And also find the rate of growth of
population in 1981.

5.5. Numerical Integration

The process of evaluating a definite integral from a set of tabulated values of the
integrand f(x) is called numerical integration. This process, when applied to a function
of a single variable, is known as quadrature.
The problem of numerical integration, like that of numerical differentiation, is solved by
representing f(x) by an interpolation formula and then integrating it between the given
limits. In this way, we can derive quadrature formula for approximate integration of
function defined by a set of numerical values only.

5.5.1. Newton-Cotes quadrature formula


Let I = ∫[a, b] f(x) dx,

where f(x) takes the values y0, y1, ..., yn for x = x0, x1, ..., xn.

Let us divide the interval [a, b] into n sub-intervals of width h, so that x0 = a, x1 = x0 + h,

x2 = x0 + 2h, ..., xn = x0 + nh = b.

Putting x = x0 + ph, so that dx = h dp, in the above integral gives

I = h ∫[0, n] f(x0 + ph) dp.

Expanding f(x0 + ph) by Newton's forward interpolation formula and integrating term by term
gives us

∫[x0, x0 + nh] f(x) dx = nh[ y0 + (n/2)Δy0 + (n(2n − 3)/12)Δ²y0 + (n(n − 2)²/24)Δ³y0 + ... ]

This is known as Newton-Cotes quadrature formula. From this general formula , we deduce the
following important quadrature rules by taking

1. Trapezoidal Rule

Putting n = 1 in the equation above and taking the curve through (x0, y0) and (x1, y1) as a straight
line, i.e. a polynomial of first degree, so that differences of order higher than the first become zero,
we get

∫[x0, x1] f(x) dx = h( y0 + (1/2)Δy0 ) = (h/2)( y0 + y1 )

Similarly

∫[x1, x2] f(x) dx = (h/2)( y1 + y2 )

.....................

∫[xn−1, xn] f(x) dx = (h/2)( yn−1 + yn )

Adding these n integrals, we obtain

∫[x0, xn] f(x) dx = (h/2)[ (y0 + yn) + 2(y1 + y2 + ... + yn−1) ]

This is known as the Trapezoidal rule.

2. Simpson's one-third (1/3) rule

Putting n = 2 in the equation above and taking the curve through (x0, y0), (x1, y1) and (x2, y2) as a

parabola, i.e. a polynomial of second degree, so that differences of order higher than the second
vanish, we get

∫[x0, x2] f(x) dx = (h/3)( y0 + 4y1 + y2 )

Similarly

∫[x2, x4] f(x) dx = (h/3)( y2 + 4y3 + y4 )

.......................

∫[xn−2, xn] f(x) dx = (h/3)( yn−2 + 4yn−1 + yn ), n being even.

Adding all these integrals, we have, when n is even,

∫[x0, xn] f(x) dx = (h/3)[ (y0 + yn) + 4(y1 + y3 + ... + yn−1) + 2(y2 + y4 + ... + yn−2) ]

This is known as the Simpson's one-third rule or simply Simpson's rule and it is most commonly
used.

3. Simpson's three-eighth (3/8) rule

Putting n = 3 in the equation above and taking the curve through (x0, y0), (x1, y1), (x2, y2) and
(x3, y3) as a polynomial of third degree, so that differences of order higher than the third vanish,
we get

∫[x0, x3] f(x) dx = (3h/8)( y0 + 3y1 + 3y2 + y3 )

similarly

∫[x3, x6] f(x) dx = (3h/8)( y3 + 3y4 + 3y5 + y6 )

Adding all such expressions from x0 to xn, where n is a multiple of 3, we obtain

∫[x0, xn] f(x) dx = (3h/8)[ (y0 + yn) + 3(y1 + y2 + y4 + y5 + ... + yn−2 + yn−1) + 2(y3 + y6 + ... + yn−3) ]

which is known as Simpson's three-eighth rule.
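All three rules are weighted sums of the ordinates, so they share one code skeleton. The sketch below is our own illustration; the integrand f(x) = 1/(1 + x²) is an assumed example (not necessarily the text's), whose exact integral over [0, 1] is π/4. Simpson's 1/3 rule requires n even, and the 3/8 rule requires n to be a multiple of 3.

```python
def trapezoidal(f, a, b, n):
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    # (h/2)[(y0 + yn) + 2(y1 + ... + yn-1)]
    return h / 2 * (y[0] + y[-1] + 2 * sum(y[1:-1]))

def simpson_13(f, a, b, n):          # n must be even
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    # (h/3)[(y0 + yn) + 4(odd ordinates) + 2(even interior ordinates)]
    return h / 3 * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

def simpson_38(f, a, b, n):          # n must be a multiple of 3
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    # interior ordinates at multiples of 3 get weight 2, the rest weight 3
    inner = sum(y[i] * (2 if i % 3 == 0 else 3) for i in range(1, n))
    return 3 * h / 8 * (y[0] + y[-1] + inner)

f = lambda x: 1 / (1 + x ** 2)   # assumed integrand; ∫[0,1] f dx = π/4
```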

Example 5.9:

Evaluate ∫ by using

i) Trapezoidal rule
ii) Simpson's 1/3 rule
iii) Simpson's 3/8 rule

Solution:

Divide the interval (0,6) into six equal parts each of width h=1. The values of are

given below:

i) Trapezoidal rule

∫ [ ]

[ ]

.
ii) Simpson's 1/3 rule

∫ [ ]

[ ]

.
iii) Simpson's 3/8 rule

∫ [ ]

[ ]

Example 5.10:

The velocity v of a moped, which starts from rest, is given at fixed intervals of time t
as follows:

Estimate approximately the distance covered in 20 minutes.

Solution:

If s is the distance covered in time t, then ds/dt = v; therefore

s = ∫[0, 20] v dt, which we evaluate by Simpson's 1/3 rule.

Here , , , , , etc
.

Hence the required distance

| | .

Exercise:
1. Use trapezoidal rule to evaluate ∫ considering five sub-intervals.

2. Evaluate ∫ using

i) Trapezoidal rule taking .


ii) Simpson's 1/3 rule taking .
iii) Simpson's 3/8 rule taking .

Hence compute an approximate value of in each case.

3. Evaluate ∫ taking 7 ordinates by applying Simpson's 3/8 rule. And

deduce the value of .


4. Given that

Evaluate ∫ using
i) Trapezoidal rule
ii) Simpson's 1/3 rule
iii) Simpson's 3/8 rule
