
Bangladesh University of Engineering & Technology

Department of Electrical & Electronic Engineering


EEE 212: Numerical Technique Laboratory

Experiment 5: Numerical Differentiation

Objectives:
 Derive and implement forward, central, and high-order difference formulas.
 Analyze truncation errors and refine estimates using Richardson’s extrapolation.

Introduction:

We are familiar with the analytical method of finding the derivative of a function when the
functional relation between the dependent variable y and the independent variable x is known.
In practice, however, functions are often defined only by tabulated data, or the values of y for
specified values of x can only be found experimentally. In some cases it is also not possible to find
the derivative of a function by analytical methods. In such cases the analytical process of
differentiation breaks down and a numerical process has to be devised. The process of
calculating the derivatives of a function by means of a set of given values of that function is called
numerical differentiation. This process consists of replacing a complicated or an unknown
function by an interpolation polynomial and then differentiating this polynomial as many times as
desired.

Figure 1. Graphical depiction of (a) forward, (b) backward, and (c) centered finite-divided-difference
approximations of the first derivative.

Forward Difference Formula:


All numerical differentiation formulas are derived from the Taylor series expansion:

f(x + h) = f(x) + f′(x)h + f″(x)h²/2 + f‴(x)h³/6 + … … (1)

From (1),

f′(x) = [f(x + h) − f(x)] / h + O(h) … … (2)

Here O(h) is the truncation error, which consists of terms containing h and higher powers of
h. The formula is called a forward difference because it uses a point ahead of x.
Exercise 1. Given f(x) = eˣ, find f′(1) using h = 10⁻¹, 10⁻², …, 10⁻¹⁰. Find the error in each case
by comparing the calculated value with the exact value.
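The lab language is MATLAB, but as an illustration here is a minimal Python sketch of formula (2) applied to the f(x) = eˣ of Exercise 1 (function and variable names are my own):

```python
import math

def forward_diff(f, x, h):
    # Equation (2): f'(x) ≈ (f(x+h) - f(x)) / h, truncation error O(h)
    return (f(x + h) - f(x)) / h

# f(x) = e^x, so the exact derivative at x = 1 is e itself
approx = forward_diff(math.exp, 1.0, 1e-5)
error = abs(approx - math.e)   # roughly (e/2)*h, i.e. on the order of 1e-5
```

Repeating this for h = 10⁻¹ down to 10⁻¹⁰ shows the error first shrinking linearly with h and then growing again as floating-point round-off takes over.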
Central Difference Formula (of order O(h²)):

f(x + h) = f(x) + f′(x)h + f″(x)h²/2 + f‴(c₁)h³/6 + … … (3)

f(x − h) = f(x) − f′(x)h + f″(x)h²/2 − f‴(c₂)h³/6 + … … (4)

Using (3) and (4),

f′(x) = [f(x + h) − f(x − h)] / (2h) + O(h²) … … (5)

Here O(h²) is the truncation error, which consists of terms containing h² and higher powers
of h.
Exercise 2. Given f(x) = eˣ, find f′(1) using h = 10⁻¹, 10⁻², …, 10⁻¹⁰ and equation (5). Find
the error in each case by comparing the calculated value with the exact value.
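For comparison, a Python sketch of the central-difference estimate (5), again with f(x) = eˣ (names are my own; the lab itself uses MATLAB):

```python
import math

def central_diff(f, x, h):
    # Equation (5): f'(x) ≈ (f(x+h) - f(x-h)) / (2h), truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

h = 1e-3
central_err = abs(central_diff(math.exp, 1.0, h) - math.e)
forward_err = abs((math.exp(1.0 + h) - math.e) / h - math.e)
# For the same h, the O(h^2) central estimate beats the O(h) forward one.
```

With h = 10⁻³ the forward error is about 10⁻³ while the central error is about 10⁻⁷, matching the order analysis above.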

Central Difference Formula (of order O(h⁴)):

Using Taylor series expansion it can be shown that

f′(x) = [−f(x + 2h) + 8f(x + h) − 8f(x − h) + f(x − 2h)] / (12h) + O(h⁴) … … (6)

Here the truncation error is reduced to O(h⁴).
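A sketch of the five-point formula (6) in Python (illustrative only; the lab uses MATLAB):

```python
import math

def central_diff4(f, x, h):
    # Equation (6): fourth-order central difference, truncation error O(h^4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

# Test on f(x) = e^x at x = 1; even a modest h gives near machine accuracy
err4 = abs(central_diff4(math.exp, 1.0, 1e-2) - math.e)
```

Note how a relatively coarse h = 10⁻² already gives an error around 10⁻⁹, which the O(h²) formula would need h ≈ 10⁻⁵ to match.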

Exercise 3. Given f(x) = sin(cos(1/x)), evaluate f′(1/√2). Start with h = 1 and reduce h by a
factor of 1/10 in each step. If Dₙ₊₁ is the result in the (n+1)th step and Dₙ is the result in the nth
step, continue iterating until |Dₙ₊₁ − Dₙ| ≥ |Dₙ − Dₙ₋₁| or |Dₙ − Dₙ₋₁| < tolerance. Use
equation (6) to compute D.

Richardson’s Extrapolation:

So far, we've explored two methods to enhance derivative estimates using finite divided differences:
(1) reducing the step size, and (2) applying a higher-order formula that incorporates additional points.
A third method, known as Richardson extrapolation, improves accuracy by combining two derivative
estimates to generate a more precise result. We have seen that

f′(x) = [f(x + h) − f(x − h)] / (2h) + O(h²)
Which can be written as

f′(x) ≈ [f(x + h) − f(x − h)] / (2h) + Ch²

Or, f′(x) ≈ D₀(h) + Ch² … … (7)

If the step size is changed to 2h,

f′(x) ≈ D₀(2h) + 4Ch² … … (8)

Using (7) and (8),

f′(x) ≈ [4D₀(h) − D₀(2h)] / 3 = [−f₂ + 8f₁ − 8f₋₁ + f₋₂] / (12h) … … (9)

Equation (9) is the same as equation (6).


The method of obtaining a formula for f ′(x) of higher order from a formula of
lower order is called extrapolation. The general formula for Richardson’s extrapolation is

f′(x) = D_k(h) + O(h^(2k+2)) = [4^k · D_{k−1}(h) − D_{k−1}(2h)] / (4^k − 1) + O(h^(2k+2)) … … (10)

Algorithm for Richardson Approximation:


% Input:
% - f(x) : The input function
% - delta : Tolerance for absolute error
% - toler : Tolerance for relative error
%
% Output:
% - D      : Matrix of approximate derivatives
% - err    : Final absolute error
% - relerr : Final relative error
% - n      : Index of the best approximation
1. err ← 1
2. relerr ← 1
3. h ← 1
4. j ← 1
5. D(1,1) ← (f(x + h) − f(x − h)) / (2h)
6. While relerr > toler AND err > delta AND j < 12 do
7. h←h/2
8. D(j+1, 1) ← (f(x + h) − f(x − h)) / (2h)
9. For k from 1 to j do
10. D(j+1, k+1) ← D(j+1, k) + (D(j+1, k) − D(j, k)) / (4^k − 1)
End For
11. err ← |D(j+1, j+1) − D(j, j)|
12. relerr ← (2 × err) / (|D(j+1, j+1)| + |D(j, j)| + ε)
13. j←j+1
End While
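The algorithm above translates almost line-for-line into Python (the lab language is MATLAB; here nested lists stand in for the matrix D, a tiny constant plays the role of ε in step 12, and all names are my own):

```python
import math

def richardson(f, x, delta=1e-12, toler=1e-12):
    # Build the extrapolation table row by row, halving h each pass,
    # following steps 1-13 of the algorithm above.
    h = 1.0
    D = [[(f(x + h) - f(x - h)) / (2 * h)]]
    err = relerr = 1.0
    j = 1
    while relerr > toler and err > delta and j < 12:
        h /= 2
        row = [(f(x + h) - f(x - h)) / (2 * h)]
        for k in range(1, j + 1):
            # Step 10: D(j+1,k+1) = D(j+1,k) + (D(j+1,k) - D(j,k)) / (4^k - 1)
            row.append(row[k-1] + (row[k-1] - D[j-1][k-1]) / (4**k - 1))
        D.append(row)
        err = abs(D[j][j] - D[j-1][j-1])
        relerr = 2 * err / (abs(D[j][j]) + abs(D[j-1][j-1]) + 1e-300)
        j += 1
    return D[j-1][j-1]   # best diagonal entry

deriv = richardson(math.exp, 1.0)   # exact answer is e
```

For a smooth function such as eˣ the diagonal entries converge to the true derivative in only a handful of rows.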

Exercise 4. Given f(x) = sin(x³ − 7x² + 6x + 8), evaluate f′((1 − √5)/2). Use Richardson's
extrapolation. The approximation should be accurate to 13 decimal places.

Revised by,
Md. Samrat
(April 2025)

Bangladesh University of Engineering & Technology
Department of Electrical & Electronic Engineering
EEE 212: Numerical Technique Laboratory

Experiment 7: Numerical Integration

Objectives:
 Implement composite trapezoidal, Simpson’s 1/3, and Simpson’s 3/8 rules.
 Develop adaptive integration schemes based on error tolerance.
 Compare numerical results with exact integrals using both tabulated data and functions.

Introduction:
There are two cases in which engineers and scientists may require the help of numerical
integration techniques: (1) where experimental data are obtained whose integral may be required,
and (2) where a closed-form formula for integrating a function using calculus is difficult to obtain
or so complicated as to be almost useless. For example, consider the integral

Φ(x) = ∫[0, x] t³ / (eᵗ − 1) dt

Since there is no analytic expression for Φ(x), numerical integration techniques must be
used to obtain approximate values of Φ(x).
Formulae for numerical integration, called quadrature formulae, are based on fitting a polynomial
through a specified set of points (experimental data or function values of the complicated
function) and integrating this approximating function, i.e. finding the area under the fitted
polynomial. Any one of the interpolation polynomials studied earlier may be used.

Some of the Techniques for Numerical Integration


Trapezoidal Rule
Assume that the values of a function f(x) are given at x₁, x₁+h, x₁+2h, …, x₁+nh and it is
required to find the integral of f(x) between x₁ and x₁+nh. The simplest technique would
be to fit straight lines through the points f(x₁), f(x₁+h), … and to determine the area under this
approximating function, as shown in Fig 7.1.

Fig. 7.1 Illustrating trapezoidal rule

For the first two points we can write:
∫[x₁, x₁+h] f(x) dx = (h/2)(f₁ + f₂)

This is called first-degree Newton-Cotes formula.

From the above figure it is evident that the result of integration between x₁ and x₁ + nh is nothing but
the sum of the areas of some trapezoids. In equation form this can be written as:

∫[x₁, x₁+nh] f(x) dx = Σ (i = 1 to n) h(fᵢ + fᵢ₊₁)/2

The above integration formula is known as Composite Trapezoidal rule.


The composite trapezoidal rule can explicitly be written as:

∫[x₁, x₁+nh] f(x) dx = (h/2)(f₁ + 2f₂ + 2f₃ + ⋯ + 2fₙ + fₙ₊₁)

Simpson's 1/3 Rule


This is based on approximating the function f(x) by fitting quadratics through sets of three points. For
only three points it can be written as:

∫[x₁, x₁+2h] f(x) dx = (h/3)(f₁ + 4f₂ + f₃)

This is called second-degree Newton-Cotes formula.


It is evident that the result of integration between x₁ and x₁ + nh can be written as

∫[x₁, x₁+nh] f(x) dx = Σ (i = 1, 3, 5, …, n−1) (h/3)(fᵢ + 4fᵢ₊₁ + fᵢ₊₂)

= (h/3)(f₁ + 4f₂ + 2f₃ + 4f₄ + 2f₅ + 4f₆ + ⋯ + 4fₙ + fₙ₊₁)
In using the above formula it is implied that f is known at an odd number of points (n+1 is odd,
where n is the no. of subintervals).
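The weight pattern above (1, 4, 2, 4, …, 4, 1) is easy to mistype, so a sketch helps. A Python rendering (the lab uses MATLAB; here the rule is checked against f(x) = eˣ, whose integral is known exactly, and all names are my own):

```python
import math

def simpson13(fv, h):
    # Composite Simpson's 1/3 rule; fv holds f1..f(n+1), n must be even
    n = len(fv) - 1
    assert n % 2 == 0, "needs an even number of subintervals"
    return h / 3 * (fv[0] + fv[-1]
                    + 4 * sum(fv[1:-1:2])    # f2, f4, ... carry weight 4
                    + 2 * sum(fv[2:-1:2]))   # f3, f5, ... carry weight 2

xs = [1.6 + 0.2 * i for i in range(11)]      # x = 1.6 .. 3.6, n = 10
approx = simpson13([math.exp(x) for x in xs], 0.2)
exact = math.exp(3.6) - math.exp(1.6)
```

With h = 0.2 the Simpson estimate agrees with the exact integral to roughly four decimal places, far better than the trapezoidal rule at the same spacing.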

Simpson’s 3/8 Rule


This is based on approximating the function f(x) by fitting cubic interpolating polynomial through
sets of four points. For only four points it can be written as:
∫[x₁, x₁+3h] f(x) dx = (3h/8)(f₁ + 3f₂ + 3f₃ + f₄)

This is called the third-degree Newton-Cotes formula. It is evident that the result of integration between x₁
and x₁ + nh can be written as

∫[x₁, x₁+nh] f(x) dx = Σ (i = 1, 4, 7, …, n−2) (3h/8)(fᵢ + 3fᵢ₊₁ + 3fᵢ₊₂ + fᵢ₊₃)

= (3h/8)(f₁ + 3f₂ + 3f₃ + 2f₄ + 3f₅ + 3f₆ + 2f₇ + ⋯ + 2fₙ₋₂ + 3fₙ₋₁ + 3fₙ + fₙ₊₁)

In using the above formula it is implied that f is known at (n + 1) points, where n is divisible by 3.
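The 3/8 weights can be sketched the same way (Python for illustration; in 0-based indexing the shared interior points, where the 1-based index is 4, 7, …, get weight 2 and the rest weight 3):

```python
import math

def simpson38(fv, h):
    # Composite Simpson's 3/8 rule; n (= len(fv) - 1) must be divisible by 3
    n = len(fv) - 1
    assert n % 3 == 0, "n must be divisible by 3"
    s = fv[0] + fv[-1]
    for j in range(1, n):                       # interior points
        s += 2 * fv[j] if j % 3 == 0 else 3 * fv[j]
    return 3 * h / 8 * s

xs = [1.6 + 0.2 * i for i in range(10)]         # x = 1.6 .. 3.4, n = 9
approx38 = simpson38([math.exp(x) for x in xs], 0.2)
exact38 = math.exp(3.4) - math.exp(1.6)
```

The accuracy is of the same O(h⁴) order as the 1/3 rule; the 3/8 variant exists mainly to handle point counts where n is a multiple of 3.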
An algorithm for integrating a tabulated function using composite trapezoidal rule:
Remarks: f₁, f₂, …, fₙ₊₁ are the tabulated values at x₁, x₁+h, …, x₁+nh (n + 1 points)

1. Read h
2. for i = 1 to n + 1: Read fᵢ, endfor
3. sum ← (f₁ + fₙ₊₁) / 2
4. for j = 2 to n do
5. sum ← sum + fⱼ
endfor
6. integral ← h · sum
7. write integral; stop
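The steps above amount to one weighted sum. A Python sketch using the Table 7.1 values, which are samples of eˣ (the lab uses MATLAB; names are my own):

```python
import math

def trapz_table(fv, h):
    # Steps 3-6 above: half weight on the end values, full weight inside
    return h * ((fv[0] + fv[-1]) / 2 + sum(fv[1:-1]))

fv = [4.953, 6.050, 7.389, 9.025, 11.023, 13.468,
      16.445, 20.086, 24.533, 29.964, 36.598, 44.701]   # Table 7.1
area = trapz_table(fv, 0.2)
true_value = math.exp(3.8) - math.exp(1.6)               # f(x) = e^x
```

Because eˣ is convex, the straight-line segments lie above the curve and the trapezoidal estimate slightly overshoots the true value.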

Exercise 1. Integrate the function tabulated in Table 7.1 over the interval from x=1.6 to x=3.8
using composite trapezoidal rule with (a) h=0.2, (b) h=0.4 and (c) h=0.6
Table 7.1
x        f(x)            x        f(x)
1.6 4.953 2.8 16.445
1.8 6.050 3.0 20.086
2.0 7.389 3.2 24.533
2.2 9.025 3.4 29.964
2.4 11.023 3.6 36.598
2.6 13.468 3.8 44.701

The data in Table 7.1 are for f (x) = ex . Find the true value of the integral and compare this
with those found in (a), (b) and (c).
Exercise 2.
(a) Integrate the function tabulated in Table 7.1 over the interval from x=1.6 to x=3.6
using Simpson’s composite 1/3 rule.
(b) Integrate the function tabulated in Table 7.1 over the interval from x=1.6 to x=3.4
using Simpson’s composite 3/8 rule.

An algorithm for integrating a known function using composite trapezoidal rule:


If f(x) is given as a closed-form function such as f(x) = e⁻ˣ cos x and we are asked to integrate it
from x₁ to x₂, we should first decide what h should be. Depending on the value of h we will have
to evaluate f(x) inside the program at x = x₁ + ih for i = 0, 1, 2, …, n, where n = (x₂ − x₁) / h.
1. h ← (x₂ − x₁) / n
2. x ← x₁
3. sum ← f(x)
4. for i = 2 to n do
5. x←x+h
6. sum ← sum + 2 * f(x)
endfor
7. x ← x₂
8. sum ← sum + f(x)
9. integral ← (h / 2) * sum
10. write integral
stop
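A Python rendering of this algorithm, tried on Exercise 3(a)(i), whose exact value is arctan(1) − arctan(−1) = π/2 (the lab uses MATLAB; names are my own):

```python
import math

def trapz_func(f, x1, x2, n):
    # Steps 1-9 above: end values counted once, interior values twice
    h = (x2 - x1) / n
    s = f(x1) + f(x2)
    for i in range(1, n):
        s += 2 * f(x1 + i * h)
    return h / 2 * s

# Integral of 1/(1 + x^2) from -1 to 1 with n = 12 subintervals
val = trapz_func(lambda x: 1 / (1 + x * x), -1.0, 1.0, 12)
```

With n = 12 the result differs from π/2 by a few parts in a thousand, consistent with the O(h²) error of the rule.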

Exercise 3.
(a) Find (approximately) each integral given below using the composite trapezoidal
rule with n = 12 .
(i) ∫[−1, 1] (1 + x²)⁻¹ dx        (ii) ∫[0, 4] x² e⁻ˣ dx
(b) Find (approximately) each integral given above using the Simpson’s composite 1/3
and 3/8 rules with n = 12 .

Adaptive Integration
When f(x) is a known function we can choose the value of h arbitrarily. The problem is that we
do not know a priori what value to choose for h to attain a desired accuracy (for an arbitrary h,
sharp peaks of the function might be missed, for example). To overcome this problem, we can start
with two subintervals, h = h₁ = (x₂ − x₁) / 2, and apply either the trapezoidal or Simpson's
1/3 rule. Then we let h₂ = h₁ / 2, apply the formula again, now with four subintervals, and
compare the results. If the new value is sufficiently close, the process is terminated. If the second
result is not close enough to the first, h is halved again and the procedure is repeated. This is
continued until the last result is close enough to its predecessor. This form of numerical
integration is termed adaptive integration.
The no. of computations can be reduced because when h is halved, all of the old points at which
the function was evaluated appear in the new computation and thus repeating evaluation can be
avoided. This is illustrated below.

Fig. Interval halving for k = 1, 2, 3, 4: when h is halved, the old points (×) are reused and only
the new midpoints (o) need to be evaluated.

An algorithm for adaptive integration of a known function using trapezoidal rule:
1. Read x₁, x₂, e // e is the allowed relative error
2. h ← x₂ − x₁
3. S ← (f(x₁) + f(x₂)) / 2
4. I₁ ← h * S // Initial integral estimate
5. i ← 1
Repeat
6. x ← x₁ + h / 2 // First new midpoint
7. for j = 1 to i do
8. S ← S + f(x)
9. x ← x + h // Move to next midpoint
endfor
10. i ← 2i
11. h ← h / 2
12. I₀ ← I₁ // Save previous integral
13. I₁ ← h * S // New integral with refined h
14. until |I₁ − I₀| ≤ e * |I₁|
15. Write I₁, h, i
Stop
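A Python sketch of the adaptive loop, tried on the integrand of Exercise 4 (the lab uses MATLAB; a maximum halving count is added as a safeguard, and all names are my own):

```python
import math

def adaptive_trapz(f, x1, x2, e, max_halvings=25):
    # Trapezoidal rule with repeated interval halving.  Old function values
    # stay in s; each pass adds only the new midpoints, as in the figure.
    h = x2 - x1
    s = (f(x1) + f(x2)) / 2
    i1 = h * s
    i = 1
    for _ in range(max_halvings):
        x = x1 + h / 2                 # first new midpoint
        for _ in range(i):
            s += f(x)
            x += h
        i *= 2
        h /= 2
        i0, i1 = i1, h * s             # refined estimate
        if abs(i1 - i0) <= e * abs(i1):
            break
    return i1

# Integral of x * exp(-2x^2) from 0 to 2; true value is (1 - e^-8)/4
result = adaptive_trapz(lambda x: x * math.exp(-2 * x * x), 0.0, 2.0, 1e-6)
```

Each halving reuses every previous function evaluation, so the total work is only about one evaluation per final subinterval.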

Exercise 4. Evaluate the integral of x·e^(−2x²) between x = 0 and x = 2, using a tolerance value
sufficiently small as to get an answer within 0.1% of the true answer, 0.249916.
Exercise 5. Evaluate the integral of sin²(16x) between x = 0 and x = π/2. Why is the result
erroneous? How can this be solved? (The correct result is π/4.)

Revised by,
Md. Samrat
(April 2025)

Bangladesh University of Engineering & Technology
Department of Electrical & Electronic Engineering
EEE 212: Numerical Technique Laboratory

Experiment 8: Solutions to Non-linear Equations

Objectives:
 Apply iterative root-finding methods: bisection, false position, Newton-Raphson, and the
secant method.
 Examine convergence criteria and analyze iteration errors.

Introduction:
Bisection method:

The Bisection method is one of the simplest procedures for finding a root of a function in a given
interval.
The procedure is straightforward. The approximate location of the root is first determined by finding
two values that bracket the root (a root is bracketed or enclosed if the function changes sign at the
endpoints). Based on these, a third value is calculated which is closer to the root than the original two
values. A check is made to see whether the new value is a root. Otherwise, a new pair of brackets is
generated from the three values, and the procedure is repeated.

Consider a function f(x) and let there be two values of x, x_low and x_up (x_up > x_low), bracketing a
root of f(x).

Steps:

1. The first step is to use the brackets x_low and x_up to generate a third value that is closer to the root.
This new point is calculated as the mid-point between x_low and x_up, namely x_mid = (x_low + x_up)/2. The
method therefore gets its name from this bisecting of two values. It is also known as the interval
halving method.
2. Test whether x_mid is a root of f(x) by evaluating the function at x_mid.
3. If x_mid is not a root,
a. If f(x_low) and f(x_mid) have opposite signs, i.e. f(x_low) · f(x_mid) < 0, the root is in the left half of the
interval.
b. If f(x_low) and f(x_mid) have the same sign, i.e. f(x_low) · f(x_mid) > 0, the root is in the right half of the interval.
4. Continue subdividing until the interval width has been reduced to a size ≤ ε, where ε is the selected x
tolerance.

Algorithm: Bisection Method:


1. Input xLower, xUpper, xTol
2. yLower ← f(xLower)
3. xMid ← (xLower + xUpper) / 2.0
4. yMid ← f(xMid)
5. iters ← 0
6. While ( (xUpper - xLower) / 2.0 > xTol )
7. iters ← iters + 1
8. If (yLower * yMid > 0) Then
9. xLower ← xMid
10. Else
11. xUpper ← xMid
End If
12. xMid ← (xLower + xUpper) / 2.0
13. yMid ← f(xMid)
EndofWhile
14. Output xMid, yMid, iters // xMid is the root approximation
Stop
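A direct Python transcription of the algorithm (the lab uses MATLAB), tried on the f(x) = x⁵ + x + 1 of Exercise 1:

```python
def bisect(f, x_low, x_up, x_tol):
    # Keep the sign change bracketed and halve the interval each pass
    y_low = f(x_low)
    x_mid = (x_low + x_up) / 2
    y_mid = f(x_mid)
    iters = 0
    while (x_up - x_low) / 2 > x_tol:
        iters += 1
        if y_low * y_mid > 0:
            x_low, y_low = x_mid, y_mid    # root is in the right half
        else:
            x_up = x_mid                   # root is in the left half
        x_mid = (x_low + x_up) / 2
        y_mid = f(x_mid)
    return x_mid, iters

root, iters = bisect(lambda x: x**5 + x + 1, -1.0, 0.0, 1e-4)
```

The real root lies near −0.7549, and with the starting interval [−1, 0] and ε = 10⁻⁴ the loop count matches the n predicted by the note below.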

Exercise 1. Find the real root of the equation f(x) = x⁵ + x + 1 using the Bisection Method, with
x_low = −1, x_up = 0 and ε = selected x tolerance = 10⁻⁴.

Note: For a given x tolerance ε, we can calculate the number of iterations directly. The number
of divisions of the original interval is the smallest value of n that satisfies (x_up − x_low)/2ⁿ < ε,
i.e. 2ⁿ > (x_up − x_low)/ε.

Thus n > log₂((x_up − x_low)/ε)

In our previous example, x_low = −1, x_up = 0 and ε = 10⁻⁴, so we have n = 14.

False-Position Method (Regula Falsi)
A shortcoming of the bisection method is that, in dividing the interval from x_low to x_up into equal
halves, no account is taken of the magnitudes of f(x_low) and f(x_up). For example, if f(x_low) is much
closer to zero than f(x_up), it is likely that the root is closer to x_low than to x_up. An alternative method
that exploits this graphical insight is to join f(x_low) and f(x_up) by a straight line. The intersection of
this line with the x axis represents an improved estimate of the root. The fact that the replacement of
the curve by a straight line gives a false position of the root is the origin of the name, method of false
position, or in Latin, Regula Falsi. It is also called the Linear Interpolation Method.

Using similar triangles, the intersection of the straight line with the x axis can be estimated as

f(x_low) / (x − x_low) = f(x_up) / (x − x_up)

That is, x = x_up − f(x_up)(x_low − x_up) / (f(x_low) − f(x_up))

This is the False Position formula. The value of x then replaces whichever of the two initial guesses,
x_low or x_up, yields a function value with the same sign as f(x). In this way, the values of x_low and x_up
always bracket the true root. The process is repeated until the root is estimated adequately.
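Exercise 2 below leaves the algorithm to you; purely as a sketch of its general shape (Python, with my own names and a step-size stopping test), one possible arrangement is:

```python
def false_position(f, x_low, x_up, x_tol, max_iter=100):
    # Replace whichever endpoint gives a value with the same sign as f(x)
    x_old = x_low
    for _ in range(max_iter):
        y_low, y_up = f(x_low), f(x_up)
        x = x_up - y_up * (x_low - x_up) / (y_low - y_up)
        if abs(x - x_old) < x_tol:
            return x
        if f(x) * y_low > 0:
            x_low = x      # f(x) has the sign of f(x_low): move the low end
        else:
            x_up = x       # otherwise move the upper end
        x_old = x
    return x

fp_root = false_position(lambda x: x**5 + x + 1, -1.0, 0.0, 1e-6)
```

Note that, unlike bisection, one endpoint may stay fixed for many iterations, so the bracket width is not a reliable stopping test; successive estimates are compared instead.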

Exercise 2. Find the root of the equation f(x) = x⁵ + x + 1 using the False Position Method, with
x_low = −1, x_up = 0 and ε = selected x tolerance = 10⁻⁴. (Develop the algorithm by yourself; it is very
similar to the Bisection Method.)

Newton Raphson Method:


If f(x), f′(x) and f″(x) are continuous near a root x, then this extra information regarding the nature
of f(x) can be used to develop algorithms that produce sequences {x_k} converging to x faster
than either the bisection or the false position method. The Newton-Raphson (or simply Newton's) method
is one of the most useful and best-known algorithms; it relies on the continuity of f′(x) and f″(x).

The attempt is to locate the root by repeatedly approximating f(x) with a linear function at each step. If
the initial guess at the root is x_k, a tangent can be extended from the point (x_k, f(x_k)). The point
where this tangent crosses the x axis usually represents an improved estimate of the root.

Fig. The tangent at (x_k, f(x_k)) with slope f′(x_k) crosses the x axis at x_{k+1}.

The Newton-Raphson method can be derived on the basis of this geometrical interpretation. As in the
figure, the first derivative at x_k is equivalent to the slope:

f′(x_k) = (f(x_k) − 0) / (x_k − x_{k+1})

which can be rearranged to yield

x_{k+1} = x_k − f(x_k) / f′(x_k)

which is called the Newton-Raphson formula.

So the Newton-Raphson algorithm actually consists of the following steps:
1. Start with an initial guess x₀ and an x-tolerance ε.
2. Calculate x_{k+1} = x_k − f(x_k) / f′(x_k),  k = 0, 1, 2, …

Algorithm - Newton’s Method


1. Input x0, xTol
2. iters ← 1
3. dx ← -f(x0) / fDeriv(x0)
4. root ← x0 + dx
5. While ( Abs(dx) > xTol )
6. dx ← -f(root) / fDeriv(root)
7. root ← root + dx
8. iters ← iters + 1
End of while
9. Output root, iters
Stop
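A Python sketch of the algorithm (the lab uses MATLAB). It is demonstrated on f(x) = x² − 2 with f′(x) = 2x, a test function of my own choosing whose exact root is √2:

```python
import math

def newton(f, f_deriv, x0, x_tol, max_iter=50):
    # dx = -f/f' is the full Newton step; stop when it is small enough
    dx = -f(x0) / f_deriv(x0)
    root = x0 + dx
    iters = 1
    while abs(dx) > x_tol and iters < max_iter:
        dx = -f(root) / f_deriv(root)
        root += dx
        iters += 1
    return root, iters

nr_root, nr_iters = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 1e-10)
```

The quadratic convergence is visible in the iteration count: starting from x₀ = 1, only about five steps are needed to reach ten-digit accuracy.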
Exercise 3. Use the Newton-Raphson method to estimate the root of f(x) = e⁻ˣ − 1, employing an
initial guess of x₀ = 0. The tolerance is ε = 10⁻⁸.

The Secant Method:


The Newton-Raphson algorithm requires two function evaluations per iteration, f(x_k) and f′(x_k).
Historically, the calculation of a derivative could involve considerable effort. Moreover, many
functions have non-elementary forms (integrals, sums, etc.), and it is desirable to have a method for
finding a root that does not depend on the computation of a derivative. The secant method does not
need a formula for the derivative, and it can be coded so that only one new function evaluation is
required per iteration.
The formula for the secant method is the same one that was used in the Regula Falsi method, except
that the logical decisions regarding how to define each succeeding term are different.

In the secant method, the derivative is approximated by a backward finite divided difference, as
in the figure:

f′(x_k) ≅ (f(x_{k−1}) − f(x_k)) / (x_{k−1} − x_k)

Using the Newton-Raphson method,

x_{k+1} = x_k − f(x_k) / f′(x_k)

Substituting f′(x_k),

x_{k+1} = x_k − f(x_k)(x_{k−1} − x_k) / (f(x_{k−1}) − f(x_k))

Notice that the approach requires two initial estimates of x, x_k and x_{k−1}.

Fig. The secant line through (x_{k−1}, f(x_{k−1})) and (x_k, f(x_k)) crosses the x axis at the new estimate.

Algorithm - Secant Method


1. Input xₖ, xₖ₋₁, xTol, maxIter
2. iters ← 1
3. yₖ ← f(xₖ)
4. yₖ₋₁ ← f(xₖ₋₁)
5. root ← (xₖ₋₁·yₖ − xₖ·yₖ₋₁) / (yₖ − yₖ₋₁)
6. yₖ₊₁ ← f(root)
7. While |root − xₖ| > xTol and iters < maxIter do
8. xₖ₋₁ ← xₖ
9. yₖ₋₁ ← yₖ
10. xₖ ← root
11. yₖ ← yₖ₊₁
12. root ← (xₖ₋₁·yₖ − xₖ·yₖ₋₁) / (yₖ − yₖ₋₁)
13. yₖ₊₁ ← f(root)

14. iters ← iters + 1
EndofWhile
15. Output: root, yₖ₊₁, iters
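A Python transcription of the secant algorithm, tried on the f(x) = 3x + sin(x) − eˣ of Exercise 4 with starting values 1 and 0 (the lab uses MATLAB; the known root is near 0.36042, and names are my own):

```python
import math

def secant(f, xk, xk_1, x_tol, max_iter=50):
    # Steps 5 and 12: intersection of the secant line with the x axis
    yk, yk_1 = f(xk), f(xk_1)
    root = (xk_1 * yk - xk * yk_1) / (yk - yk_1)
    iters = 1
    while abs(root - xk) > x_tol and iters < max_iter:
        xk_1, yk_1 = xk, yk          # shift: old xk becomes xk-1
        xk, yk = root, f(root)       # the one new function evaluation
        root = (xk_1 * yk - xk * yk_1) / (yk - yk_1)
        iters += 1
    return root, iters

sec_root, sec_iters = secant(lambda x: 3 * x + math.sin(x) - math.exp(x),
                             1.0, 0.0, 1e-7)
```

Note that only one new function evaluation is made per pass of the loop, as promised in the introduction; the previous values are shifted rather than recomputed.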

Exercise 4. Find the root of the equation f(x) = 3x + sin(x) − eˣ; the starting values are 0 and 1. The
tolerance limit is 0.0000001.

Revised by,
Md. Samrat
(April 2025)
