
Indian Institute of Technology Roorkee

MAB-103: Numerical Methods

Unit-III Roots of non-linear equations Session-2025-26


In many scientific and engineering problems, we encounter equations of the form

f (x) = 0,

where f(x) is a nonlinear function. Unlike linear equations, which can be solved exactly using
algebraic methods, nonlinear equations often do not have closed-form solutions. For instance,
equations like x³ − x − 2 = 0 or transcendental equations such as sin(x) − x/2 = 0 cannot be solved
analytically in most cases.
Despite the lack of exact solutions, finding the roots of nonlinear equations is crucial because
these roots represent important physical quantities, such as equilibrium points in mechanical
systems, concentrations in chemical reactions, and eigenvalues in structural analysis.

Why Numerical Methods?


Given the complexity of nonlinear equations, we rely on numerical methods to approximate their
roots. Numerical methods provide systematic procedures for finding approximate solutions to
equations that cannot be solved exactly. The importance of these methods lies in their ability to:

• Handle a wide range of nonlinear equations that arise in real-world applications.

• Provide approximate solutions with controllable accuracy.

• Offer practical tools that can be implemented on computers, making them essential for solving
large-scale problems in science and engineering.

Examples of Applications
• Physics: Determining the energy levels of quantum systems involves solving nonlinear
eigenvalue problems.

• Engineering: In structural engineering, the buckling load of a column is found by solving
nonlinear equilibrium equations.

• Economics: Market equilibrium models often require solving nonlinear equations to find
prices or interest rates.

In conclusion, numerical methods for finding roots of nonlinear equations are indispensable
tools in both theoretical studies and practical applications. They enable us to solve problems that
are otherwise intractable, providing insights and solutions that drive advancements in various fields
of science and engineering.

0.1 Fixed-Point Iteration
0.1.1 Introduction
• The fixed-point method is an iterative technique for finding approximate solutions to nonlinear
equations of the form f (x) = 0. The method is based on rewriting the equation in the
form x = g(x), where g(x) is a function derived from f (x). The solution is then found by
iteratively applying g(x) starting from an initial guess.

• Given a nonlinear equation f (x) = 0, we rewrite it as:

x = g(x),

where g(x) is chosen such that the fixed point of g(x), denoted by x∗ , satisfies x∗ = g(x∗ ).
This x∗ is also a solution to f (x) = 0.

• The fixed-point iteration is defined by:

xn+1 = g(xn ),

where x_n is the n-th approximation to the root. The process is repeated until the difference
between successive approximations is smaller than a predetermined tolerance, i.e., |x_{n+1} −
x_n| < ε, for some small ε > 0.

0.1.2 Convergence Criteria


For the fixed-point iteration to converge to the true root x∗ , the function g(x) must satisfy the
following conditions:

• Existence of a fixed point: There exists a point x∗ such that x∗ = g(x∗ ).

• Contraction Mapping: There exists a constant 0 < α < 1 such that for all x and y in the
interval [a, b],
|g(x) − g(y)| ≤ α|x − y|.
This ensures that the sequence {xn } will converge to x∗ .

To ensure these convergence criteria hold, we make three assumptions on g(x):

• a ≤ g(x) ≤ b for all a ≤ x ≤ b.

• The function g(x) is continuous.

• The iteration function g(x) is differentiable on I = [a, b]. Further, there exists a constant
0 < K < 1 such that
|g'(x)| ≤ K, ∀x ∈ I.

0.1.3 Example: Solving a Nonlinear Equation
Consider the nonlinear equation:

x² − 2 = 0.

We can rewrite this equation in the form x = g(x). One possible (trivial) choice for g(x) is the constant function

g(x) = √2.

Let us perform the fixed-point iteration starting from an initial guess x0 = 1.5.
Iteration 1:
x1 = g(x0) = √2 ≈ 1.4142.
Iteration 2:
x2 = g(x1) = √2 ≈ 1.4142.
The process continues until the values of x_n stabilize; since g is constant here, the iterates land on the fixed point x* = √2 immediately.

0.1.4 Example: Convergence Analysis


Consider the equation f(x) = sin(x) + x² − 1 = 0 on I = [0, 1] as an example. There are three possible choices
for the iteration function, namely,

• g1(x) = sin⁻¹(1 − x²),

• g2(x) = −√(1 − sin(x)),

• g3(x) = √(1 − sin(x)).

Here |g1'(x)| = 2/√(2 − x²) > 1 for x ∈ I, so g1 is not a contraction. If we take g2(x), then
g2(x) < 0 on I, so assumption 1 (that g maps [a, b] into itself) is violated and g2 is therefore
not suitable for the iteration process. It is evident that |g3'(x)| < 1 on I, so g3 is a suitable
iteration function.
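As a quick illustration, here is a minimal Python sketch of the iteration x_{n+1} = g(x_n), applied to the suitable choice g3 above; the helper name fixed_point and the tolerance and iteration-limit defaults are illustrative assumptions, not part of the notes.

import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    # Iterate x_{n+1} = g(x_n) until successive iterates agree within tol.
    x = x0
    for n in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

g3 = lambda x: math.sqrt(1 - math.sin(x))   # contraction on [0, 1]

root, iters = fixed_point(g3, x0=0.5)
print(root, iters)   # root of sin(x) + x^2 - 1 = 0, approximately 0.6367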

0.1.5 Stopping Criteria


The iteration process is typically stopped when the following criterion is met:

|x_{n+1} − x_n| < ε,

where ε is a small positive number representing the desired accuracy.

0.1.6 Advantages and Limitations


Advantages

• Simple to implement.

• Useful when the function g(x) is easily derived from f (x).

Limitations

• Convergence is not guaranteed for all choices of g(x).

• The method can be slow to converge, especially if |g'(x)| is close to 1.

0.1.7 Conclusion
The fixed-point method is a fundamental iterative technique for solving nonlinear equations. While
it is straightforward and useful in many scenarios, careful consideration must be given to the choice
of g(x) and the convergence criteria to ensure successful application.

0.2 The Bisection Method


0.2.1 Introduction
• The Bisection Method is a simple and robust numerical technique for finding roots of
continuous nonlinear equations of the form f (x) = 0. It is based on the Intermediate Value
Theorem and works by repeatedly narrowing down an interval that contains the root.

• The Bisection Method requires a continuous function f (x) and an interval [a, b] where the
function changes sign, i.e., f (a) · f (b) < 0. The method works by iteratively reducing the
interval until the root is approximated to within a desired tolerance.

0.2.2 Algorithm
1. Initial Interval: Choose an interval [a0 , b0 ] such that f (a0 ) · f (b0 ) < 0.
2. Midpoint Calculation: Compute the midpoint of the interval:

c_n = (a_n + b_n)/2.

3. Evaluate the Function: Calculate f(c_n).

• If f (cn ) = 0, then cn is the root.

• If f (an ) · f (cn ) < 0, set bn+1 = cn and an+1 = an .

• If f (bn ) · f (cn ) < 0, set an+1 = cn and bn+1 = bn .

4. Stopping Criterion: Repeat steps 2 and 3 until the width of the interval [a_n, b_n] is smaller than
a given tolerance ε, i.e., |b_n − a_n| < ε.

0.2.3 Pseudocode
# Runnable version of the pseudocode; assumes f is continuous and f(a) * f(b) < 0.
def bisection(f, a, b, tolerance):
    while (b - a) / 2 > tolerance:
        c = (a + b) / 2
        if f(c) == 0:
            return c          # hit the root exactly
        elif f(a) * f(c) < 0:
            b = c             # root lies in [a, c]
        else:
            a = c             # root lies in [c, b]
    return (a + b) / 2
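For instance, a short run on the equation x³ − 4x + 1 = 0 over [1, 2], which is worked by hand in the example below (the call uses the bisection function defined above; the printed value is approximate):

f = lambda x: x**3 - 4*x + 1
print(bisection(f, 1, 2, tolerance=1e-6))   # approximately 1.860806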

0.2.4 Convergence Analysis
The Bisection Method converges linearly to the root. The number of iterations n required to achieve
a given tolerance ε can be estimated by:

n ≥ log((b_0 − a_0)/ε) / log(2).

This means that with each iteration the interval size is halved, ensuring that the error bound
decreases by a factor of two per step.

0.2.5 Example
Let us apply the Bisection Method to find the root of the equation

f(x) = x³ − 4x + 1 = 0

in the interval [1, 2].
Step 1: Initial Interval
Check the signs:

f(1) = −2, f(2) = 1.

Since f(1) · f(2) < 0, there is a root in [1, 2].
Step 2: First Iteration
Compute the midpoint:

c1 = (1 + 2)/2 = 1.5.

Evaluate the function at the midpoint:

f(1.5) = 1.5³ − 4 · 1.5 + 1 = −1.625.

Since f(1.5) · f(2) < 0, update the interval to [1.5, 2].
Step 3: Second Iteration
Compute the new midpoint:

c2 = (1.5 + 2)/2 = 1.75.

Evaluate the function:

f(1.75) = 1.75³ − 4 · 1.75 + 1 = −0.640625.

Since f(1.75) · f(2) < 0, update the interval to [1.75, 2].
Continue Iterating until the interval width is less than a chosen tolerance, say ε = 0.001.

0.2.6 Advantages and Limitations


Advantages
• Guaranteed convergence if f (x) is continuous on [a, b] and f (a) · f (b) < 0.
• Simple and easy to implement.
Limitations
• Slow convergence rate (linear).
• Only applicable to continuous functions where a sign change occurs.
• Requires a good initial interval where the function changes sign.

0.2.7 Conclusion
The Bisection Method is a reliable and straightforward technique for finding roots of nonlinear
equations. While it has some limitations, its guaranteed convergence makes it a valuable tool in
numerical analysis, particularly when other methods may fail to provide a root.

0.3 Secant Method


0.3.1 Introduction
The Secant Method is used for finding roots of a function f (x) = 0. It is an improvement over the
Bisection method in terms of speed. Unlike Newton’s method, it does not require the evaluation of
the derivative of the function.

0.3.2 Methodology
Start with two initial guesses x0 and x1. The formula for the iterative step is:

x_{n+1} = x_n − f(x_n) · (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1})).

Continue iterating until |x_{n+1} − x_n| is less than the desired tolerance.

Example
We want to find a root of the function f(x) = x² − 2 using the secant method.

• Initial Guesses
Let x0 = 1 and x1 = 2.

• First Iteration

x2 = x1 − f(x1) · (x1 − x0) / (f(x1) − f(x0))

Substituting the values:

f(x0) = 1² − 2 = −1, f(x1) = 2² − 2 = 2

x2 = 2 − 2 · (2 − 1) / (2 − (−1)) = 2 − 2/3 = 4/3 ≈ 1.3333

• Second Iteration
Now, using x1 = 2 and x2 = 4/3:

f(x2) = (4/3)² − 2 = 16/9 − 18/9 = −2/9

x3 = x2 − f(x2) · (x2 − x1) / (f(x2) − f(x1))

Substituting the values:

x3 = 4/3 − (−2/9) · (4/3 − 2) / (−2/9 − 2) = 7/5 = 1.4

• Third Iteration
Continuing the iterations:

f(x3) = 1.4² − 2 = −0.04

x4 = x3 − f(x3) · (x3 − x2) / (f(x3) − f(x2)) ≈ 1.4146

After three iterations we find x4 ≈ 1.4146, already a good approximation of √2 ≈ 1.4142.
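A minimal Python sketch of the secant iteration, applied to the example above; the helper name secant and the defaults are illustrative assumptions.

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    # Secant iteration: like Newton's method, but the derivative is
    # replaced by the slope of the line through the last two iterates.
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("secant method did not converge")

print(secant(lambda x: x**2 - 2, 1.0, 2.0))   # approximately 1.41421356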

0.3.3 Algorithm
1. Choose initial guesses x0 and x1 .
2. Compute xn+1 using the formula above.
3. Check for convergence.
4. If the convergence criterion is not met, set x_{n−1} = x_n and x_n = x_{n+1}, then repeat the process.

0.3.4 Advantages
• Faster than the Bisection method.
• Does not require the calculation of derivatives.

0.3.5 Disadvantages
• May not converge for poor initial guesses.
• Slower than Newton’s method if derivatives are easy to calculate.

0.4 Regula Falsi Method


0.4.1 Introduction
The Regula Falsi Method, also known as the False Position Method, is a bracketing method for
finding the roots of a function. The method combines features of the Bisection method and the
Secant method.

0.4.2 Methodology
Start with two initial points x0 and x1 such that f(x0) · f(x1) < 0. Compute the point where the
secant line crosses the x-axis:

x2 = x1 − f(x1) · (x1 − x0) / (f(x1) − f(x0))

Replace the endpoint whose function value has the same sign as f(x2), ensuring that the root remains
bracketed.

0.4.3 Algorithm
1. Choose initial guesses x0 and x1 .

2. Compute x2 using the formula above.

3. Check the sign of f (x2 ).

4. Update the interval by replacing x0 or x1 with x2 .

5. Repeat until the desired tolerance is achieved.

Example
We want to find a root of the function f(x) = x³ − 4x + 1 using the Regula Falsi method.

• Initial Guesses
Let a0 = 1 and b0 = 2, so that the root is bracketed.

• First Iteration
Calculate the function values at the endpoints:

f(a0) = f(1) = 1³ − 4 · 1 + 1 = −2
f(b0) = f(2) = 2³ − 4 · 2 + 1 = 8 − 8 + 1 = 1

Since f(a0) · f(b0) < 0, the next approximation c1 is given by:

c1 = b0 − f(b0) · (b0 − a0) / (f(b0) − f(a0))

Substituting the values:

c1 = 2 − 1 · (2 − 1) / (1 − (−2)) = 2 − 1/3 = 5/3 ≈ 1.6667

Since f(c1) ≈ −1.0370 has the same sign as f(a0), update the interval:

a1 = c1 ≈ 1.6667, b1 = 2

• Second Iteration
Calculate the new c2:

f(a1) ≈ −1.0370, f(b1) = 1

c2 = b1 − f(b1) · (b1 − a1) / (f(b1) − f(a1))

Substituting the values:

c2 = 2 − 1 · (2 − 1.6667) / (1 − (−1.0370)) ≈ 1.8364

Since f(c2) ≈ −0.1530 has the same sign as f(a1), update the interval:

a2 = c2 ≈ 1.8364, b2 = 2

• Subsequent Iterations
Continue in a similar manner until the desired accuracy is achieved; the iterates approach the root x* ≈ 1.8608.
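A minimal Python sketch of the method, applied to the example above; the helper name regula_falsi and the stopping rule (successive approximations agreeing within tol) are illustrative assumptions.

def regula_falsi(f, a, b, tol=1e-8, max_iter=100):
    # False position: interpolate linearly, but keep the root bracketed.
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c_old = c
        c = b - fb * (b - a) / (fb - fa)   # x-intercept of the secant line
        fc = f(c)
        if fc == 0 or abs(c - c_old) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc                  # root lies in [a, c]
        else:
            a, fa = c, fc                  # root lies in [c, b]
    return c

print(regula_falsi(lambda x: x**3 - 4*x + 1, 1.0, 2.0))   # approximately 1.8608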

0.4.4 Advantages
• Guarantees convergence if the initial interval is chosen correctly.
• Does not require the calculation of derivatives.

0.4.5 Disadvantages
• Can be slower than other methods like Newton’s method.
• The convergence may be slow if the function is nearly linear over the interval.

0.6 Newton-Raphson Method


0.6.1 Introduction
The Newton-Raphson Method is one of the most efficient methods for finding roots of a nonlinear
equation f (x) = 0. It requires the function f (x) to be differentiable and uses the tangent line at
an initial guess to approximate the root.

0.6.2 Methodology
Given an initial guess x0, the next approximation x_{n+1} is found using the formula:

x_{n+1} = x_n − f(x_n)/f'(x_n)

This process is repeated until the difference between successive approximations is smaller than a
predefined tolerance.

0.6.3 Algorithm
1. Choose an initial guess x0.
2. Compute x_{n+1} using the Newton-Raphson formula:

x_{n+1} = x_n − f(x_n)/f'(x_n)

3. Check for convergence, i.e., whether |x_{n+1} − x_n| < tolerance.

4. If convergence is not achieved, set x_n = x_{n+1} and repeat the process.

Numerical Example: Newton-Raphson Method
We want to find the root of the equation f(x) = x³ − 2x − 5 = 0 using the Newton-Raphson
method.

Step 1: Define the Function and Its Derivative


The function is:
f(x) = x³ − 2x − 5
The derivative of the function is:
f'(x) = 3x² − 2

Step 2: Initial Guess


Let’s choose an initial guess x0 = 2.

Step 3: First Iteration


Compute f(x0) and f'(x0):
f(2) = 2³ − 2(2) − 5 = −1
f'(2) = 3(2)² − 2 = 10
Apply the Newton-Raphson formula:
x1 = 2 − (−1)/10 = 2.1

Step 4: Second Iteration


Compute f(x1) and f'(x1):

f(2.1) = (2.1)³ − 2(2.1) − 5 = 0.061

f'(2.1) = 3(2.1)² − 2 = 11.23


Apply the Newton-Raphson formula:
x2 = 2.1 − 0.061/11.23 ≈ 2.0946

Step 5: Third Iteration

Compute f(x2) and f'(x2):

f(2.0946) = (2.0946)³ − 2(2.0946) − 5 ≈ 0.0005

f'(2.0946) = 3(2.0946)² − 2 ≈ 11.1620
Apply the Newton-Raphson formula:
x3 = 2.0946 − 0.0005/11.1620 ≈ 2.09455

Result
The root of the equation f(x) = x³ − 2x − 5 = 0 is approximately x ≈ 2.09455.
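A minimal Python sketch of the iteration for this example; the helper name newton and the defaults are illustrative assumptions.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n).
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Newton's method did not converge")

f  = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
print(newton(f, df, 2.0))   # approximately 2.0945515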

0.6.4 Advantages
• Quadratic Convergence: The method converges very quickly when the initial guess is
close to the actual root.

• Simplicity: The formula is easy to implement and understand.

0.6.5 Disadvantages
• Requires Derivatives: The method requires the calculation of the derivative f 0 (x), which
may not always be easy.

• Sensitivity to Initial Guess: The method may fail to converge if the initial guess is not
close to the root or if the derivative is zero or close to zero at any step.

• Possible Divergence: If the function is not well-behaved, the method may diverge or
converge to the wrong root.

0.6.6 Geometric Interpretation


The Newton-Raphson Method can be understood geometrically as using the tangent line at a point
xn to approximate the root. The next approximation xn+1 is the point where the tangent line
crosses the x-axis.

Definition: Order of Convergence for Iterative Root-Finding Methods


The order of convergence of an iterative method quantifies how fast the sequence of approximations
x0, x1, x2, . . . converges to the exact root α.
It is defined by the behavior of the errors e_n = |x_n − α| as n → ∞.

Mathematical Definition
An iterative method is said to converge with order p ≥ 1 if there exists a positive constant C
(called the asymptotic error constant) such that:

lim_{n→∞} |e_{n+1}| / |e_n|^p = C

In simpler terms, for sufficiently large n, the error at the next iteration is approximately
proportional to the p-th power of the current error:

|e_{n+1}| ≈ C · |e_n|^p
The value of p tells you the rate of convergence:

Order (p) | Common Name | Interpretation
p = 1 | Linear | The number of correct decimal places increases by a constant amount each step. Error reduces as e_{n+1} ≈ C · e_n.
p = 2 | Quadratic | The number of correct decimal places roughly doubles each step. Error reduces as e_{n+1} ≈ C · e_n².
p = 3 | Cubic | The number of correct decimal places roughly triples each step. Error reduces as e_{n+1} ≈ C · e_n³.
1 < p < 2 | Superlinear | Faster than linear but not yet quadratic. Error reduces as e_{n+1} ≈ C · e_n^p. The Secant Method is a key example (p ≈ 1.618).

Important Note on p = 1 (Linear): For linear convergence, the constant C must be strictly
less than 1 (0 < C < 1). If C ≥ 1, the method may not converge or will converge too slowly to be
useful.

Why is Order Important?


The order p is the most important measure of an iterative method’s efficiency because:

• It predicts speed. A higher-order method will eventually converge much faster than a
lower-order one, requiring far fewer iterations to achieve the same precision.

• It helps choose the right method. For simple roots, Newton’s method (order 2) is
preferred over the Fixed-Point method (order 1). For multiple roots, the Modified Newton’s
method (order 2) is preferred over the standard Newton’s method (which drops to order 1).

Examples of Methods and Their Orders

Method | Iteration Formula | Typical Order (p) | Condition
Bisection | N/A (bracketing) | 1 (Linear) | C = 1/2
Fixed-Point Iteration | x_{n+1} = g(x_n) | 1 (Linear) | If g'(α) ≠ 0
Newton-Raphson | x_{n+1} = x_n − f(x_n)/f'(x_n) | 2 (Quadratic) | Simple root (f'(α) ≠ 0), good initial guess. For a multiple root of multiplicity m, it drops to p = 1.
Modified Newton's | x_{n+1} = x_n − m · f(x_n)/f'(x_n) | 2 (Quadratic) | For a root of known multiplicity m.
Regula Falsi | x_{n+1} = x_n − f(x_n) · (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1})), keeping the root bracketed | 1 (Linear) | Less expensive than Newton's (no derivative needed).
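The order p can also be estimated numerically from three consecutive errors via p ≈ log(e_{n+2}/e_{n+1}) / log(e_{n+1}/e_n). A small Python sketch (illustrative, not from the notes), using Newton's method on f(x) = x² − 2:

import math

# Generate Newton iterates for f(x) = x^2 - 2 (exact root alpha = sqrt(2)).
alpha = math.sqrt(2)
x = 3.0
errors = []
for _ in range(6):
    errors.append(abs(x - alpha))
    x = x - (x**2 - 2) / (2 * x)   # Newton step

# Estimate the order p from consecutive error ratios.
for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    p = math.log(e2 / e1) / math.log(e1 / e0)
    print(round(p, 3))   # the estimates approach 2 (quadratic convergence)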

Convergence of the Fixed-Point Iteration


Consider the nonlinear equation
f (x) = 0,
which we rewrite in fixed-point form
x = g(x).
The fixed-point iteration is then

xk+1 = g(xk ), k = 0, 1, 2, . . .

Theorem 1 (Convergence of Fixed-Point Iteration). Suppose g : [a, b] → [a, b] is continuous and
satisfies a Lipschitz condition

|g(x) − g(y)| ≤ L|x − y|, ∀x, y ∈ [a, b],

with a constant 0 ≤ L < 1. Then:

1. There exists a unique fixed point x∗ ∈ [a, b] such that g(x∗ ) = x∗ .

2. For any starting value x0 ∈ [a, b], the sequence {xk } defined by xk+1 = g(xk ) converges to x∗ .

3. The convergence is at least linear:

|xk+1 − x∗ | ≤ L|xk − x∗ |, k = 0, 1, 2, . . .

Moreover,

|x_k − x*| ≤ (L^k / (1 − L)) |x1 − x0|.
Proof. Since g maps [a, b] into itself and is a contraction with constant L < 1, existence and uniqueness of the fixed point x* follow from the Banach fixed-point theorem.
To show convergence, note that

|xk+1 − x∗ | = |g(xk ) − g(x∗ )| ≤ L|xk − x∗ |.

By induction,
|x_k − x*| ≤ L^k |x0 − x*|, k = 0, 1, 2, . . .
which tends to zero as k → ∞ since 0 ≤ L < 1. Thus x_k → x* with at least a linear rate of
convergence.

Remark 1. If g'(x*) = 0 and g is continuously differentiable, the convergence is superlinear
(quadratic if g'' exists).

Convergence of Newton’s Method


Newton’s method is an iterative scheme for solving the nonlinear equation

f (x) = 0,

given by

x_{k+1} = x_k − f(x_k)/f'(x_k), k = 0, 1, 2, . . .
Theorem 2 (Quadratic Convergence of Newton's Method). Let f ∈ C²([a, b]), and suppose
x* ∈ (a, b) satisfies f(x*) = 0 with f'(x*) ≠ 0. If the initial guess x0 is sufficiently close to x*,
then the sequence {x_k} defined by Newton's method converges to x*. Moreover, the convergence is
quadratic:

|x_{k+1} − x*| ≤ C |x_k − x*|²,

for some constant C > 0 when k is large enough.

Proof. By Taylor's theorem, expand f(x*) about x_k:

f(x*) = f(x_k) + f'(x_k)(x* − x_k) + (1/2) f''(ξ_k)(x* − x_k)²,

for some ξ_k between x_k and x*. Since f(x*) = 0, dividing by f'(x_k) reduces this to

0 = f(x_k)/f'(x_k) + (x* − x_k) + (f''(ξ_k) / (2f'(x_k))) (x_k − x*)².

Using the Newton-Raphson formula x_{k+1} = x_k − f(x_k)/f'(x_k) gives

x* − x_{k+1} = − (f''(ξ_k) / (2f'(x_k))) (x_k − x*)².

Since f'(x*) ≠ 0 and f' is continuous, f'(x_k) remains bounded away from 0 near x*. Also, f'' is
continuous and bounded on [a, b]. Hence there exists C > 0 such that

|x_{k+1} − x*| ≤ C |x_k − x*|²,

which shows quadratic convergence.

Convergence of the Regula Falsi Method


Theorem 3. Let f : [a, b] → R be continuous with f (a)f (b) < 0. Then the sequence {ck } generated
by the Regula Falsi method converges to a root x∗ ∈ [a, b]. The convergence is guaranteed and at
least linear.

Proof. At each step, define

c_k = (a_k f(b_k) − b_k f(a_k)) / (f(b_k) − f(a_k)).

By construction, f(a_k) f(b_k) < 0 for all k, so the root is always bracketed in [a_k, b_k].
The update rule ensures:

a_{k+1} = a_k, b_{k+1} = c_k if f(a_k) f(c_k) < 0,
a_{k+1} = c_k, b_{k+1} = b_k if f(c_k) f(b_k) < 0.

Hence {ak } is monotone nondecreasing and bounded above, while {bk } is monotone nonincreasing
and bounded below. Thus both converge: ak → α, bk → β with α ≤ β.
By continuity of f , we must have f (α)f (β) ≤ 0. Since the interval shrinks around a single
point, α = β = x∗ is a root.
Finally, because the method relies on linear interpolation, the improvement in the approximation
satisfies
|ck+1 − x∗ | ≤ q |ck − x∗ |, 0 < q < 1,
which shows at least linear convergence.

Remark 2. In contrast, the secant method achieves superlinear convergence, and Newton’s method
achieves quadratic convergence under suitable smoothness assumptions. The Regula Falsi method is
slower but always safe because it maintains bracketing.

The Problem with the Standard Newton-Raphson Method
The standard Newton-Raphson method uses the iteration:

x_{n+1} = x_n − f(x_n)/f'(x_n)

It converges quadratically (very fast) to a root α if f'(α) ≠ 0 (a simple root).
However, if α is a multiple root of multiplicity m > 1 (i.e., f(α) = f'(α) = · · · = f^(m−1)(α) = 0,
but f^(m)(α) ≠ 0), the standard method struggles. Its convergence becomes only linear, and the
error constant is (1 − 1/m), which gets worse as m increases.
Why? Because both f(x_n) and f'(x_n) approach zero as x_n approaches the root α. Their ratio
f(x_n)/f'(x_n) does not go to zero as quickly, slowing down the convergence.

The Solution: Modified Newton-Raphson Method


The modified method restores quadratic convergence by explicitly accounting for the root’s
multiplicity m.
The Modified Iteration Formula:

x_{n+1} = x_n − m · f(x_n)/f'(x_n)
How does it work? The factor m compensates for the shallow slope of f'(x) near the multiple
root. It effectively "jumps" the correct distance towards the root, just as the standard method
does for simple roots.
When to use it?
• When you know the multiplicity m of the root in advance.

• If you suspect a multiple root and can estimate m.

Detailed Explanation: Why the Modification Works


Let α be a root of multiplicity m. Near α, the function can be approximated by:

f(x) ≈ (x − α)^m · g(x)

where g(α) ≠ 0.
Now, let us compute the derivative (using the product rule):

f'(x) ≈ m(x − α)^(m−1) · g(x) + (x − α)^m · g'(x)

Let us see what the standard Newton step calculates:

f(x)/f'(x) ≈ (x − α)^m · g(x) / [m(x − α)^(m−1) · g(x) + (x − α)^m · g'(x)]
           = (x − α) · g(x) / [m · g(x) + (x − α) · g'(x)]

As x → α, (x − α) → 0 and g(x) → g(α), so:

f(x)/f'(x) ≈ (x − α) · g(α) / (m · g(α)) = (x − α)/m

Therefore, the standard Newton step is:

x_{n+1} ≈ x_n − (x_n − α)/m

The error is reduced by a factor of (1 − 1/m) each time (linear convergence).
Now, let us apply the modified step:

x_{n+1} = x_n − m · f(x_n)/f'(x_n) ≈ x_n − m · (x_n − α)/m = x_n − (x_n − α) = α

The modified step cancels the error perfectly in one step (in the approximation), leading to
quadratic convergence.

A Complete Example
Let us solve f(x) = x⁴ − 6.75x² + 6.25x − 1.5 = 0. We know (from factoring) that x = 0.5 is a
double root (multiplicity m = 2).
Let us use both the standard and modified methods starting from x0 = 0.6.
Define the function and its derivative:

f(x) = x⁴ − 6.75x² + 6.25x − 1.5

f'(x) = 4x³ − 13.5x + 6.25

Standard Newton-Raphson Method (m = 1)

Iteration (n) | x_n | f(x_n) | f'(x_n) | x_{n+1} = x_n − f/f'
0 | 0.6000 | −0.05040 | −0.9860 | 0.5489
1 | 0.5489 | −0.01231 | −0.4986 | 0.5242
2 | 0.5242 | −0.00305 | −0.2505 | 0.5120
3 | 0.5120 | −0.00075 | −0.1251 | 0.5060
4 | 0.5060 | −0.00019 | −0.0628 | 0.5030

The method is converging linearly to α = 0.5 (the true root): the error roughly halves each step,
matching the error constant (1 − 1/m) = 1/2. After 4 iterations it has only reached 0.5030.

Modified Newton-Raphson Method (m = 2)

Iteration (n) | x_n | f(x_n) | f'(x_n) | x_{n+1} = x_n − 2 · f/f'
0 | 0.6000 | −0.0504 | −0.9860 | 0.6 − 2 · (0.0511) = 0.4978
1 | 0.4978 | ≈ −0.0000 | ≈ 0.0231 | ≈ 0.5000

Explanation of Iteration 0:

1. f(0.6) = (0.6)⁴ − 6.75(0.6)² + 6.25(0.6) − 1.5 = 0.1296 − 2.43 + 3.75 − 1.5 = −0.0504

2. f'(0.6) = 4(0.6)³ − 13.5(0.6) + 6.25 = 4(0.216) − 8.1 + 6.25 = 0.864 − 8.1 + 6.25 = −0.986

3. f/f' = (−0.0504)/(−0.986) ≈ 0.0511

4. x1 = 0.6 − 2 · (0.0511) = 0.6 − 0.1022 = 0.4978

The modified method jumps to 0.4978 in a single step, landing almost directly on the true root
α = 0.5. A second iteration would refine this to an even more accurate value. This demonstrates
the restored quadratic convergence.
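A short Python sketch comparing the two iterations on this example (the helper name newton_steps is an illustrative assumption; the printed iterates match the tables above to four decimal places):

f  = lambda x: x**4 - 6.75*x**2 + 6.25*x - 1.5   # double root at x = 0.5
df = lambda x: 4*x**3 - 13.5*x + 6.25

def newton_steps(m, x0, n_steps):
    # Run n_steps of x <- x - m * f(x)/f'(x) and return all iterates.
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] - m * f(xs[-1]) / df(xs[-1]))
    return xs

print(newton_steps(m=1, x0=0.6, n_steps=4))   # slow: 0.6, 0.5489, 0.5242, 0.5120, 0.5060
print(newton_steps(m=2, x0=0.6, n_steps=2))   # fast: 0.6, 0.4978, then about 0.5000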

What if the Multiplicity m is Unknown?


You can still use a modified method! One common approach is to apply Newton's method to
g(x) = f(x)/f'(x), which has a simple root at α regardless of m. The resulting iteration, which
does not require knowing m, is:

x_{n+1} = x_n − g(x_n)/g'(x_n) = x_n − f(x_n) f'(x_n) / ([f'(x_n)]² − f(x_n) f''(x_n)).

This formula also achieves quadratic convergence for multiple roots but requires calculating the
second derivative f''(x).
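A minimal Python sketch of this multiplicity-free variant on the double-root example from above (illustrative, not from the notes):

f   = lambda x: x**4 - 6.75*x**2 + 6.25*x - 1.5   # double root at x = 0.5
df  = lambda x: 4*x**3 - 13.5*x + 6.25
d2f = lambda x: 12*x**2 - 13.5

x = 0.6
for _ in range(3):
    # Newton applied to g = f/f': needs f'' but not the multiplicity m.
    x = x - f(x) * df(x) / (df(x)**2 - f(x) * d2f(x))
print(x)   # approximately 0.5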

Summary

Feature | Standard Newton-Raphson | Modified Newton-Raphson
Root Type | Simple roots (m = 1) | Multiple roots (m > 1)
Convergence | Quadratic | Quadratic
Iteration Formula | x_n − f(x_n)/f'(x_n) | x_n − m · f(x_n)/f'(x_n)
Requirement | f'(x) ≠ 0 at root | Know multiplicity m

The Modified Newton-Raphson method is a crucial tool for efficiently solving equations where
roots are not simple, ensuring fast convergence where the standard method would be unacceptably
slow.
