
CH 5150: Optimization Techniques I

Autumn 2024

Instructor: Dr. Kishalay Mitra


Global Optimization & Knowledge Unearthing Lab (GOKUL)
Department of Chemical Engineering
Indian Institute of Technology Hyderabad
([email protected])
https://sites.google.com/site/kishalaymitra/


How far with analytical methods…
• Analytical methods take us only as far as the objectives and constraints are
• simple to express, and
• possible to express explicitly in terms of the decision variables



Numerical Optimization
• Basic philosophy in any numerical optimization

• The way of calculating search directions and step lengths gives rise to the different optimization techniques
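
As a minimal Python sketch of this philosophy (the steepest-descent direction and the fixed step length here are illustrative assumptions, not the only choices):

import numpy as np

def iterative_minimize(grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Generic numerical optimization loop: choose a search direction,
    choose a step length, update the point, and repeat. Different rules
    for the direction and the step give different techniques."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = -grad(x)                  # search direction (here: steepest descent)
        if np.linalg.norm(s) < tol:   # stop when the gradient (almost) vanishes
            break
        x = x + step * s              # step length (here: fixed)
    return x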



One Dimensional Optimization
• Classification



Unimodality
• A function having only one peak (maximization) or one valley (minimization) in a given interval
• Given 2 values of the variable on the same side of the optimum, the one nearer to the optimum gives the smaller function value in the case of minimization
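
Stated formally for minimization on an interval with minimizer λ* (a standard formalization of the property above):

λ1 < λ2 < λ*  ⇒  f(λ1) > f(λ2)        λ* < λ1 < λ2  ⇒  f(λ1) < f(λ2)

This ordering is what lets the area-elimination methods below discard part of the interval after comparing just two function values.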



Area Elimination
Unrestricted Search (how to bracket the optimum when the range is not given)

• Fixed step size: points x1, x2, …, x7 are generated a constant step apart until the function value starts to rise
• Accelerated step size: the step is enlarged (e.g., doubled) after every successful move, so the optimum is bracketed with far fewer points (x1, x2, x3)
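
A minimal Python sketch of the accelerated-step-size variant (the starting point, initial step, and the doubling rule are assumptions for illustration):

def bracket_accelerated(f, x0=0.0, step=0.1):
    """Move in the descent direction, doubling the step after every
    successful move; when f finally rises, the last three points
    bracket the minimum of a unimodal f."""
    a, fa = x0, f(x0)
    b, fb = x0 + step, f(x0 + step)
    if fb > fa:                        # first move went uphill: reverse direction
        step = -step
        a, fa, b, fb = b, fb, a, fa
    while True:
        step *= 2.0                    # accelerate the step size
        c, fc = b + step, f(b + step)
        if fc > fb:                    # f rose: minimum lies between a and c
            return min(a, c), max(a, c)
        a, fa, b, fb = b, fb, c, fc

For example, bracket_accelerated(lambda x: (x - 1.0)**2) returns an interval containing the minimizer x = 1.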



Area Elimination
Exhaustive Search

• Bounds on the search space (the initial interval L0, with both end points known) are given
• n equally spaced points are placed in the initial interval L0, giving rise to n + 1 segments
• The final interval of uncertainty Ln (e.g., [x5, x7]) consists of the 2 segments around the best point:

Ln / L0 = 2 / (n + 1)
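
A minimal Python sketch (the function and interval in the usage example are placeholders):

def exhaustive_search(f, a, b, n):
    """Evaluate f at n equally spaced interior points of [a, b]
    (n + 1 segments) and return the two segments around the best
    point, so that Ln / L0 = 2 / (n + 1)."""
    h = (b - a) / (n + 1)
    xs = [a + i * h for i in range(n + 2)]            # grid including both ends
    k = min(range(1, n + 1), key=lambda i: f(xs[i]))  # best interior point
    return xs[k - 1], xs[k + 1]                       # final interval of uncertainty

For example, exhaustive_search(lambda x: (x - 1.0)**2, 0.0, 3.0, n=8) returns the interval (2/3, 4/3) of length 2·3/9 around x = 1.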



Exhaustive Search Example



Dichotomous Search
• The midpoint of the interval is identified
• 2 points x1, x2 are created δ/2 away from the midpoint, for a small δ
• Based on these function values, a certain area ([x2, xf] in the figure) is eliminated
• The process is repeated on the remaining interval
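
A minimal Python sketch (δ and the stopping tolerance are assumed values):

def dichotomous_search(f, a, b, delta=1e-4, tol=1e-3):
    """Place two points delta/2 on either side of the midpoint of
    [a, b]; unimodality lets us discard (almost) half the interval."""
    while (b - a) > tol:
        mid = 0.5 * (a + b)
        x1, x2 = mid - 0.5 * delta, mid + 0.5 * delta
        if f(x1) < f(x2):
            b = x2            # minimum cannot lie beyond x2
        else:
            a = x1            # minimum cannot lie before x1
    return a, b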



Dichotomous Search Example



Interval Halving
• Divide the initial interval into 4 equal parts using 3 points – one midpoint (x0) and 2 quarter points (x1, x2)
• Based on the function values at these points, 50% of the current interval is eliminated
• 2 new points are then created using one of the existing points, and the process is repeated
• Interval of uncertainty remaining at the end of n experiments (n ≥ 3 and n odd):

Ln / L0 = (1/2)^((n − 1)/2)
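
A minimal Python sketch; the midpoint value is reused, so each iteration after the first costs only 2 new function evaluations:

def interval_halving(f, a, b, tol=1e-3):
    """Compare f at the midpoint x0 and the quarter points x1, x2;
    50% of the current interval is discarded per iteration."""
    x0, f0 = 0.5 * (a + b), f(0.5 * (a + b))
    while (b - a) > tol:
        q = 0.25 * (b - a)
        x1, x2 = a + q, b - q
        f1, f2 = f(x1), f(x2)
        if f1 < f0:                  # minimum in [a, x0]
            b, x0, f0 = x0, x1, f1
        elif f2 < f0:                # minimum in [x0, b]
            a, x0, f0 = x0, x2, f2
        else:                        # minimum in [x1, x2]; x0 stays the midpoint
            a, b = x1, x2
    return a, b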



Interval Halving Example



Fibonacci Search
• In interval halving, 50% of the area is eliminated in every iteration using 2 function evaluations → 25% of the area eliminated per function evaluation
• Can we eliminate this much or more area using only one function evaluation per iteration?
• This technique uses Fibonacci numbers to carry out the area elimination process with one function evaluation per iteration



Fibonacci Search
• Based on the accuracy required, n and Fn are determined, where n is the total number of experiments
• 2 test points (x1, x2) are placed at a distance L2* = (F(n−2) / Fn) L0 from each end of the initial interval L0 (end points a and b known)
• Using the unimodality assumption, the region that cannot contain the optimum is discarded
• In the remaining area (the next interval, L2), one experiment is already present; only one new experiment needs to be introduced
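
A minimal Python sketch (the final tie-breaking δ-perturbation of the classical method is omitted for brevity):

def fibonacci_search(f, a, b, n):
    """Fibonacci search with n experiments: the first iteration places
    2 points at distance (F(n-2)/Fn) L0 from each end; every later
    iteration reuses one interior point and adds only one evaluation."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])      # F[0..n]
    L0 = b - a
    x1 = a + F[n - 2] / F[n] * L0    # left test point
    x2 = b - F[n - 2] / F[n] * L0    # right test point
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:                  # discard (x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - x2)        # new point, symmetric to the survivor
            f1 = f(x1)
        else:                        # discard [a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = b - (x1 - a)
            f2 = f(x2)
    return a, b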

Fibonacci Example

Golden Section Search
• Same as the Fibonacci method except that
• the number of experiments need not be specified at the beginning
• the location of the first 2 experiments does not require the total number of experiments – we assume instead that a large number of experiments will be conducted



Golden Section Search
• Same as the Fibonacci method except that
• the first 2 experiments are positioned at a distance of about 0.382 L0 from each end, i.e., L2* = L0 / γ², where γ = 1.618… is the golden ratio (the limit of the ratio of consecutive Fibonacci numbers)
• the desired accuracy can stop the procedure at any point

• Ancient Greek architects: a building having sides d and b and satisfying the condition below would have the most pleasing properties:

d / b = (d + b) / d = γ ≈ 1.618
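
A minimal Python sketch (the stopping tolerance is an assumed value):

GOLDEN = (5 ** 0.5 - 1) / 2          # 1/gamma ≈ 0.618

def golden_section(f, a, b, tol=1e-5):
    """Interior points at 0.382 and 0.618 of the current interval;
    exactly one new function evaluation per iteration."""
    x1, x2 = b - GOLDEN * (b - a), a + GOLDEN * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                   # minimum in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - GOLDEN * (b - a)
            f1 = f(x1)
        else:                         # minimum in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + GOLDEN * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)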



Golden Section Example



Comparison
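
A standard summary of the final interval of uncertainty after n function evaluations, assuming the usual formulations of each method:

Method               Ln / L0
Exhaustive search    2 / (n + 1)
Dichotomous search   ≈ (1/2)^(n/2)         (neglecting δ)
Interval halving     (1/2)^((n − 1)/2)     (n ≥ 3, n odd)
Fibonacci search     ≈ 1 / Fn
Golden section       (0.618)^(n − 1)

For a fixed budget n, Fibonacci search gives the smallest final interval; golden section comes arbitrarily close without fixing n in advance.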



Quadratic Interpolation
• Uses function values only; no derivatives
• Useful for cases where derivative computation is expensive or not possible

3-stage approach
• Stage 1: Normalize the direction vector
• Stage 2: Fit a quadratic approximation to the given function and find its minimum through a successive quadratic approximation approach
• Stage 3: Terminate based on different criteria



QI (contd.)
Stage 1: Direction vector normalization

• Any n-dimensional direction vector s = {s1, s2, …, si, …, sn} can be normalized by dividing each component by μ, where

μ = max_i |si|

• Other way:

μ = (s1² + s2² + … + sn²)^(1/2)



QI (contd.)
Stage 2: Quadratic approximation
• f() is the univariate function to which the h() quadratic
function approximation needs to be fitted

• We need 3 points (A, B and C) to find coefficients for this


function



QI (contd.)
Quadratic approximation (by setting h′(λ) = 0)

• Assuming the 3 points (A, B and C) to be λ = 0 (fA), t (fB) and 2t (fC), where t is a trial step length to be assumed (λ = 0 saves one function evaluation from the next iteration onwards), the minimizer of h is

λ* = t (4fB − 3fA − fC) / (2 (2fB − fA − fC))

provided h″(λ*) > 0 (i.e., the coefficient c > 0)



QI (contd.)
This can be ensured as follows:
• Choose fA = f(λ = 0) and compute f1 = f(λ = t0)
• If f1 > fA: set fC = f1, compute fB = f(λ = t0/2), and compute the optimum λ* using t = t0/2
• If f1 < fA: set fB = f1 and compute f2 = f(λ = 2t0)
• If f2 > f1: set fC = f2 (with fB = f1) and compute the optimum λ* using t = t0
• If f2 < f1: replace t0 by 2t0, set f1 = f2, and repeat the above steps until f2 > f1



QI (contd.)
Stage 3: Termination
• We need to ensure that the optimum λ* of the approximating function h(λ) is sufficiently close to the true optimum of the original function f(λ)

• Termination criteria: e.g., the relative difference |h(λ*) − f(λ*)| / |f(λ*)| and the derivative |f′(λ*)| both within specified tolerances

• If the termination criteria are satisfied, stop

• Else refit a new quadratic polynomial using the 3 best points out of the 4 points from the previous iteration
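
A minimal Python sketch of stages 2–3 in one pass (the trial step t0 is an assumed value; stage 3's refitting loop would wrap this step until the termination test passes):

def quadratic_step(f, t0=0.1):
    """One pass of stages 2-3: bracket with points at 0, t, 2t, fit
    h(L) = a + b L + c L^2, and return its minimizer
    L* = t (4 fB - 3 fA - fC) / (2 (2 fB - fA - fC))."""
    fA, f1 = f(0.0), f(t0)
    if f1 > fA:                         # minimum lies before t0: halve the step
        t, fB, fC = 0.5 * t0, f(0.5 * t0), f1
    else:
        f2 = f(2.0 * t0)
        while f2 < f1:                  # minimum beyond 2 t0: double the step
            t0, f1, f2 = 2.0 * t0, f2, f(4.0 * t0)
        t, fB, fC = t0, f1, f2
    return t * (4.0 * fB - 3.0 * fA - fC) / (2.0 * (2.0 * fB - fA - fC))

For example, for f(L) = (L - 1)**2, quadratic_step returns exactly 1.0, since the fitted quadratic coincides with f.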



QI (contd.)
Refitting
• Select the 3 best points (those that bracket the minimum) out of all possible situations



QI - Example



Cubic Interpolation
• Uses function values and derivatives
4-stage approach
• Stage 1: Normalize the direction vector
• Stage 2: Bracket the optimum point
• Stage 3: Fit a cubic approximation to the given function and find its minimum through a successive cubic approximation approach
• Stage 4: Terminate based on different criteria



CI (contd.)
Stage 1: Direction vector normalization

• Any n-dimensional direction vector s = {s1, s2, …, si, …, sn} can be normalized by dividing each component by μ, where

μ = max_i |si|

• Other way:

μ = (s1² + s2² + … + sn²)^(1/2)



CI (contd.)
Stage 2: Bracketing optimum
• f() is univariate function – to bracket the optimum, derivative
information at 2 points checked for signs (one –ve and one
+ve)

• At  = 0 (point A), since S is assumed to be the direction of


descent

• We find one more point (point B) where the slope (df/d) is


+ve - +t0, 2t0, 4t0, 8t0 etc. till the above condition satisfied



CI (contd.)
Stage 3: Cubic approximation
• f() is the univariate function to which the h() cubic function
approximation needs to be fitted

• We need 4 data (function value of A, B & derivative information


at A, B) to find coefficients



CI (contd.)
Stage 3: Cubic approximation
• Application of the optimality conditions (h′(λ) = 0 with h″(λ) > 0) leads to the minimizer of the fitted cubic; in one standard closed form,

λ* = A + (B − A)(Z + Q − f′A) / (f′B − f′A + 2Q)

with Z = 3(fA − fB) / (B − A) + f′A + f′B and Q = (Z² − f′A f′B)^(1/2)
(taking A = 0, as in the bracketing stage, simplifies these expressions)
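
A minimal Python sketch of this step under the bracket from stage 2 (f′(A) < 0 < f′(B)); the Z–Q closed form above is assumed:

import math

def cubic_minimizer(A, B, fA, fB, dfA, dfB):
    """Minimizer of the cubic fitted to f(A), f(B), f'(A), f'(B)."""
    Z = 3.0 * (fA - fB) / (B - A) + dfA + dfB
    Q = math.sqrt(Z * Z - dfA * dfB)   # real, since dfA <= 0 <= dfB
    return A + (B - A) * (Z + Q - dfA) / (dfB - dfA + 2.0 * Q)

For example, for f(L) = L**3 - 3L bracketed by A = 0, B = 2 (fA = 0, fB = 2, f′A = -3, f′B = 9), the fit is exact and the call returns the true minimizer L = 1.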



CI (contd.)
Stage 4: Termination
• We need to ensure that the optimum λ* of the approximating function h(λ) is sufficiently close to the true optimum of the original function f(λ)
• Termination criteria: e.g., |f′(λ*)| within a specified tolerance

• If the termination criteria are satisfied, stop

• Else refit a new cubic polynomial using the 2 best points out of the 3 points from the previous iteration (keeping the derivative signs opposite)



CI - Example



Root Finding Methods
• The necessary condition for the minimum of a function f(λ) is f′(λ) = 0
• Solving f′(λ) = 0 in numerical analysis is done by ROOT FINDING METHODS, e.g., Newton-Raphson, quasi-Newton, secant methods, etc., so root finding is synonymous with finding the minimum



Newton Raphson
• Perform a quadratic approximation of the given function f(λ) around the current point λi, for which the minimum needs to be found:

f(λ) ≈ f(λi) + f′(λi)(λ − λi) + ½ f″(λi)(λ − λi)²

• Applying the optimality criterion of the first derivative going to 0, new points are generated using

λ(i+1) = λi − f′(λi) / f″(λi)

• Terminate when |f′(λ(i+1))| ≤ ε
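
A minimal Python sketch (the tolerance and iteration cap are assumed values):

def newton_raphson(df, d2f, lam, eps=1e-6, max_iter=50):
    """Drive f'(lambda) to zero via
    lambda_{i+1} = lambda_i - f'(lambda_i) / f''(lambda_i)."""
    for _ in range(max_iter):
        lam = lam - df(lam) / d2f(lam)
        if abs(df(lam)) <= eps:        # first-order optimality reached
            break
    return lam

For example, minimizing f(L) = L**2 - 2L: newton_raphson(lambda L: 2*L - 2, lambda L: 2.0, lam=5.0) returns 1.0 after a single step, since f is exactly quadratic.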



NR (contd.)
• Originally developed by Newton for solving nonlinear equations; modified later by Raphson

• Uses both first and second order derivatives of the function f(λ)

• If f″(λ) ≠ 0, NR has the fastest convergence property: quadratic convergence

• If the initial solution is not sufficiently close to the true optimum, NR can diverge instead of converging



NR - Example



Quasi Newton
• In case the function f(λ) is not available in closed form, or computation of the derivatives is not possible analytically, we use finite-difference approximations of the derivatives (e.g., central differences – others could have been used)

• The algorithm and convergence criterion then become

λ(i+1) = λi − Δλ [f(λi + Δλ) − f(λi − Δλ)] / (2 [f(λi + Δλ) − 2 f(λi) + f(λi − Δλ)])

terminate when |f(λ(i+1) + Δλ) − f(λ(i+1) − Δλ)| / (2Δλ) ≤ ε

• Function evaluations are required at f(λi + Δλ) and f(λi − Δλ), apart from f(λi)
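
A minimal Python sketch using central differences (the step size Δλ = dl is an assumed value):

def quasi_newton_1d(f, lam, dl=1e-3, eps=1e-6, max_iter=100):
    """Newton-Raphson with derivatives replaced by central differences;
    each iteration needs f(lam - dl), f(lam), f(lam + dl)."""
    for _ in range(max_iter):
        fm, f0, fp = f(lam - dl), f(lam), f(lam + dl)
        lam = lam - dl * (fp - fm) / (2.0 * (fp - 2.0 * f0 + fm))
        if abs(f(lam + dl) - f(lam - dl)) / (2.0 * dl) <= eps:
            break
    return lam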



QN - Example



Secant
• The purpose is to bracket the root – so choose 2 points on the two sides of the root, assume a straight line between them, and find the point where that line crosses the x-axis

(y − f(A)) / (x − A) = (f(A) − f(B)) / (A − B)

Setting y = 0 gives the next point:

x = A − f(A)(A − B) / (f(A) − f(B))



Secant (contd.)



Secant - Example
