
UNIT-III

Module 3
DYNAMIC PROGRAMMING

General method, Matrix-chain multiplication, All pairs shortest path, Optimal binary search trees, 0/1
Knapsack problem, Traveling salesperson problem, Flow shop scheduling.

DYNAMIC PROGRAMMING
Dynamic programming is a method that solves a given problem by making a sequence of decisions. To obtain the optimal solution, we enumerate the possible decision sequences and, using the principle of optimality, select the optimal one.
The difference between the greedy method and dynamic programming is this: in the greedy method we commit to a single decision at each step to build the solution, whereas in dynamic programming we consider sequences of decisions that satisfy the constraints and finally obtain the optimal solution by using the principle of optimality.
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for future use, so that we do not need to compute them again. The property that the overall solution is optimized by combining optimized subproblem solutions is known as the optimal substructure property.
The main use of dynamic programming is to solve optimization problems, that is, problems in which we are trying to find the minimum or the maximum value of some objective. Dynamic programming is guaranteed to find the optimal solution of a problem if a solution exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing their solutions to avoid repetitive computations.

General method
• Dynamic Programming is an algorithm design method used when the solution to a problem can be viewed as the result of a sequence of decisions.
• Dynamic programming drastically reduces the amount of enumeration by eliminating those decision sequences that cannot be optimal.
• In dynamic programming the optimal sequence of decisions is found by following the principle of optimality.
Principle of optimality:

An optimal sequence of decisions has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

Consider the example of the Fibonacci series:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
The numbers in the above series are not randomly chosen. Mathematically, each term is given by the formula:
F(n) = F(n-1) + F(n-2),
with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
How can we calculate F(20)?
F(20) is calculated by expanding the recurrence above. The figure below shows how F(20) is calculated.

As we can observe in the figure, F(20) is calculated as the sum of F(19) and F(18). In the dynamic programming approach, we divide the problem into similar subproblems; here F(20) is divided into the subproblems F(19) and F(18).
The definition of dynamic programming says that a similar subproblem should not be computed more than once, yet in the plain recursive expansion F(18) is calculated twice, and F(17) is likewise calculated twice. The technique is useful precisely because it solves such repeated subproblems only once, but we must be careful to store each result the first time it is computed; otherwise recomputation leads to a wastage of resources.
For example, if we recompute F(18) in the right subtree, it leads to tremendous resource usage and decreases the overall performance.
The solution to the above problem is to save the computed results in an array. First, we calculate F(16) and F(17) and save their values in the array. F(18) is then calculated by summing the saved values of F(17) and F(16), and the computed value of F(18) is saved in turn. The value of F(19) is calculated from the already-saved values of F(18) and F(17), and stored as well. Finally, the value of F(20) is calculated by adding the stored values of F(19) and F(18), and the final computed value of F(20) is stored in the array.
The following are the steps that dynamic programming follows:
• It breaks down the complex problem into simpler subproblems.
• It finds the optimal solution to these sub-problems.
• It stores the results of the sub-problems; this process of storing the results of subproblems is known as memoization.
• It reuses the stored results so that the same sub-problem is not calculated more than once.
• Finally, it combines the stored results to calculate the result of the complex problem.

Dynamic programming is applicable to problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution of an optimization problem can be obtained by simply combining the optimal solutions of all its subproblems. In dynamic programming, the space complexity increases because we store the intermediate results, but the time complexity decreases.

Approaches of dynamic programming :

There are two approaches to dynamic programming:


• Top-down approach
• Bottom-up approach
Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method. Here memoization equals recursion plus caching: recursion means the function calls itself, while caching means storing the intermediate results.
Advantages
• It is very easy to understand and implement.
• It solves a subproblem only when it is required.
• It is easy to debug.
Disadvantages
It uses recursion, which occupies extra memory in the call stack. Sometimes, when the recursion is too deep, a stack overflow occurs. This extra memory usage degrades the overall performance.
Let's understand dynamic programming through an example.
int fib(int n)
{
    if(n < 0)
        error;
    if(n == 0)
        return 0;
    if(n == 1)
        return 1;
    return fib(n-1) + fib(n-2);
}

In the above code, we have used the recursive approach to find the Fibonacci numbers. As the value of 'n' increases, the number of function calls and computations also increases; the time complexity grows exponentially, becoming O(2^n).
One solution to this problem is to use the dynamic programming approach: rather than generating the recursion tree again and again, we reuse the previously calculated values. With the dynamic programming approach, the time complexity becomes O(n).
When we apply the dynamic programming approach in the implementation of the Fibonacci
series, then the code would look like:
int memo[MAX];   // all entries initialized to NIL
int fib(int n)
{
    if(n < 0)
        error;
    if(memo[n] != NIL)
        return memo[n];
    if(n == 0)
        return 0;
    if(n == 1)
        return 1;
    memo[n] = fib(n-1) + fib(n-2);
    return memo[n];
}
In the above code, we have used the memoization technique, in which we store the results in an array to reuse the values. This is also known as the top-down approach, in which we move from the top and break the problem into sub-problems.
Bottom-Up approach
The bottom-up approach is another technique that can be used to implement dynamic programming. It uses the tabulation technique. It solves the same kinds of problems but removes the recursion: with no recursion there is no stack overflow issue and no overhead of recursive function calls. In the tabulation technique, we solve the subproblems iteratively and store the results in a table.
The bottom-up approach avoids recursion, thus saving memory space. A bottom-up algorithm starts from the beginning, whereas a recursive (top-down) algorithm starts from the end and works backward. In the bottom-up approach, we start from the base cases and build up to the final answer. As we know, the base cases in the Fibonacci series are 0 and 1, so the bottom-up approach starts from 0 and 1.
Key points
• We solve all the smaller sub-problems that will be needed to solve the larger sub-problems, then move to the larger problems using the smaller sub-problems.
• We use a for loop to iterate over the sub-problems.
• The bottom-up approach is also known as the tabulation or table-filling method.
int fib(int n)
{
    int A[n+1];
    A[0] = 0; A[1] = 1;
    for( i = 2; i <= n; i++ )
    {
        A[i] = A[i-1] + A[i-2];
    }
    return A[n];
}
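The two approaches above can be sketched as runnable Python (a minimal sketch; the function names and the use of functools.lru_cache as the cache are illustrative choices, not from the text):

```python
from functools import lru_cache

# Top-down: recursion + caching (memoization)
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: tabulation, starting from the base cases 0 and 1
def fib_bottom_up(n):
    if n < 2:
        return n
    A = [0] * (n + 1)          # table of already-solved subproblems
    A[1] = 1
    for i in range(2, n + 1):
        A[i] = A[i - 1] + A[i - 2]
    return A[n]
```

Both run in O(n) time; the bottom-up version additionally avoids the recursion depth of the top-down one.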

Steps for Dynamic programming :


1 : Characterize the given problem by a mathematical equation that gives the solution (or) sub-solution for the given problem.
2 : Recursively define the value of the optimal solution.
3 : Compute the value of the optimal solution.
4 : Construct the optimal solution from the computed information, by backtracking through the stored decisions.

Applications of Dynamic Programming


1 : Matrix Chain Multiplication
2 : All Pairs Shortest Path Problem
3 : Travelling Sales Person Problem.
4 : 0/1 Knapsack Problem
5 : Optimal Binary Search Tree ( OBST)
6 : Flow Shop Scheduling
MATRIX CHAIN MULTIPLICATION
Input: n matrices A1, A2, …, An of dimensions P1×P2, P2×P3, …, Pn×Pn+1 respectively.
Goal: To compute the matrix product A1A2…An.
Problem: In what order should A1A2…An be multiplied so that it takes the minimum
number of computations to derive the product?
Let A and B be two matrices of dimensions p×q and q×r, and let C = AB; C is of dimension p×r.
Each entry Cij takes q scalar multiplications and q−1 scalar additions, so computing all of C takes p·q·r scalar multiplications.
Consider an example of the best way of multiplying 3 matrices.
Let A1 of dimensions 5x4
A2 of dimensions 4x6
A3 of dimensions 6x2
(A1 A2) A3 takes (5x4x6) + (5x6x2) = 180
A1 (A2 A3) takes (5x4x2) + (4x6x2) = 88
Thus A1 (A2 A3) is much cheaper to compute than (A1 A2) A3, although both lead to the same
final answer. Hence optimal cost is 88.
To solve this problem using dynamic programming method, we will perform the following
steps.
Step 1: Let M(i, j) denote the cost of multiplying Ai…Aj, where the cost is measured in the number of scalar multiplications.
Here, M(i, i) = 0 for all i, and M(1, n) is the required solution.
Step 2: The sequence of decisions can be built using the principle of optimality. Consider the process of matrix
chain multiplication.
Let T be the tree corresponding to the optimal way of multiplying Ai…Aj.
T has a left sub-tree L and a right sub-tree R. L corresponds to multiplying Ai…Ak and R to multiplying Ak+1…Aj,
for some integer k such that i ≤ k ≤ j−1.
Thus we get optimal sub-chains of matrices whose products are then multiplied together. This shows that
matrix chain multiplication satisfies the principle of optimality.
Step 3: We apply the following formula for computing each sequence:
m[i, j] = 0, if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + P(i−1) · P(k) · P(j) }, if i < j
To keep track of the optimal subsolutions, we store the value of k in a table s[i, j]. Recall, k is the place at which we
split the product Ai..Aj to get an optimal parenthesization.
That is, s[i, j] = k such that m[i, j] = m[i, k] + m[k + 1, j] + P(i−1) · P(k) · P(j)

The basic algorithm of matrix chain multiplication:-


// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
MatrixChainMultiplication(int dims[])
{
// length[dims] = n + 1
n = dims.length - 1;
// m[i,j] = Minimum number of scalar multiplications(i.e., cost)
// needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
// The cost is zero when multiplying one matrix
for (i = 1; i <= n; i++)
m[i, i] = 0;
for (len = 2; len <= n; len++)
{
// Subsequence lengths
for (i = 1; i <= n - len + 1; i++)
{
j = i + len - 1;
m[i, j] = MAXINT;
for (k = i; k <= j - 1; k++)
{
cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
if (cost < m[i, j])
{
m[i, j] = cost;
s[i, j] = k;
// Index of the subsequence split that achieved minimal cost
}
}
}
}
}
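The pseudocode above translates directly into Python. This sketch (the function name and list-based tables are my own choices, not from the text) reproduces the three-matrix example above, where the optimal cost is 88 with split A1(A2A3):

```python
import sys

def matrix_chain(dims):
    # Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
    n = len(dims) - 1
    # m[i][j]: minimum scalar multiplications needed to compute A[i..j]
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain lengths
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k               # split that achieved minimal cost
    return m[1][n], s
```

For dims = [5, 4, 6, 2] (the matrices A1: 5×4, A2: 4×6, A3: 6×2), matrix_chain returns cost 88 and s[1][3] = 1, i.e. the split A1(A2A3).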

Step 4: Construct the optimal parenthesization by recursively reading the split positions stored in the table s[i, j]: s[1, n] gives the top-level split, and the process repeats on each half.
OPTIMAL BINARY SEARCH TREE ( OBST ) : Given a set of identifiers { a1, a2, …, an } with
a1 < a2 < … < an, let p(i) be the probability with which we search for ai, and let q(i) be the probability that the
identifier x being searched for is absent, with ai < x < ai+1 and 0 ≤ i ≤ n. In other words, p(i) is the probability of a successful
search and q(i) is the probability of an unsuccessful search.
Clearly ∑1≤i≤n p(i) + ∑0≤i≤n q(i) = 1. The goal is to obtain a tree with minimum cost; such a tree with optimum cost is
called an optimal binary search tree.
To solve this problem using dynamic programming method by using following formulas.
1 : c ( i, j ) = min { c( i , k - 1 ) + c( k , j ) + w ( i , j ) }
i<k≤j
2 : w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
3 : r( i , j ) = k
Example1 : Using algorithm OBST compute w(i,j), r(i,j) and c(i,j), 0 ≤ i ≤ j ≤ 4 for the identifier set ( a1,
a2, a3, a4 ) = ( do, while, for, if ) with ( p1, p2, p3, p4 ) = ( 3, 3, 1, 1 ) and ( q0, q1, q2,
q3, q4 ) = ( 2, 3, 1, 1, 1 ) using r ( i , j ) construct the optimal binary search tree.

Solution :
Successful Probability : ( p1, p2, p3, p4 ) = ( 3, 3, 1, 1 )
UnSuccessful Probability : ( q0, q1, q2, q3, q4 ) = ( 2, 3, 1, 1, 1 )
identifier set : ( a1, a2, a3, a4 ) = (do, while, for, if) Initial
Conditions :
w(i,j)= q(i)
c(i,j) = 0
r(i,j) = 0
Formulas :
1. w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
2. c ( i, j ) = min { c( i , k - 1 ) + c( k , j ) + w ( i , j ) }
i<k≤j
3. r( i , j ) = k

Step1 : j – i = 0

w ( i , j ) = q(i)
w(0,0) = q(0)= 2 c(0,0) = 0 r(0,0) = 0
w(1,1) = q(1)= 3 c(1,1) = 0 r(1,1) = 0
w(2,2) = q(2)= 1 c(2,2) = 0 r(2,2) = 0
w(3,3) = q(3)= 1 c(3,3) = 0 r(3,3) = 0
w(4,4) = q(4)= 1 c(4,4) = 0 r(4,4) = 0
Step 2 : j – i = 1 , ( i =0, j = 1, k = 1 )
w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
w ( 0, 1 ) = p(1) + q(1) + w(0,0)
=3 + 3 + 2 = 8
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c( 0,1) = min { c(0,0)+c(1,1)} + w(0,1)
= min { 0 +0 } + 8
= 0+8 = 8
r(0,1) = 1
( i =1, j = 2, k = 2 )
w ( 1, 2 ) = p(2) + q(2) + w(1,1)
=3+1+3=7
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c(1,2) = min { c(1,1)+c(2,2)} + w(1,2)
= min { 0 + 0 } + 7= 7
r(1,2) = 2
( i =2, j = 3, k = 3 )
w ( 2, 3 ) = p ( 3 ) + q(3) + w(2,2)
=1+1+1=3
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c( 2,3) = min { c(2,2) + c(3,3)} + w(2,3)
= min { 0 + 0 } + 3 = 3
r(2,3) = 3

( i =3, j = 4, k = 4 )
w ( 3, 4 ) = p(4)+q(4)+w(3,3)
=1+1+1=3
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c( 3,4) = min { c(3,3) + c( 4,4) } + w(3,4)
= min { 0 + 0} + 3 = 3
r(3,4) = 4
Step 3 : j – i = 2 , ( i =0, j= 2, k= 1,2 )

w( 0,2 ) = p(2) + q(2) + w ( 0,1)


= 3 + 1 + 8 = 12
c(0,2) = min { c(0,0)+c(1,2), c(0,1)+c(2,2)} + w(0,2)
= min { 0 + 7 , 8 + 0 } + 12
= min { 7, 8 } + 12
= 7 + 12 = 19
r(0,2) = 1

( i =1, j= 3, k=2,3 )

w(1,3) = p(3) + q(3) + w( 1,2)


=1+1+7=9
c(1,3) = min{c(1,1)+c(2,3), c(1,2) +c(3,3) } + w(1,3)
=min { 0 + 3, 7 + 0 } + 9
= min { 3, 7 } + 9
= 3 + 9 = 12
r(1,3) = 2
( i =2, j= 4, k=3,4 )

w(2,4) = p(4) + q(4) + w( 2, 3)
= 1 + 1 + 3 = 5
c(2,4) = min { c(2,2)+c(3,4), c(2,3)+c(4,4) } + w(2,4)
= min { 0 + 3, 3 + 0 } + 5 = 3 + 5 = 8
r(2,4) = 3
Step 4 : j – i = 3 , ( i =0, j= 3, k=1,2,3 )
w(0,3) = p(3) + q(3) + w(0,2)
= 1 + 1 + 12 = 14
c(0,3) = min { c(0,0)+c(1,3), c(0,1)+c(2,3), c(0,2) +c(3,3) } + w(0,3)
= min { 0 + 12, 8 + 3 , 19 + 0 } + 14
= min { 12, 11, 19 } + 14
= 11 + 14 = 25
r(0,3) = 2
( i =1, j= 4, k=2,3,4)
w(1,4) = p(4) + q(4) + w( 1,3)
= 1 + 1 + 9 = 11
c(1,4) = min { c(1,1)+c(2,4), c(1,2)+c(3,4), c(1,3)+c(4,4) } + w(1,4)
= min { 0 + 8, 7 + 3, 12 + 0 } + 11
= min { 8, 10, 12 } + 11
= 8 + 11 = 19
r(1,4) = 2
Step 5 : j – i = 4 (i =0, j= 4, k=1,2,3,4)

w(0,4) = p(4) + q(4) + w(0,3)


= 1 + 1 + 14 = 16
c(0,4) = min { c(0,0)+c(1,4), c(0,1)+c(2,4), c(0,2)+c(3,4), c(0,3)+c(4,4)} + w( 0,4)
=min{ 0 + 19, 8+8, 19+3 , 25+0} + 16
=min{ 19, 16, 22, 25 } + 16
=16+16= 32
r(0,4) = 2
To build the Optimal Binary Search Tree ( OBST ), collect the computed values in a table ordered by j – i :

j – i = 0 : w(0,0) = 2   w(1,1) = 3   w(2,2) = 1   w(3,3) = 1   w(4,4) = 1
            c(0,0) = 0   c(1,1) = 0   c(2,2) = 0   c(3,3) = 0   c(4,4) = 0
            r(0,0) = 0   r(1,1) = 0   r(2,2) = 0   r(3,3) = 0   r(4,4) = 0

j – i = 1 : w(0,1) = 8   w(1,2) = 7   w(2,3) = 3   w(3,4) = 3
            c(0,1) = 8   c(1,2) = 7   c(2,3) = 3   c(3,4) = 3
            r(0,1) = 1   r(1,2) = 2   r(2,3) = 3   r(3,4) = 4

j – i = 2 : w(0,2) = 12  w(1,3) = 9   w(2,4) = 5
            c(0,2) = 19  c(1,3) = 12  c(2,4) = 8
            r(0,2) = 1   r(1,3) = 2   r(2,4) = 3

j – i = 3 : w(0,3) = 14  w(1,4) = 11
            c(0,3) = 25  c(1,4) = 19
            r(0,3) = 2   r(1,4) = 2

j – i = 4 : w(0,4) = 16
            c(0,4) = 32
            r(0,4) = 2
To build OBST, r ( 0,4) = 2, K =2.
Hence a2 becomes root node.
For r(i, j) = k, the left subtree is built from r(i, k-1) and the right subtree from r(k, j):

r(0,4) = 2 → left: r(0,1) = 1, right: r(2,4) = 3
r(0,1) = 1 → left: r(0,0) = 0, right: r(1,1) = 0 (both empty)
r(2,4) = 3 → left: r(2,2) = 0 (empty), right: r(3,4) = 4
r(3,4) = 4 → left: r(3,3) = 0, right: r(4,4) = 0 (both empty)
( a1, a2, a3, a4 ) = ( do, while, for, if )

          while
         /     \
       do       for
                   \
                    if

Optimal Binary Search Tree with cost = 32


Example 2 : Using the algorithm OBST, compute W(i,j), R(i,j) and C(i,j), 0 ≤ i ≤ j ≤ 4, for the identifier set
(a1,a2,a3,a4) = (end, goto, print, stop) with p(1)=1/20, p(2)=1/5, p(3)=1/10, p(4)=1/20; q(0)=1/5, q(1)=1/10,
q(2)=1/5, q(3)=1/20 and q(4)=1/20. Using the R(i,j)'s, construct the OBST.

Solution :
Successful Probability (scaled by 20 to clear the fractions):
P(1) = 1/20 × 20 = 1
P(2) = 1/5 × 20 = 4
P(3) = 1/10 × 20 = 2
P(4) = 1/20 × 20 = 1
( p1, p2, p3, p4 ) = ( 1, 4, 2, 1 )
UnSuccessful Probability
q(0)=1/5 * 20 = 4
q(1)=1/10 * 20 = 2
q(2)=1/5 * 20 = 4
q(3)=1/20 * 20=1
q(4)=1/20* 20=1
(q0,q1,q2,q3,q4) = ( 4,2,4,1,1 )
(a1,a2,a3,a4)=(end, goto, print, stop)
Initial Conditions :
w(i,j)= q(i)
c(i,j) = 0
r(i,j) = 0
Formulas :

1. w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
2. c ( i, j ) = min { c( i , k - 1 ) + c( k , j ) + w ( i , j ) }
i<k≤j
3. r( i , j ) = k
Step1 : j – i = 0

w ( i , j ) = q(i)
w(0,0) = q(0)= 4 c(0,0) = 0 r(0,0) = 0
w(1,1) = q(1)= 2 c(1,1) = 0 r(1,1) = 0
w(2,2) = q(2)= 4 c(2,2) = 0 r(2,2) = 0
w(3,3) = q(3)= 1 c(3,3) = 0 r(3,3) = 0
w(4,4) = q(4)= 1 c(4,4) = 0 r(4,4) = 0

Step 2 : j – i = 1 , ( i =0, j = 1, k = 1 )

w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
w ( 0, 1 ) = p(1) + q(1) + w(0,0)
=1 + 2 + 4 = 7
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c( 0,1) = min { c(0,0)+c(1,1)} + w(0,1)
= min { 0 +0 } + 7
= 0+7 = 7
r(0,1) = 1
( i =1, j = 2, k = 2 )
w ( 1, 2 ) = p(2)+q(2)+w(1,1)
= 4 + 4 + 2 = 10
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c(1,2) = min { c(1,1)+c(2,2)} + w(1,2)
= min { 0 + 0 } + 10= 10
r(1,2) = 2
( i =2, j = 3, k = 3 )
w ( 2, 3 ) = p ( 3 ) + q(3) + w(2,2)
=2+1+4=7
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c( 2,3) = min { c(2,2) + c(3,3)} + w(2,3)
= min { 0 + 0 } + 7 = 7
r(2,3) = 3
( i =3, j = 4, k = 4 )
w ( 3, 4 ) = p(4)+q(4)+w(3,3)
=1+1+1=3
c ( i, j ) = min { c( i , k - 1 ) + c( k , j )} + w ( i , j )
i<k≤j
c( 3,4) = min { c(3,3) + c( 4,4) } + w(3,4)
= min { 0 + 0} + 3 = 3
r(3,4) = 4
Step 3 : j – i = 2 , ( i =0, j= 2, k= 1,2 )

w( 0,2 ) = p(2) + q(2) + w ( 0,1)


= 4 + 4 + 7 = 15
c(0,2) = min { c(0,0)+c(1,2), c(0,1)+c(2,2)} + w(0,2)
= min { 0 + 10 , 7 + 0 } + 15
= min { 10, 7 } + 15
= 7 + 15 = 22
r(0,2) = 2
( i =1, j= 3, k=2,3 )

w(1,3) = p(3) + q(3) + w( 1,2)


= 2 + 1 + 10 = 13
c(1,3) = min{c(1,1)+c(2,3), c(1,2) +c(3,3) } + w(1,3)
=min { 0 + 7, 10 + 0 } + 13
= min { 7, 10 } + 13
= 7+13 = 20
r(1,3) = 2
( i =2, j= 4, k=3,4 )
w(2,4) = p(4) + q(4) + w( 2, 3)
= 1 + 1 + 7 = 9
c(2,4) = min { c(2,2)+c(3,4), c(2,3)+c(4,4) } + w(2,4)
= min { 0 + 3, 7 + 0 } + 9 = 3 + 9 = 12
r(2,4) = 3
Step 4 : j – i = 3 , ( i =0, j= 3, k=1,2,3 )
w(0,3) = p(3) + q(3) + w(0,2)
= 2 + 1 + 15 = 18
c(0,3) = min { c(0,0)+c(1,3), c(0,1)+c(2,3), c(0,2) +c(3,3) } + w(0,3)
= min { 0 + 20, 7 + 7 , 22 + 0 } + 18
= min { 20, 14, 22 } + 18
= 14 + 18 = 32
r(0,3) = 2
( i =1, j= 4, k=2,3,4)
w(1,4) = p(4) + q(4) + w( 1,3)
= 1 + 1 + 13 = 15
c(1,4) = min { c(1,1)+c(2,4), c(1,2)+c(3,4), c(1,3)+c(4,4) } + w(1,4)
= min { 0 + 12, 10 + 3, 20 + 0 } + 15
= min { 12, 13, 20 } + 15
= 12 + 15 = 27
r(1,4) = 2

Step 5 : j – i = 4 (i =0, j= 4, k=1,2,3,4)


w(0,4) = p(4) + q(4) + w(0,3)
= 1 + 1 + 18 = 20
c(0,4) = min { c(0,0)+c(1,4), c(0,1)+c(2,4), c(0,2)+c(3,4), c(0,3)+c(4,4)} + w( 0,4)
=min{ 0 + 27, 7+12, 22+3 , 32+0} + 20
=min{27, 19, 25, 32 }+20
=19+20= 39
r(0,4) = 2
To build the Optimal Binary Search Tree ( OBST ), collect the computed values in a table ordered by j – i :

j – i = 0 : w(0,0) = 4   w(1,1) = 2   w(2,2) = 4   w(3,3) = 1   w(4,4) = 1
            c(0,0) = 0   c(1,1) = 0   c(2,2) = 0   c(3,3) = 0   c(4,4) = 0
            r(0,0) = 0   r(1,1) = 0   r(2,2) = 0   r(3,3) = 0   r(4,4) = 0

j – i = 1 : w(0,1) = 7   w(1,2) = 10  w(2,3) = 7   w(3,4) = 3
            c(0,1) = 7   c(1,2) = 10  c(2,3) = 7   c(3,4) = 3
            r(0,1) = 1   r(1,2) = 2   r(2,3) = 3   r(3,4) = 4

j – i = 2 : w(0,2) = 15  w(1,3) = 13  w(2,4) = 9
            c(0,2) = 22  c(1,3) = 20  c(2,4) = 12
            r(0,2) = 2   r(1,3) = 2   r(2,4) = 3

j – i = 3 : w(0,3) = 18  w(1,4) = 15
            c(0,3) = 32  c(1,4) = 27
            r(0,3) = 2   r(1,4) = 2

j – i = 4 : w(0,4) = 20
            c(0,4) = 39
            r(0,4) = 2
To build OBST, r ( 0,4) = 2, K =2.
Hence a2 becomes root node.
For r(i, j) = k, the left subtree is built from r(i, k-1) and the right subtree from r(k, j):

r(0,4) = 2 → left: r(0,1) = 1, right: r(2,4) = 3
r(0,1) = 1 → left: r(0,0) = 0, right: r(1,1) = 0 (both empty)
r(2,4) = 3 → left: r(2,2) = 0 (empty), right: r(3,4) = 4
r(3,4) = 4 → left: r(3,3) = 0, right: r(4,4) = 0 (both empty)
( a1, a2, a3, a4 ) = ( end, goto, print, stop )

          goto
         /    \
      end      print
                   \
                    stop

Optimal Binary Search Tree with cost = 39


Algorithm OBST(p, q, n)
{
for i = 0 to n-1 do
{
w[i,i] = q[i];
c[i,i] = 0;
r[i,i] = 0;
w[i,i+1] = q[i]+q[i+1]+p[i+1];
r[i,i+1] = i+1;
c[i,i+1] = q[i]+q[i+1]+p[i+1];
}
w[n,n] = q[n]; c[n,n] = 0; r[n,n] = 0;
for m = 2 to n do
for i = 0 to n - m do
{
j=i+m;
w[i,j] = w[i,j-1]+p[j]+q[j];
k=Find(c, r, i, j);
c[i,j] = c[i,k-1]+c[k,j]+w[i, j];
r[i,j] = k;
}
Write(c[0,n], w[0,n], r[0,n]);
}

Algorithm for finding optimal solution

Algorithm Find(c, r, i, j)
{
min = ∞
for m = r[i, j-1] to r[i+1, j] do
if ( c[i, m-1] + c[m, j] < min ) then
{
min = c[i, m-1] + c[m, j];
l=m;
}
return l;
}

Time complexity: With the bounds r[i, j-1] ≤ k ≤ r[i+1, j] used in Find (Knuth's result), computing the tables takes O(n²) overall, and constructing the OBST from r[i, j] takes O(n), so the total time is O(n²). Without this optimization, trying every k in each interval takes O(n³).
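For reference, the same computation can be sketched in runnable Python. This is a straightforward O(n³) version without Knuth's r-bounded Find optimization; the function name and the 1-indexed convention with a dummy p[0] are my own choices. On Example 1 it reproduces c(0,4) = 32 with root r(0,4) = 2, and on Example 2 it gives c(0,4) = 39:

```python
def obst(p, q):
    # p[1..n]: success probabilities (p[0] is a dummy), q[0..n]: failure probabilities
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]            # initial conditions: w(i,i)=q(i), c(i,i)=r(i,i)=0
    for length in range(1, n + 1):        # length = j - i
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # choose the root k in (i, j] minimizing c(i,k-1) + c(k,j)
            best, best_k = None, None
            for k in range(i + 1, j + 1):
                cost = c[i][k - 1] + c[k][j]
                if best is None or cost < best:
                    best, best_k = cost, k
            c[i][j] = best + w[i][j]
            r[i][j] = best_k
    return c, r
```

The returned r table is then read recursively, as in the examples, to build the tree: r(0, n) gives the root ak, its left subtree comes from r(0, k-1) and its right subtree from r(k, n).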
ALL PAIRS SHORTEST PATHS PROBLEM: (Floyd-Warshalls Algorithm)
Let G = ( V, E ) be a directed graph consisting of n vertices, where each edge is associated with a weight. The
problem of finding the shortest path between every pair of vertices in the graph is called the all pairs shortest path
problem.
This problem can be solved using the dynamic programming technique. The goal is to determine a matrix A such
that A( i, j ) is the length of a shortest path from vertex i to vertex j. Assume that this path contains no cycles. If k is
an intermediate vertex on this path, then the sub-paths from i to k and from k to j must themselves be shortest paths
from i to k and from k to j respectively; otherwise the path from i to j would not be shortest. If k is the intermediate
vertex with the highest index, then the sub-path from i to k is a shortest path going through no vertex with index
greater than k - 1, and similarly the sub-path from k to j is a shortest path going through no vertex with index
greater than k - 1.

The shortest paths can be computed using the following recursive method:

A0 ( i, j ) = w ( i, j )
Ak ( i, j ) = min { Ak-1 ( i, j ), Ak-1 ( i, k ) + Ak-1 ( k, j ) }, if k ≥ 1.
Ex : Find the shortest path between all pairs of nodes in the following graph.
[Figure: undirected graph on vertices 1, 2, 3, 4 with edge weights (1,2) = 5, (1,3) = 4, (1,4) = 1, (2,3) = 2, (2,4) = 3, (3,4) = 6]

Sol :
The weight matrix for this undirected graph is as follows.


0 5 4 1
1
5 0 2 3
2 = A0
4 2 0 6
3
1 3 6 0
4

The formula for the all pairs shortest path problem is:
Ak ( i, j ) = min { Ak-1 ( i, j ), Ak-1 ( i, k ) + Ak-1 ( k, j ) }, for k ≥ 1,
where Ak is the matrix after the k-th iteration. The number of iterations equals the number of vertices; our problem
has 4 vertices, so we take 4 iterations, always starting from the given matrix A0. So we have to find A1, A2, A3, A4
for the shortest paths, according to the formula.
K = 1 (using A0):
A1(1,1) = min { A0(1,1), A0(1,1) + A0(1,1) } = min { 0, 0 + 0 } = 0
A1(1,2) = min { A0(1,2), A0(1,1) + A0(1,2) } = min { 5, 0 + 5 } = 5
A1(1,3) = min { A0(1,3), A0(1,1) + A0(1,3) } = min { 4, 0 + 4 } = 4
A1(1,4) = min { A0(1,4), A0(1,1) + A0(1,4) } = min { 1, 0 + 1 } = 1
A1(2,1) = min { A0(2,1), A0(2,1) + A0(1,1) } = min { 5, 5 + 0 } = 5
A1(2,2) = min { A0(2,2), A0(2,1) + A0(1,2) } = min { 0, 5 + 5 } = 0
A1(2,3) = min { A0(2,3), A0(2,1) + A0(1,3) } = min { 2, 5 + 4 } = 2
A1(2,4) = min { A0(2,4), A0(2,1) + A0(1,4) } = min { 3, 5 + 1 } = 3
A1(3,1) = min { A0(3,1), A0(3,1) + A0(1,1) } = min { 4, 4 + 0 } = 4
A1(3,2) = min { A0(3,2), A0(3,1) + A0(1,2) } = min { 2, 4 + 5 } = 2
A1(3,3) = min { A0(3,3), A0(3,1) + A0(1,3) } = min { 0, 4 + 4 } = 0
A1(3,4) = min { A0(3,4), A0(3,1) + A0(1,4) } = min { 6, 4 + 1 } = 5
A1(4,1) = min { A0(4,1), A0(4,1) + A0(1,1) } = min { 1, 1 + 0 } = 1
A1(4,2) = min { A0(4,2), A0(4,1) + A0(1,2) } = min { 3, 1 + 5 } = 3
A1(4,3) = min { A0(4,3), A0(4,1) + A0(1,3) } = min { 6, 1 + 4 } = 5
A1(4,4) = min { A0(4,4), A0(4,1) + A0(1,4) } = min { 0, 1 + 1 } = 0

A1 =
0 5 4 1
5 0 2 3
4 2 0 5
1 3 5 0

K = 2 (using A1):
A2(1,1) = min { A1(1,1), A1(1,2) + A1(2,1) } = min { 0, 5 + 5 } = 0
A2(1,2) = min { A1(1,2), A1(1,2) + A1(2,2) } = min { 5, 5 + 0 } = 5
A2(1,3) = min { A1(1,3), A1(1,2) + A1(2,3) } = min { 4, 5 + 2 } = 4
A2(1,4) = min { A1(1,4), A1(1,2) + A1(2,4) } = min { 1, 5 + 3 } = 1
A2(2,1) = min { A1(2,1), A1(2,2) + A1(2,1) } = min { 5, 0 + 5 } = 5
A2(2,2) = min { A1(2,2), A1(2,2) + A1(2,2) } = min { 0, 0 + 0 } = 0
A2(2,3) = min { A1(2,3), A1(2,2) + A1(2,3) } = min { 2, 0 + 2 } = 2
A2(2,4) = min { A1(2,4), A1(2,2) + A1(2,4) } = min { 3, 0 + 3 } = 3
A2(3,1) = min { A1(3,1), A1(3,2) + A1(2,1) } = min { 4, 2 + 5 } = 4
A2(3,2) = min { A1(3,2), A1(3,2) + A1(2,2) } = min { 2, 2 + 0 } = 2
A2(3,3) = min { A1(3,3), A1(3,2) + A1(2,3) } = min { 0, 2 + 2 } = 0
A2(3,4) = min { A1(3,4), A1(3,2) + A1(2,4) } = min { 5, 2 + 3 } = 5
A2(4,1) = min { A1(4,1), A1(4,2) + A1(2,1) } = min { 1, 3 + 5 } = 1
A2(4,2) = min { A1(4,2), A1(4,2) + A1(2,2) } = min { 3, 3 + 0 } = 3
A2(4,3) = min { A1(4,3), A1(4,2) + A1(2,3) } = min { 5, 3 + 2 } = 5
A2(4,4) = min { A1(4,4), A1(4,2) + A1(2,4) } = min { 0, 3 + 3 } = 0

A2 =
0 5 4 1
5 0 2 3
4 2 0 5
1 3 5 0

K = 3 (using A2):
A3(1,1) = min { A2(1,1), A2(1,3) + A2(3,1) } = min { 0, 4 + 4 } = 0
A3(1,2) = min { A2(1,2), A2(1,3) + A2(3,2) } = min { 5, 4 + 2 } = 5
A3(1,3) = min { A2(1,3), A2(1,3) + A2(3,3) } = min { 4, 4 + 0 } = 4
A3(1,4) = min { A2(1,4), A2(1,3) + A2(3,4) } = min { 1, 4 + 5 } = 1
A3(2,1) = min { A2(2,1), A2(2,3) + A2(3,1) } = min { 5, 2 + 4 } = 5
A3(2,2) = min { A2(2,2), A2(2,3) + A2(3,2) } = min { 0, 2 + 2 } = 0
A3(2,3) = min { A2(2,3), A2(2,3) + A2(3,3) } = min { 2, 2 + 0 } = 2
A3(2,4) = min { A2(2,4), A2(2,3) + A2(3,4) } = min { 3, 2 + 5 } = 3
A3(3,1) = min { A2(3,1), A2(3,3) + A2(3,1) } = min { 4, 0 + 4 } = 4
A3(3,2) = min { A2(3,2), A2(3,3) + A2(3,2) } = min { 2, 0 + 2 } = 2
A3(3,3) = min { A2(3,3), A2(3,3) + A2(3,3) } = min { 0, 0 + 0 } = 0
A3(3,4) = min { A2(3,4), A2(3,3) + A2(3,4) } = min { 5, 0 + 5 } = 5
A3(4,1) = min { A2(4,1), A2(4,3) + A2(3,1) } = min { 1, 5 + 4 } = 1
A3(4,2) = min { A2(4,2), A2(4,3) + A2(3,2) } = min { 3, 5 + 2 } = 3
A3(4,3) = min { A2(4,3), A2(4,3) + A2(3,3) } = min { 5, 5 + 0 } = 5
A3(4,4) = min { A2(4,4), A2(4,3) + A2(3,4) } = min { 0, 5 + 5 } = 0

A3 =
0 5 4 1
5 0 2 3
4 2 0 5
1 3 5 0

K = 4 (using A3):
A4(1,1) = min { A3(1,1), A3(1,4) + A3(4,1) } = min { 0, 1 + 1 } = 0
A4(1,2) = min { A3(1,2), A3(1,4) + A3(4,2) } = min { 5, 1 + 3 } = 4
A4(1,3) = min { A3(1,3), A3(1,4) + A3(4,3) } = min { 4, 1 + 5 } = 4
A4(1,4) = min { A3(1,4), A3(1,4) + A3(4,4) } = min { 1, 1 + 0 } = 1
A4(2,1) = min { A3(2,1), A3(2,4) + A3(4,1) } = min { 5, 3 + 1 } = 4
A4(2,2) = min { A3(2,2), A3(2,4) + A3(4,2) } = min { 0, 3 + 3 } = 0
A4(2,3) = min { A3(2,3), A3(2,4) + A3(4,3) } = min { 2, 3 + 5 } = 2
A4(2,4) = min { A3(2,4), A3(2,4) + A3(4,4) } = min { 3, 3 + 0 } = 3
A4(3,1) = min { A3(3,1), A3(3,4) + A3(4,1) } = min { 4, 5 + 1 } = 4
A4(3,2) = min { A3(3,2), A3(3,4) + A3(4,2) } = min { 2, 5 + 3 } = 2
A4(3,3) = min { A3(3,3), A3(3,4) + A3(4,3) } = min { 0, 5 + 5 } = 0
A4(3,4) = min { A3(3,4), A3(3,4) + A3(4,4) } = min { 5, 5 + 0 } = 5
A4(4,1) = min { A3(4,1), A3(4,4) + A3(4,1) } = min { 1, 0 + 1 } = 1
A4(4,2) = min { A3(4,2), A3(4,4) + A3(4,2) } = min { 3, 0 + 3 } = 3
A4(4,3) = min { A3(4,3), A3(4,4) + A3(4,3) } = min { 5, 0 + 5 } = 5
A4(4,4) = min { A3(4,4), A3(4,4) + A3(4,4) } = min { 0, 0 + 0 } = 0

A4 =
0 4 4 1
4 0 2 3
4 2 0 5
1 3 5 0

Since all four vertices have now been allowed as intermediates, A4 gives the shortest distance between every pair of vertices.
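The four iterations above collapse into a short Python sketch of the Floyd-Warshall algorithm (the in-place update is safe because row k and column k do not change during iteration k; the function name is my own). Running it on the weight matrix A0 reproduces A4:

```python
def floyd_warshall(w):
    # w: n x n weight matrix; returns the matrix of all-pairs shortest distances
    n = len(w)
    A = [row[:] for row in w]      # A starts as A0, the weight matrix
    for k in range(n):             # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # Ak(i,j) = min { Ak-1(i,j), Ak-1(i,k) + Ak-1(k,j) }
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A
```

The three nested loops make the running time O(n³) for an n-vertex graph.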
Travelling Salesperson Problem : There are n cities, and the cost of travelling from one city to another is given. A salesperson has to start from any one city, visit every city exactly once, and return to the starting city with the shortest distance or minimum cost.
The travelling salesperson problem can be computed with the following recursive method:

1 : g( i, Φ ) = Ci1
2 : g( i, S ) = min over j in S { Cij + g( j, S – { j } ) }

Here g( i, S ) means that i is the starting node and the nodes in S are still to be traversed; the minimum is taken over the choice of the intermediate node j. In g( j, S – { j } ), j has already been traversed, so next we have to traverse S – { j } with j as the starting point.
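The recurrence above can be implemented directly with memoization; this is the Held-Karp algorithm, which runs in O(n^2 * 2^n) time. A sketch, assuming cities are numbered 0 to n-1 with city 0 as the start; the sample cost matrix is reconstructed from the computations of Example 1 below.

```python
from functools import lru_cache

def tsp(cost):
    """Minimum tour cost via g(i, S) = min over j in S { cost[i][j] + g(j, S - {j}) }.
    City 0 is the start; S is a bitmask of the cities still to be visited."""
    n = len(cost)

    @lru_cache(maxsize=None)
    def g(i, s):
        if s == 0:                        # base case: g(i, empty) = cost of returning home
            return cost[i][0]
        return min(cost[i][j] + g(j, s & ~(1 << j))
                   for j in range(n) if s & (1 << j))

    return g(0, (1 << n) - 2)             # visit cities 1..n-1: all bits set except bit 0

# Cost matrix reconstructed from the computations in Example 1 below.
c = [[0, 12, 5, 7],
     [11, 0, 13, 6],
     [4, 9, 0, 18],
     [10, 3, 2, 0]]
assert tsp(c) == 24
```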

Example 1 : Construct an optimal travelling salesperson tour using dynamic programming for the cost matrix below (entry Cij is the cost of travelling from city i to city j; the entries are the ones used in the computations that follow):

     | 0  12 5  7  |
C =  | 11 0  13 6  |
     | 4  9  0  18 |
     | 10 3  2  0  |

Solution : The formulas for solving this problem are,

1 : g( i, Φ ) = Ci1
2 : g( i, S ) = min over j in S { Cij + g( j, S – { j } ) }
Step 1 : Consider sets of 0 elements, i.e. S = Φ
g( i, Φ ) = Ci1
g( 1, Φ ) = C11 = 0
g( 2, Φ ) = C21 = 11
g( 3, Φ ) = C31 = 4
g( 4, Φ ) = C41 = 10
Step 2 : Consider sets of 1 element, i.e. S = { 2 }, { 3 }, { 4 }
g( i, S ) = min { Cij + g( j, S – { j } ) }

g( 2, {3} ) = min { c23 + g( 3, {3} – {3} ) }
            = min { c23 + g( 3, Φ ) }
            = min { 13 + 4 } = 17
g( 2, {4} ) = min { c24 + g( 4, {4} – {4} ) }
            = min { c24 + g( 4, Φ ) }
            = min { 6 + 10 } = 16
g( 3, {4} ) = min { c34 + g( 4, {4} – {4}) }
= min { c34 + g( 4, Φ ) }
= min { 18 + 10 } = 28.
g( 3, {2} ) = min { c32 + g( 2, {2} – {2} ) }
            = min { c32 + g( 2, Φ ) }
            = min { 9 + 11 } = 20

g( 4, {2} ) = min { c42 + g( 2, {2} – {2}) }


= min { c42 + g( 2, Φ ) }
= min { 3 + 11 } = 14.
g( 4, {3} ) = min { c43 + g( 3, {3} – {3}) }
= min { c43 + g( 3, Φ ) }
= min { 2 + 4 } = 6.
Step 3 : Consider sets of 2 elements, i.e. S = { 2,3 }, { 2,4 }, { 3,4 }
g( i, S ) = min { Cij + g( j, S – { j } ) }
g( 2, { 3,4 } ) = min { c23 + g( 3, {3,4 } – {3}) , c24 + g( 4, {3,4 } – {4}) }
= min { c23 + g( 3, {4}), c24 + g(4, {3}) }
= min { 13 + 28, 6 + 6 }
= min { 41, 12 }
= 12.

g( 3, { 2,4 } ) = min { c32 + g( 2, {2,4} – {2} ), c34 + g( 4, {2,4} – {4} ) }
                = min { 9 + g( 2, {4} ), 18 + g( 4, {2} ) }
                = min { 9 + 16, 18 + 14 }
                = min { 25, 32 }
                = 25
g( 4, { 2,3 } ) = min { c42 + g( 2, {2,3 } – {2}) , c43 + g( 3, {2,3 } – {3}) }
= min { 3 + g( 2, {3}), 2 + g(3, {2}) }
= min { 3 + 17, 2 + 20 }
= min { 20, 22 }
= 20.

Step 4 : Consider the set of 3 elements, i.e. S = { 2,3,4 }

g( 1, { 2,3,4 } ) = min { c12 + g( 2, {2,3,4} – {2} ), c13 + g( 3, {2,3,4} – {3} ), c14 + g( 4, {2,3,4} – {4} ) }
                  = min { 12 + g( 2, {3,4} ), 5 + g( 3, {2,4} ), 7 + g( 4, {2,3} ) }
                  = min { 12 + 12, 5 + 25, 7 + 20 }
                  = min { 24, 30, 27 }
                  = 24

The optimal solution is traced back as : c12 + g( 2, {3,4} )
                                       = c12 + c24 + g( 4, {3} )
                                       = c12 + c24 + c43 + g( 3, Φ )
                                       = c12 + c24 + c43 + C31
The optimal cost : 1 - 2 - 4 - 3 - 1 = 12 + 6 + 2 + 4 = 24
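For a 4-city instance the result can be cross-checked by brute force over all orderings of the intermediate cities (an O(n!) check, feasible only for small n). The matrix entries below are the ones used in the computations above.

```python
from itertools import permutations

# Cost matrix of Example 1 (city 1 is index 0), taken from the computations above.
c = [[0, 12, 5, 7],
     [11, 0, 13, 6],
     [4, 9, 0, 18],
     [10, 3, 2, 0]]

def tour_cost(order):
    """Cost of the tour 0 -> order[0] -> ... -> order[-1] -> 0."""
    path = [0] + list(order) + [0]
    return sum(c[a][b] for a, b in zip(path, path[1:]))

best = min(permutations([1, 2, 3]), key=tour_cost)
assert tour_cost(best) == 24
assert best == (1, 3, 2)        # i.e. the tour 1-2-4-3-1 in 1-based city labels
```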
Example 2 : Construct an optimal travelling salesperson tour using dynamic programming for the cost matrix below (the entries are the ones used in the computations that follow):

     | 0  10 15 20 |
C =  | 5  0  9  10 |
     | 6  13 0  12 |
     | 8  8  9  0  |

Solution : The formulas for solving this problem are,

1 : g( i, Φ ) = Ci1
2 : g( i, S ) = min over j in S { Cij + g( j, S – { j } ) }
Step 1 : Consider sets of 0 elements, i.e. S = Φ
g( i, Φ ) = Ci1
g( 1, Φ ) = C11 = 0
g( 2, Φ ) = C21 = 5
g( 3, Φ ) = C31 = 6
g( 4, Φ ) = C41 = 8
Step 2 : Consider sets of 1 element, i.e. S = { 2 }, { 3 }, { 4 }
g( i, S ) = min { Cij + g( j, S – { j } ) }
g(2,{3}) = min { c23 + g( 3, { 3 } – { 3 } ) }
= min { c23 + g( 3, Φ ) }
= min { 9 + 6 } = 15
g(2,{4}) = min { c24 + g( 4, { 4 } – { 4 } ) }
= min { c24 + g( 4, Φ ) }
= min { 10 + 8 } = 18
g(3,{2}) = min { c32 + g( 2, { 2 } – { 2 } ) }
= min { c32 + g( 2, Φ ) }
= min { 13 + 5 } = 18
g(3,{4}) = min { c34 + g( 4, { 4} – { 4 } ) }
= min { c34 + g( 4, Φ ) }
= min { 12 + 8 } = 20
g(4,{2}) = min { c42 + g( 2, { 2 } – { 2 } ) }
= min { c42 + g( 2, Φ ) }
= min { 8 + 5 } = 13
g(4,{3}) = min { c43 + g( 3, { 3 } – { 3 } ) }
= min { c43 + g( 3, Φ ) }
= min { 9 + 6 } = 15
Step 3 : Consider sets of 2 elements, i.e. S = { 2,3 }, { 2,4 }, { 3,4 }
g( i, S ) = min { Cij + g( j, S – { j } ) }

g(2,{3,4}) = min { c23 + g( 3,{3,4} – { 3 }), c24 + g( 4,{3,4}- { 4 }) }


= min { c23 + g ( 3, { 4 } ), c24 + g ( 4 , { 3 } ) }
= min { 9 + 20, 10 + 15 }
= min { 29 , 25 }
= 25

g(3,{ 2,4}) = min { c32 + g( 2,{2,4} – { 2 }), c34 + g( 4,{2,4}- { 4 }) }


= min { c32 + g( 2, { 4 } ), c34 + g( 4, { 2 } ) }
= min { 13 + 18, 12 + 13 }
= min { 31,25 }
= 25

g(4,{2,3}) = min { c42 + g( 2,{2,3} – { 2 }), c43 + g( 3,{2,3}- { 3 }) }


=min { c42 + g( 2, { 3 } ), c43 + g( 3, { 2 } ) }
= min { 8 + 15 , 9 + 18 }
= min { 23,27 }
= 23

Step 4 : Consider the set of 3 elements, i.e. S = { 2,3,4 }
g( i, S ) = min { Cij + g( j, S – { j } ) }

g(1, { 2,3,4 } ) = min { c12 + g ( 2, { 2,3,4 } – { 2 } ), c13 + g ( 3, { 2,3,4 } – { 3 } ), c14 + g ( 4, { 2,3,4 } – { 4 } ) }


= min { c12 + g ( 2, { 3,4 } ), c13 + g ( 3, { 2,4 } ), c14 + g ( 4, { 2,3 })}
= min { 10 + 25, 15 + 25, 20 + 23 }
= min { 35, 40, 43 }
= 35
The optimal solution is traced back as : c12 + g( 2, {3,4} )
                                       = c12 + c24 + g( 4, {3} )
                                       = c12 + c24 + c43 + g( 3, Φ )
                                       = c12 + c24 + c43 + C31
The optimal cost : 1 - 2 - 4 - 3 - 1 = 10 + 10 + 9 + 6 = 35


0/1 Knapsack Problem : We are given n objects and a knapsack (bag); object i has weight wi, and the knapsack has capacity W. If object i is placed in the knapsack ( xi = 1 ), a profit pi is earned. The objective is to fill the knapsack so that the total profit Σ pixi is maximized, subject to the constraint Σ wixi <= W, where 1 <= i <= n and each xi = 0 or 1.

The 0/1 knapsack problem can be computed with the following recursive method.

1. Initially S0 = { ( 0,0 ) }, where each pair has the form ( P, W )
2. Merging operation : Si+1 = Si ∪ Si1, where Si1 is obtained by adding ( pi+1, wi+1 ) to every pair of Si
3. Purging rule ( or dominance rule ) : If Si+1 contains two pairs ( Pj, Wj ) and ( Pk, Wk ) such that Pj <= Pk and Wj >= Wk, then ( Pj, Wj ) can be eliminated. In the purging rule the dominated tuples get purged; in short, remove the pair with less profit and more weight.
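The three rules above can be sketched as follows. Pairs are ( profit, weight ); sorting by weight and keeping only strictly increasing profits implements the dominance rule, and pairs heavier than the capacity are dropped as infeasible. Function and variable names are illustrative.

```python
def knapsack_pairs(profits, weights, capacity):
    """0/1 knapsack via the pair sets S^i: merge S^i with S1^i, then purge."""
    s = [(0, 0)]                                   # S^0 = { (0,0) }, pairs are (profit, weight)
    for p, w in zip(profits, weights):
        s1 = [(sp + p, sw + w) for sp, sw in s]    # S1^i: add (p_i, w_i) to every pair of S^i
        merged = sorted(set(s + s1), key=lambda t: (t[1], -t[0]))  # union, ordered by weight
        purged = []
        for pr, wt in merged:                      # purging / dominance rule
            if wt > capacity:
                continue                           # pair exceeds the knapsack capacity
            if not purged or pr > purged[-1][0]:
                purged.append((pr, wt))            # keep only pairs that improve the profit
        s = purged
    return s[-1][0]                                # profits increase along s, so last is max

# Example 1 below: m = 6, (w1,w2,w3) = (2,3,4), (p1,p2,p3) = (1,2,5)
assert knapsack_pairs([1, 2, 5], [2, 3, 4], 6) == 6
```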
Example 1 : Consider the following 0/1 knapsack problem using dynamic programming : m = 6, n = 3, ( w1, w2, w3 ) = ( 2, 3, 4 ), ( p1, p2, p3 ) = ( 1, 2, 5 ).
Solution : We have to build the sequence of decisions S0, S1, S2, S3.

Initially S0 = { ( 0,0 ) }

S01 = Select the next pair ( p1, w1 ) and add it to every pair of S0
    = ( 1,2 ) + { ( 0,0 ) }
    = { ( 1+0, 2+0 ) } = { ( 1,2 ) }
Si+1 = Si ∪ Si1
S1 = S0 ∪ S01
   = { ( 0,0 ) } ∪ { ( 1,2 ) }
   = { ( 0,0 ), ( 1,2 ) }
Applying the purging rule : nothing is deleted.
S11 = Select next (P2,W2) pair and add it with S1
= ( 2,3 ) + { ( 0,0 ), ( 1,2 ) }
= { ( 2 + 0, 3 + 0 ), ( 2 + 1, 3 + 2 ) }
= { ( 2,3 ) , ( 3,5 ) }
S2 = S1 ∪ S11
   = { ( 0,0 ), ( 1,2 ) } ∪ { ( 2,3 ), ( 3,5 ) }
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ) }
Applying the purging rule : nothing is deleted.
S21 = Select next (P3,W3) pair and add it with S2
= ( 5,4 ) + { ( 0,0 ) , (1,2), (2,3),(3,5) }
= { ( 5+0,4+0),(5+1,4+2),(5+2,4+3),(5+3,4+5)}
= { ( 5,4 ),(6,6), (7,7), (8,9) }
S3 = S2 ∪ S21
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ) } ∪ { ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ), ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }

Applying the purging rule to ( 3,5 ) and ( 5,4 ) as ( pj, wj ) and ( pk, wk ) :
pj <= pk and wj >= wk, i.e. 3 <= 5 and 5 >= 4, is true, so the pair ( pj, wj ) = ( 3,5 ) can be deleted.
S3 = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }
Here the capacity of the knapsack is m = 6, so we remove the pairs with wi > m, i.e. wi > 6 :
S3 = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 5,4 ), ( 6,6 ) }
Since m = 6, we find the tuple with weight 6 :
( 6,6 ) belongs to S3.

( 6,6 ) does not belong to S2. Therefore, we must set x3 = 1.
The pair ( 6,6 ) came from the pair ( 6 – p3, 6 – w3 ) = ( 6 – 5, 6 – 4 ) = ( 1,2 ).
Here ( 1,2 ) belongs to S2 and ( 1,2 ) belongs to S1. Therefore, we must set x2 = 0.
( 1,2 ) does not belong to S0. Therefore, we must set x1 = 1.

Hence an Optimal Solution is ( x1 , x2, x3 ) = ( 1, 0 , 1 )
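The traced solution can be cross-checked by enumerating all 2^n assignments of ( x1, x2, x3 ), which is feasible for this small instance:

```python
from itertools import product

# Instance of Example 1: profits, weights, capacity
p, w, m = [1, 2, 5], [2, 3, 4], 6
feasible = [x for x in product([0, 1], repeat=3)
            if sum(wi * xi for wi, xi in zip(w, x)) <= m]     # respect the capacity
best = max(feasible, key=lambda x: sum(pi * xi for pi, xi in zip(p, x)))
assert best == (1, 0, 1)
assert sum(pi * xi for pi, xi in zip(p, best)) == 6
```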

Maximum Profit = p1x1 + p2x2 + p3x3
               = 1*1 + 2*0 + 5*1
               = 1 + 0 + 5 = 6
Maximum Weight = w1x1 + w2x2 + w3x3
               = 2*1 + 3*0 + 4*1
               = 2 + 0 + 4 = 6
Example 2 : Consider the following 0/1 knapsack problem using dynamic programming : m = 8, n = 4, ( w1, w2, w3, w4 ) = ( 2, 3, 4, 5 ), ( p1, p2, p3, p4 ) = ( 1, 2, 5, 6 ) (the values used in the computations below).

Solution : We have to build the sequence of decisions S0, S1, S2, S3, S4.

Initially S0 = { ( 0,0 ) }

S01 = Select the next pair ( p1, w1 ) and add it to every pair of S0
    = ( 1,2 ) + { ( 0,0 ) }
    = { ( 1+0, 2+0 ) } = { ( 1,2 ) }
Si+1 = Si ∪ Si1
S1 = S0 ∪ S01
   = { ( 0,0 ) } ∪ { ( 1,2 ) }
   = { ( 0,0 ), ( 1,2 ) }
Applying the purging rule : nothing is deleted.
S11 = Select next (P2,W2) pair and add it with S1
= ( 2,3 ) + { ( 0,0 ), ( 1,2 ) }
= { ( 2 + 0, 3 + 0 ), ( 2 + 1, 3 + 2 ) }
= { ( 2,3 ) , ( 3,5 ) }
S2 = S1 ∪ S11
   = { ( 0,0 ), ( 1,2 ) } ∪ { ( 2,3 ), ( 3,5 ) }
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ) }
Applying the purging rule : nothing is deleted.
S21 = Select next (P3,W3) pair and add it with S2
= ( 5,4 ) + { ( 0,0 ) , (1,2), (2,3),(3,5) }
= { ( 5+0,4+0),(5+1,4+2),(5+2,4+3),(5+3,4+5)}
= { ( 5,4 ),(6,6), (7,7), (8,9) }
S3 = S2 ∪ S21
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ) } ∪ { ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ), ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }
Applying the purging rule to ( 3,5 ) and ( 5,4 ) as ( pj, wj ) and ( pk, wk ) :
pj <= pk and wj >= wk, i.e. 3 <= 5 and 5 >= 4, is true, so the pair ( pj, wj ) = ( 3,5 ) can be deleted.
S3 = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }

S31 = Select next (P4,W4) pair and add it with S3


= ( 6,5 ) + { (0,0),(1,2),(2,3),(5,4),(6,6),(7,7),(8,9) }
= { ( 6+0,5+0), (6+1,5+2),(6+2,5+3),(6+5,5+4),(6+6,5+6), ( 6+7 , 5+7),(6+8 , 5+9) }
= { ( 6,5 ),( 7,7) , (8,8), (11,9), (12,11), ( 13,12 ), ( 14,14) }
S4 = S3 ∪ S31
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) } ∪ { ( 6,5 ), ( 7,7 ), ( 8,8 ), ( 11,9 ), ( 12,11 ), ( 13,12 ), ( 14,14 ) }
   = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 5,4 ), ( 6,5 ), ( 6,6 ), ( 7,7 ), ( 8,8 ), ( 8,9 ), ( 11,9 ), ( 12,11 ), ( 13,12 ), ( 14,14 ) }
Applying the purging rule : nothing is deleted.

Here Capacity of the Knapsack is 8, Now we have to remove the pairs, in which wi>m, i.e wi>8

Therefore S4 = { (0,0),(1,2),(2,3),(5,4), ( 6,5 ) ,(6,6),(7,7), (8,8) }

Since m = 8, we find the tuple with weight 8 :
( 8,8 ) belongs to S4.
( 8,8 ) does not belong to S3. Therefore, we must set x4 = 1.
The pair ( 8,8 ) came from the pair ( 8 – p4, 8 – w4 ) = ( 8 – 6, 8 – 5 ) = ( 2,3 ).

Here ( 2,3 ) belongs to S3 and ( 2,3 ) belongs to S2.
Therefore, we must set x3 = 0.
( 2,3 ) belongs to S2 but does not belong to S1.
Therefore, we must set x2 = 1.


The pair ( 2,3 ) came from the pair ( 2 – p2, 3 – w2 ) = ( 2 – 2, 3 – 3 ) = ( 0,0 ).
Here ( 0,0 ) belongs to S1 and ( 0,0 ) belongs to S0.
Therefore, we must set x1 = 0.

Hence an Optimal Solution is ( x1 , x2, x3, x4 ) = ( 0, 1 , 0, 1 )

Maximum Profit = p1x1 + p2x2 + p3x3 + p4x4
               = 1*0 + 2*1 + 5*0 + 6*1
               = 0 + 2 + 0 + 6 = 8
Maximum Weight = w1x1 + w2x2 + w3x3 + w4x4
               = 2*0 + 3*1 + 4*0 + 5*1
               = 0 + 3 + 0 + 5 = 8

FLOW SHOP SCHEDULING

Often the processing of a job requires the performance of several distinct tasks. Computer programs run in a multiprogramming environment are input and then executed; following the execution, the job is queued for output, and the output is eventually printed.
In a general flow shop we may have n jobs, each requiring m tasks T1i, T2i, ..., Tmi, 1 <= i <= n, to be performed. Task Tji is to be performed on processor Pj, 1 <= j <= m. The time required to complete task Tji is tji.
A schedule for the n jobs is an assignment of tasks to time intervals on the processors. Task Tji must be assigned to processor Pj. No processor may have more than one task assigned to it in any time interval. Additionally, for any job i the processing of task Tji, j > 1, cannot be started until task Tj-1,i has been completed.
Example : Two jobs have to be scheduled on three processors. The task times are given by the matrix J.
NONPREEMPTIVE SCHEDULE
A nonpreemptive schedule is a schedule in which the processing of a task on any processor is not terminated until the task is
complete.

PREEMPTIVE
A schedule for which this need not be true is called preemptive.

The finish time fi(S) of job i is the time at which all tasks of job i have been completed in schedule S.

In Figure 5.22(a), f1(S) = 10 and f2(S) = 12.

In Figure 5.22(b), f1(S) = 11 and f2(S) = 5.

The finish time F(S) of a schedule S is given by F(S) = max { fi(S) : 1 <= i <= n }.

The mean flow time MFT(S) is defined to be MFT(S) = ( 1/n ) Σ 1<=i<=n fi(S).
An optimal finish time (OFT) schedule for a given set of jobs is a non- preemptive schedule S for which F(S) is minimum
over all nonpreemptive schedules S.
A preemptive optimal finish time (POFT) schedule, optimal mean finish time schedule (OMFT), and preemptive optimal
mean finish (POMFT) schedule are defined in the obvious way.
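With the finish times quoted above for Figure 5.22(a), F(S) and MFT(S) work out as:

```python
# Finish times f_i(S) of the two jobs in the schedule of Figure 5.22(a)
finish = [10, 12]
F = max(finish)                       # finish time F(S) of the schedule
MFT = sum(finish) / len(finish)       # mean flow time MFT(S)
assert F == 12 and MFT == 11.0
```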

Although the general problem of obtaining OFT and POFT schedules for m > 2, and of obtaining OMFT schedules, is computationally difficult, dynamic programming leads to an efficient algorithm to obtain OFT schedules for the case m = 2. In this section we consider this special case.

Consider a schedule for jobs T1, T2, ..., Tk. For this schedule let f1 and f2 be the times at which the processing of jobs T1, T2, ..., Tk is completed on processors P1 and P2 respectively.
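For the m = 2 case the optimal ordering is commonly stated as Johnson's rule: jobs with ai <= bi (where ai and bi are the task times on P1 and P2) run first in nondecreasing order of ai, and the remaining jobs run last in nonincreasing order of bi. A sketch with illustrative task times (the data below is not taken from the text); a brute-force check confirms the rule's schedule is optimal for this instance.

```python
from itertools import permutations

def johnson_order(jobs):
    """Johnson's rule for the two-machine flow shop.
    Each job is a pair (a, b) of task times on P1 and P2."""
    front = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return front + back

def finish_time(order):
    """F(S) for a nonpreemptive two-machine flow shop processed in the given order."""
    t1 = t2 = 0
    for a, b in order:
        t1 += a                        # the job leaves P1 at time t1
        t2 = max(t2, t1) + b           # P2 starts when both the job and P2 are ready
    return t2

jobs = [(3, 6), (5, 2), (1, 2)]        # illustrative task times
order = johnson_order(jobs)
best = min(finish_time([jobs[i] for i in p]) for p in permutations(range(len(jobs))))
assert finish_time(order) == best == 12
```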
