UNIT 3
Module 3
DYNAMIC PROGRAMMING
General method, Matrix-chain multiplication, All pairs shortest path, Optimal binary search trees, 0/1
Knapsack problem, Traveling salesperson problem, Flow shop scheduling.
DYNAMIC PROGRAMMING
Dynamic programming is a method of solving a given problem by taking a sequence of decisions. To obtain the optimal solution of the given problem, we enumerate the possible decision sequences and, using the principle of optimality, select the sequence that gives the optimal solution.
The difference between the greedy method and dynamic programming is that in the greedy method we take only one decision at a time to build the solution, whereas in dynamic programming we consider a sequence of decisions that satisfy the constraints and finally obtain the optimal solution by applying the principle of optimality.
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for future use, so that we do not need to compute a result again. The fact that the overall solution is obtained by optimizing the subproblems is known as the optimal substructure property.
The main use of dynamic programming is to solve optimization problems, that is, problems in which we are trying to find the minimum or the maximum value of a solution. Dynamic programming is guaranteed to find the optimal solution of a problem if a solution exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing their solutions to avoid repetitive computation.
General method
• Dynamic programming is an algorithm design method used when the solution to a problem can be viewed as the result of a sequence of decisions.
• Dynamic programming drastically reduces the amount of enumeration by eliminating those decision sequences which cannot be optimal.
• In dynamic programming an optimal sequence of decisions is found by following the principle of optimality.
Principle of optimality:
An optimal sequence of decisions has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
Consider an example of the Fibonacci series. The following series is the Fibonacci series:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
The numbers in the above series are not randomly calculated. Mathematically, we could write each of the terms
using the below formula:
F(n) = F(n-1) + F(n-2),
with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
How can we calculate F(20)?
F(20) is calculated from the same recurrence, F(20) = F(19) + F(18), and each of these terms is expanded in turn. Drawing this expansion as a recursion tree shows that the same terms are recomputed many times.
Dynamic programming is applicable to problems that have overlapping subproblems and optimal substructure.
Here, optimal substructure means that the solution of the optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.
In the case of dynamic programming, the space complexity is increased because we store the intermediate results, but the time complexity is decreased.
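The plain recursive version referred to below can be sketched in C as follows (a minimal sketch of the recurrence F(n) = F(n-1) + F(n-2); input checking is omitted):

int fib(int n)
{
    if (n == 0)            /* base cases of the recurrence */
        return 0;
    if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);   /* two recursive calls; the same values are recomputed many times */
}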
In the above code, we have used the recursive approach to find out the Fibonacci series.
When the value of 'n' increases, the number of function calls increases, and the amount of computation increases as well. In this case, the time complexity grows exponentially, becoming O(2^n).
One solution to this problem is to use the dynamic programming approach. Rather than generating the recursive
tree again and again, we can reuse the previously calculated value. If we use the dynamic programming
approach, then the time complexity would be O(n).
When we apply the dynamic programming approach in the implementation of the Fibonacci
series, then the code would look like:
static int count = 0;
static int memo[100];      /* memo[i] holds F(i) once computed; assume every entry is initialized to -1 */

int fib(int n)
{
    if (n < 0)
        return -1;         /* error: invalid argument */
    if (memo[n] != -1)     /* result already stored, reuse it */
        return memo[n];
    count++;
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    memo[n] = fib(n - 1) + fib(n - 2);   /* store the result for later reuse */
    return memo[n];
}
In the above code, we have used the memoization technique, in which we store the results in an array so that the values can be reused. This is also known as a top-down approach, in which we start from the top and break the problem into sub-problems.
Bottom-Up approach
The bottom-up approach is another technique that can be used to implement dynamic programming. It uses tabulation to implement the dynamic programming approach: it solves the same kind of problems, but it removes the recursion. With no recursion there is no stack overflow issue and no overhead of recursive function calls. In the tabulation technique, we solve the sub-problems iteratively and store the results in a table.
The bottom-up approach avoids recursion, thus saving memory space. A bottom-up algorithm starts from the beginning, whereas a recursive (top-down) algorithm starts from the end and works backward. In the bottom-up approach, we start from the base cases and build up to the answer for the end. As we know, the base cases in the Fibonacci series are 0 and 1, so the bottom-up computation starts from 0 and 1.
Key points
• We solve all the smaller sub-problems that will be needed to solve the larger sub-problems, and then move on to the larger problems using those results.
• We use a for loop to iterate over the sub-problems.
• The bottom-up approach is also known as the tabulation or table filling method.
int fib(int n)
{
    int A[n + 1];          /* A[i] holds F(i) */
    A[0] = 0; A[1] = 1;
    for (int i = 2; i <= n; i++)
    {
        A[i] = A[i - 1] + A[i - 2];   /* each entry is built from the two smaller ones */
    }
    return A[n];
}
To keep track of the optimal sub-solutions in matrix-chain multiplication, we store the value of k in a table s[i, j]. Recall that k is the place at which we split the product Ai..j to get an optimal parenthesization.
That is, s[i, j] = k such that m[i, j] = m[i, k] + m[k + 1, j] + p(i-1) . p(k) . p(j).
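To make the role of the m and s tables concrete, here is a minimal C sketch of the standard matrix-chain table filling; the function name matrixChainOrder, the bound N and the array layout are illustrative assumptions, with p[0..n] holding the matrix dimensions (Ai is a p[i-1] x p[i] matrix).

#include <limits.h>

#define N 10   /* assumed maximum number of matrices for this sketch */

/* Fills m[i][j] (minimum number of scalar multiplications needed for Ai..Aj) and
   s[i][j] (the split point k that achieves it), for 1 <= i <= j <= n. */
void matrixChainOrder(int p[], int n, int m[N][N], int s[N][N])
{
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;                       /* a single matrix needs no multiplication */

    for (int len = 2; len <= n; len++) {   /* length of the chain Ai..Aj */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {  /* try every place to split the product */
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j]) {
                    m[i][j] = cost;
                    s[i][j] = k;           /* remember where Ai..Aj was split */
                }
            }
        }
    }
}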
OPTIMAL BINARY SEARCH TREE ( OBST ) : We are given a set of identifiers { a1, a2, …, an } with a1 < a2 < … < an. Let p(i) be the probability with which we search for ai, and let q(i) be the probability that the identifier x being searched for satisfies ai < x < ai+1, 0 ≤ i ≤ n. In other words, p(i) is the probability of a successful search and q(i) is the probability of an unsuccessful search.
Clearly ∑ p(i) ( 1 ≤ i ≤ n ) + ∑ q(i) ( 0 ≤ i ≤ n ) = 1. We have to obtain a tree with minimum cost; such a tree with optimum cost is called an optimal binary search tree.
To solve this problem by the dynamic programming method we use the following formulas :
1 : c(i, j) = min over i < k ≤ j of { c(i, k-1) + c(k, j) } + w(i, j)
2 : w(i, j) = p(j) + q(j) + w(i, j-1)
3 : r(i, j) = k, the value of k that gives the minimum in formula 1
Example 1 : Using algorithm OBST, compute w(i,j), r(i,j) and c(i,j), 0 ≤ i ≤ j ≤ 4, for the identifier set ( a1, a2, a3, a4 ) = ( do, while, for, if ) with ( p1, p2, p3, p4 ) = ( 3, 3, 1, 1 ) and ( q0, q1, q2, q3, q4 ) = ( 2, 3, 1, 1, 1 ). Using r(i, j), construct the optimal binary search tree.
Solution :
Successful Probability : ( p1, p2, p3, p4 ) = ( 3, 3, 1, 1 )
UnSuccessful Probability : ( q0, q1, q2, q3, q4 ) = ( 2, 3, 1, 1, 1 )
identifier set : ( a1, a2, a3, a4 ) = ( do, while, for, if )
Initial Conditions :
w(i,i) = q(i)
c(i,i) = 0
r(i,i) = 0
Formulas :
1. w(i, j) = p(j) + q(j) + w(i, j-1)
2. c(i, j) = min over i < k ≤ j of { c(i, k-1) + c(k, j) } + w(i, j)
3. r(i, j) = k
Step1 : j – i = 0
w ( i , j ) = q(i)
w(0,0) = q(0)= 2 c(0,0) = 0 r(0,0) = 0
w(1,1) = q(1)= 3 c(1,1) = 0 r(1,1) = 0
w(2,2) = q(2)= 1 c(2,2) = 0 r(2,2) = 0
w(3,3) = q(3)= 1 c(3,3) = 0 r(3,3) = 0
w(4,4) = q(4)= 1 c(4,4) = 0 r(4,4) = 0
Step 2 : j – i = 1 , ( i =0, j = 1, k = 1 )
w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
w ( 0, 1 ) = p(1) + q(1) + w(0,0)
=3 + 3 + 2 = 8
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c( 0,1) = min { c(0,0)+c(1,1)} + w(0,1)
= min { 0 +0 } + 8
= 0+8 = 8
r(0,1) = 1
( i =1, j = 2, k = 2 )
w ( 1, 2 ) = p(2) + q(2) + w(1,1)
=3+1+3=7
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c(1,2) = min { c(1,1)+c(2,2)} + w(1,2)
= min { 0 + 0 } + 7= 7
r(1,2) = 2
( i =2, j = 3, k = 3 )
w ( 2, 3 ) = p ( 3 ) + q(3) + w(2,2)
=1+1+1=3
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c( 2,3) = min { c(2,2) + c(3,3)} + w(2,3)
= min { 0 + 0 } + 3 = 3
r(2,3) = 3
( i =3, j = 4, k = 4 )
w ( 3, 4 ) = p(4)+q(4)+w(3,3)
=1+1+1=3
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c( 3,4) = min { c(3,3) + c( 4,4) } + w(3,4)
= min { 0 + 0} + 3 = 3
r(3,4) = 4
Step 3 : j – i = 2 , ( i = 0, j = 2, k = 1,2 ), ( i = 1, j = 3, k = 2,3 ), ( i = 2, j = 4, k = 3,4 )
Proceeding in the same way for j – i = 2, 3 and 4, all the computed values are collected in the following table :

j – i = 0 : w(0,0) = 2    w(1,1) = 3    w(2,2) = 1    w(3,3) = 1    w(4,4) = 1
            c(0,0) = 0    c(1,1) = 0    c(2,2) = 0    c(3,3) = 0    c(4,4) = 0
            r(0,0) = 0    r(1,1) = 0    r(2,2) = 0    r(3,3) = 0    r(4,4) = 0

j – i = 1 : w(0,1) = 8    w(1,2) = 7    w(2,3) = 3    w(3,4) = 3
            c(0,1) = 8    c(1,2) = 7    c(2,3) = 3    c(3,4) = 3
            r(0,1) = 1    r(1,2) = 2    r(2,3) = 3    r(3,4) = 4

j – i = 2 : w(0,2) = 12   w(1,3) = 9    w(2,4) = 5
            c(0,2) = 19   c(1,3) = 12   c(2,4) = 8
            r(0,2) = 1    r(1,3) = 2    r(2,4) = 3

j – i = 3 : w(0,3) = 14   w(1,4) = 11
            c(0,3) = 25   c(1,4) = 19
            r(0,3) = 2    r(1,4) = 2

j – i = 4 : w(0,4) = 16
            c(0,4) = 32
            r(0,4) = 2
To build the OBST : r(0,4) = 2, so k = 2 and a2 becomes the root node.
Its left subtree is built from r(0,1) and its right subtree from r(2,4) :
r(0,1) = 1, so a1 is the left child of the root.
r(2,4) = 3, so a3 is the right child of the root.
For a3 : r(2,2) = 0, so its left subtree is empty, and r(3,4) = 4, so a4 is its right child.
Finally r(3,3) = 0 and r(4,4) = 0, so a4 has no children.
With ( a1, a2, a3, a4 ) = ( do, while, for, if ) the optimal binary search tree is :

        while
       /     \
     do       for
                 \
                  if
Example 2 : Using algorithm OBST, compute w(i,j), r(i,j) and c(i,j), 0 ≤ i ≤ j ≤ 4, for the identifier set ( a1, a2, a3, a4 ) = ( end, goto, print, stop ) with p(1) = 1/20, p(2) = 1/5, p(3) = 1/10, p(4) = 1/20 and q(0) = 1/5, q(1) = 1/10, q(2) = 1/5, q(3) = 1/20, q(4) = 1/20. Using r(i, j), construct the optimal binary search tree.
Solution :
Successful Probability ( each probability is scaled by multiplying by 20 ) :
P(1) = 1/20 * 20 = 1
P(2) = 1/5 * 20 = 4
P(3) = 1/10 * 20 = 2
P(4) = 1/20 * 20 = 1
( p1, p2, p3, p4 ) = ( 1, 4, 2, 1 )
UnSuccessful Probability
q(0)=1/5 * 20 = 4
q(1)=1/10 * 20 = 2
q(2)=1/5 * 20 = 4
q(3)=1/20 * 20=1
q(4)=1/20* 20=1
(q0,q1,q2,q3,q4) = ( 4,2,4,1,1 )
(a1,a2,a3,a4)=(end, goto, print, stop)
Initial Conditions :
w(i,i) = q(i)
c(i,i) = 0
r(i,i) = 0
Formulas :
1. w(i, j) = p(j) + q(j) + w(i, j-1)
2. c(i, j) = min over i < k ≤ j of { c(i, k-1) + c(k, j) } + w(i, j)
3. r(i, j) = k
Step1 : j – i = 0
w ( i , j ) = q(i)
w(0,0) = q(0)= 4 c(0,0) = 0 r(0,0) = 0
w(1,1) = q(1)= 2 c(1,1) = 0 r(1,1) = 0
w(2,2) = q(2)= 4 c(2,2) = 0 r(2,2) = 0
w(3,3) = q(3)= 1 c(3,3) = 0 r(3,3) = 0
w(4,4) = q(4)= 1 c(4,4) = 0 r(4,4) = 0
Step 2 : j – i = 1 , ( i =0, j = 1, k = 1 )
w( i , j ) = p(j) + q(j) + w ( i , j - 1 )
w ( 0, 1 ) = p(1) + q(1) + w(0,0)
=1 + 2 + 4 = 7
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c( 0,1) = min { c(0,0)+c(1,1)} + w(0,1)
= min { 0 +0 } + 7
= 0+7 = 7
r(0,1) = 1
( i =1, j = 2, k = 2 )
w ( 1, 2 ) = p(2)+q(2)+w(1,1)
= 4 + 4 + 2 = 10
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c(1,2) = min { c(1,1)+c(2,2)} + w(1,2)
= min { 0 + 0 } + 10= 10
r(1,2) = 2
( i =2, j = 3, k = 3 )
w ( 2, 3 ) = p ( 3 ) + q(3) + w(2,2)
=2+1+4=7
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c( 2,3) = min { c(2,2) + c(3,3)} + w(2,3)
= min { 0 + 0 } + 7 = 7
r(2,3) = 3
( i =3, j = 4, k = 4 )
w ( 3, 4 ) = p(4)+q(4)+w(3,3)
=1+1+1=3
c ( i, j ) = min over i < k ≤ j { c( i, k-1 ) + c( k, j ) } + w ( i, j )
c( 3,4) = min { c(3,3) + c( 4,4) } + w(3,4)
= min { 0 + 0} + 3 = 3
r(3,4) = 4
Step 3 : j – i = 2 , ( i = 0, j = 2, k = 1,2 ), ( i = 1, j = 3, k = 2,3 ), ( i = 2, j = 4, k = 3,4 )
Proceeding in the same way for j – i = 2, 3 and 4, all the computed values are collected in the following table :

j – i = 0 : w(0,0) = 4    w(1,1) = 2    w(2,2) = 4    w(3,3) = 1    w(4,4) = 1
            c(0,0) = 0    c(1,1) = 0    c(2,2) = 0    c(3,3) = 0    c(4,4) = 0
            r(0,0) = 0    r(1,1) = 0    r(2,2) = 0    r(3,3) = 0    r(4,4) = 0

j – i = 1 : w(0,1) = 7    w(1,2) = 10   w(2,3) = 7    w(3,4) = 3
            c(0,1) = 7    c(1,2) = 10   c(2,3) = 7    c(3,4) = 3
            r(0,1) = 1    r(1,2) = 2    r(2,3) = 3    r(3,4) = 4

j – i = 2 : w(0,2) = 15   w(1,3) = 13   w(2,4) = 9
            c(0,2) = 22   c(1,3) = 20   c(2,4) = 12
            r(0,2) = 2    r(1,3) = 2    r(2,4) = 3

j – i = 3 : w(0,3) = 18   w(1,4) = 15
            c(0,3) = 32   c(1,4) = 27
            r(0,3) = 2    r(1,4) = 2

j – i = 4 : w(0,4) = 20
            c(0,4) = 39
            r(0,4) = 2
To build the OBST : r(0,4) = 2, so k = 2 and a2 becomes the root node.
Its left subtree is built from r(0,1) and its right subtree from r(2,4) :
r(0,1) = 1, so a1 is the left child of the root.
r(2,4) = 3, so a3 is the right child of the root.
For a3 : r(2,2) = 0, so its left subtree is empty, and r(3,4) = 4, so a4 is its right child.
Finally r(3,3) = 0 and r(4,4) = 0, so a4 has no children.
With ( a1, a2, a3, a4 ) = ( end, goto, print, stop ) the optimal binary search tree is :

        goto
       /    \
    end      print
                 \
                  stop
Algorithm Find(c, r, i, j)
// Returns the index of the best root for the subtree over (i, j), searching only the range r[i, j-1] to r[i+1, j].
{
    min = ∞;
    for m = r[i, j-1] to r[i+1, j] do
        if ( c[i, m-1] + c[m, j] < min ) then
        {
            min = c[i, m-1] + c[m, j];
            l = m;
        }
    return l;
}
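Putting the three formulas and the table-filling order (increasing j - i) together, the OBST computation can be sketched in C as follows; the function name obst, the bound MAXN and the use of integer (scaled) probabilities are assumptions for illustration, and all i < k ≤ j are tried here rather than the reduced range used by Algorithm Find.

#define MAXN 20   /* assumed maximum n for this sketch */

/* p[1..n] : successful-search probabilities, q[0..n] : unsuccessful-search probabilities
   (scaled to integers as in the examples above). Fills w, c and r using
   w(i,j) = p(j) + q(j) + w(i, j-1) and
   c(i,j) = min over i < k <= j of { c(i,k-1) + c(k,j) } + w(i,j), with r(i,j) = the best k. */
void obst(int p[], int q[], int n,
          int w[MAXN][MAXN], int c[MAXN][MAXN], int r[MAXN][MAXN])
{
    for (int i = 0; i <= n; i++) {          /* j - i = 0 : initial values */
        w[i][i] = q[i];
        c[i][i] = 0;
        r[i][i] = 0;
    }
    for (int d = 1; d <= n; d++) {          /* d = j - i */
        for (int i = 0; i + d <= n; i++) {
            int j = i + d;
            w[i][j] = p[j] + q[j] + w[i][j - 1];
            c[i][j] = -1;
            for (int k = i + 1; k <= j; k++) {   /* try every candidate root ak */
                int cost = c[i][k - 1] + c[k][j];
                if (c[i][j] < 0 || cost < c[i][j]) {
                    c[i][j] = cost;
                    r[i][j] = k;
                }
            }
            c[i][j] += w[i][j];             /* w(i,j) is added once, outside the minimisation */
        }
    }
}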
ALL PAIRS SHORTEST PATH :
Example : Find the shortest paths between all pairs of vertices of the following weighted graph.
[ Figure : an undirected graph on vertices 1, 2, 3, 4 with edge costs (1,2) = 5, (1,3) = 4, (1,4) = 1, (2,3) = 2, (2,4) = 3, (3,4) = 6 ]
Sol :
The cost adjacency matrix of the graph is
        0 5 4 1
A0 =    5 0 2 3
        4 2 0 6
        1 3 6 0
The formula for the all pairs shortest path problem is as follows :
Ak(i, j) = min { Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) }, for k ≥ 1,
where A0(i, j) is the given cost matrix.
Ak is the matrix obtained after the k-th iteration, and the number of iterations equals the number of vertices. In our problem we have 4 vertices, so we take 4 iterations, always starting from A0, the given matrix, and compute A1, A2, A3 and A4 according to the formula; A4 gives the shortest-path lengths.
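The recurrence can be evaluated in place, overwriting a single matrix that starts as A0; the following minimal C sketch assumes a vertex count V, 0-based indices and the function name allPairs.

#define V 4   /* number of vertices in this sketch */

/* All-pairs shortest paths. A initially holds the cost matrix A0; after iteration k it holds
   Ak(i, j) = min { Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) }, and after the last iteration it
   contains the lengths of the shortest paths between every pair of vertices. */
void allPairs(int A[V][V])
{
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (A[i][k] + A[k][j] < A[i][j])
                    A[i][j] = A[i][k] + A[k][j];   /* going through vertex k is shorter */
}

The same iterations are now worked out by hand for the 4-vertex graph above.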
K= 1
A1 ( 1, 1 ) = min { A1 – 1 ( 1, 1 ), A1– 1 ( 1, 1 ) + A1 – 1 ( 1, 1 ) }
= min { 0, 0 + 0 } = 0
A1 ( 1, 2 ) = min { A1 – 1 ( 1, 2 ), A1 – 1 ( 1, 1 ) + A1 – 1 ( 1, 2 ) }
= min { 5, 0 + 5 } = 5
A1 ( 1, 3 ) = min { A1 – 1 ( 1, 3 ), A1 – 1 ( 1, 1 ) + A1 – 1 ( 1, 3 ) }
= min { 4, 0 + 4 } = 4
A1 ( 1, 4 ) = min { A1 – 1 ( 1, 4 ), A1 – 1 ( 1, 1 ) + A1 – 1 ( 1, 4 ) }
= min { 1, 0 + 1 } = 1
A1 ( 2, 1 ) = min { A1 – 1 ( 2, 1 ), A1 – 1 ( 2, 1 ) + A1 – 1 ( 1, 1 ) }
= min { 5, 5 + 0 } = 5
A1 ( 2, 2 ) = min { A1 – 1 ( 2, 2 ), A1– 1 (2, 1 ) + A1 – 1 ( 1, 2 ) }
= min { 0,5 + 5 } = 0
A1 ( 2, 3 ) = min { A1 – 1 ( 2, 3 ), A1– 1 ( 2, 1 ) + A1 – 1 ( 1, 3 ) }
= min { 2, 5 + 4 } = 2
A1 ( 2, 4 ) = min { A1 – 1 ( 2, 4 ), A1– 1 ( 2, 1 ) + A1 – 1 ( 1, 4 ) }
= min { 3, 5 + 1 } = 3
A1 ( 3, 1 ) = min { A1 – 1 ( 3, 1 ), A1– 1 ( 3, 1 ) + A1 – 1 ( 1, 1 ) }
= min { 4, 4 + 0 } = 4
A1 ( 3, 2 ) = min { A1 – 1 ( 3, 2 ), A1– 1 (3, 1 ) + A1 – 1 ( 1, 2 ) }
= min { 2,4 + 5 } = 2
A1 ( 3, 3 ) = min { A1 – 1 ( 3, 3 ), A1 – 1 ( 3, 1 ) + A1 – 1 ( 1, 3 ) }
= min { 0, 4 + 4 } = 0
A1 ( 3, 4 ) = min { A1 – 1 ( 3, 4 ), A1 – 1 ( 3, 1 ) + A1 – 1 ( 1, 4 ) }
= min { 6, 4 + 1 } = 5
A1 ( 4, 1 ) = min { A1 – 1 ( 4, 1 ), A1 – 1 ( 4, 1 ) + A1 – 1 ( 1, 1 ) }
= min { 1, 1 + 0 } = 1
A1 ( 4, 2 ) = min { A1 – 1 ( 4, 2 ), A1 – 1 ( 4, 1 ) + A1 – 1 ( 1, 2 ) }
= min { 3, 1 + 5 } = 3
A1 ( 4, 3 ) = min { A1 – 1 ( 4, 3 ), A1 – 1 ( 4, 1 ) + A1 – 1 ( 1, 3 ) }
= min { 6, 1 + 4 } = 5
A1 ( 4, 4 ) = min { A1 – 1 ( 4, 4 ), A1– 1 ( 4, 1 ) + A1 – 1 ( 1, 4 ) }
= min { 0, 1 + 1 } = 0
0 5 4 1
5 0 2 3
A1 = 4 2 0 5
1 3 5 0
K= 2
A2 ( 1, 1 ) = min { A2 – 1 ( 1, 1 ), A2– 1 ( 1, 2 ) + A2 – 1 ( 2, 1 ) }
= min { 0, 5 + 5 } = 0
A2 ( 1, 2 ) = min { A2 – 1 ( 1, 2 ), A2– 1 ( 1, 2 ) + A2 – 1 ( 2, 2 ) }
= min { 5, 5 + 0 } = 5
A2 ( 1, 3 ) = min { A2 – 1 ( 1, 3 ), A2– 1 ( 1, 2 ) + A2 – 1 ( 2, 3 ) }
= min { 4, 5 + 2 } = 4
A2 ( 1, 4 ) = min { A2 – 1 ( 1, 4 ), A2 – 1 ( 1, 2 ) + A2 – 1 ( 2, 4 ) }
= min { 1, 5 + 3 } = 1
A2 ( 2, 1 ) = min { A2 – 1 ( 2, 1 ), A2– 1 ( 2, 2 ) + A2 – 1 ( 2, 1 ) }
= min { 5, 0 + 5 } = 5
A2 ( 2, 2 ) = min { A2 – 1 ( 2, 2 ), A2– 1 (2, 2 ) + A2 – 1 ( 2, 2 ) }
= min { 0,0 + 0 } = 0
A2 ( 2, 3 ) = min { A2 – 1 ( 2, 3 ), A2– 1 ( 2, 2 ) + A2 – 1 ( 2, 3 ) }
= min { 2, 0 + 2 } = 2
A2 ( 2, 4 ) = min { A2 – 1 ( 2, 4 ), A2 – 1 ( 2, 2 ) + A2 – 1 ( 2, 4 ) }
= min { 3, 0 + 3 } = 3
A2 ( 3, 1 ) = min { A2 – 1 ( 3, 1 ), A2 – 1 ( 3, 2 ) + A2 – 1 ( 2, 1 ) }
= min { 4, 2 + 5 } = 4
A2 ( 3, 2 ) = min { A2 – 1 ( 3, 2 ), A2 – 1 ( 3, 2 ) + A2 – 1 ( 2, 2 ) }
= min { 2, 2 + 0 } = 2
A2 ( 3, 3 ) = min { A2 – 1 ( 3, 3 ), A2 – 1 ( 3, 2 ) + A2 – 1 ( 2, 3 ) }
= min { 0, 2 + 2 } = 0
A2 ( 3, 4 ) = min { A2 – 1 ( 3, 4 ), A2– 1 ( 3, 2 ) + A2 – 1 ( 2, 4 ) }
= min { 5, 2 + 3 } = 5
A2 ( 4, 1 ) = min { A2 – 1 ( 4, 1 ), A2– 1 ( 4, 2 ) + A2 – 1 ( 2, 1 ) }
= min { 1, 3 + 5 } = 1
A2 ( 4, 2 ) = min { A2 – 1 ( 4, 2 ), A2– 1 (4, 2 ) + A2 – 1 ( 2, 2 ) }
= min { 3,3 + 0 } = 3
A2 ( 4, 3 ) = min { A2 – 1 ( 4, 3 ), A2– 1 ( 4, 2 ) + A2 – 1 ( 2, 3 ) }
= min { 5, 3 + 2 } = 5
A2 ( 4, 4 ) = min { A2 – 1 ( 4, 4 ), A2– 1 ( 4, 2 ) + A2 – 1 ( 2, 4 ) }
= min { 0, 3 + 3 } = 0
0 5 4 1
5 0 2 3
A2 = 4 2 0 5
1 3 5 0
K= 3
A3 ( 1, 1 ) = min { A3 – 1 ( 1, 1 ), A3– 1 ( 1, 3 ) + A3 – 1 ( 3, 1 ) }
= min { 0, 4 + 4 } = 0
A3 ( 1, 2 ) = min { A3 – 1 ( 1, 2 ), A3 – 1 ( 1, 3 ) + A3 – 1 ( 3, 2 ) }
= min { 5, 4 + 2 } = 5
A3 ( 1, 3 ) = min { A3 – 1 ( 1, 3 ), A3 – 1 ( 1, 3 ) + A3 – 1 ( 3, 3 ) }
= min { 4, 4 + 0 } = 4
A3 ( 1, 4 ) = min { A3 – 1 ( 1, 4 ), A3 – 1 ( 1, 3 ) + A3 – 1 ( 3, 4 ) }
= min { 1, 4 + 5 } = 1
A3 ( 2, 1 ) = min { A3 – 1 ( 2, 1 ), A3 – 1 ( 2, 3 ) + A3 – 1 ( 3, 1 ) }
= min { 5, 2 + 4 } = 5
A3 ( 2, 2 ) = min { A3 – 1 ( 2, 2 ), A3 – 1 ( 2, 3 ) + A3 – 1 ( 3, 2 ) }
= min { 0, 2 + 2 } = 0
A3 ( 2, 3 ) = min { A3 – 1 ( 2, 3 ), A3 – 1 ( 2, 3 ) + A3 – 1 ( 3, 3 ) }
= min { 2, 2 + 0 } = 2
A3 ( 2, 4 ) = min { A3 – 1 ( 2, 4 ), A3– 1 ( 2, 3 ) + A3 – 1 ( 3, 4 ) }
= min { 3, 2 + 5} = 3
A3 ( 3, 1 ) = min { A3 – 1 ( 3, 1 ), A3– 1 ( 3, 3 ) + A3 – 1 ( 3, 1 ) }
= min { 4, 0 + 4 } = 4
A3 ( 3, 2 ) = min { A3 – 1 ( 3, 2 ), A3– 1 (3, 3 ) + A3 – 1 ( 3, 2 ) }
= min { 2,0 + 2 } = 2
A3 ( 3, 3 ) = min { A3 – 1 ( 3, 3 ), A3– 1 ( 3, 3 ) + A3 – 1 ( 3, 3 ) }
= min { 0, 0 + 0 } = 0
A3 ( 3, 4 ) = min { A3 – 1 ( 3, 4 ), A3– 1 ( 3, 3 ) + A3 – 1 ( 3, 4 ) }
= min { 5, 0 + 5} = 5
A3 ( 4, 1 ) = min { A3 – 1 ( 4, 1 ), A3– 1 ( 4, 3 ) + A3 – 1 ( 3, 1 ) }
= min { 1, 5 + 4 } = 1
A3 ( 4, 2 ) = min { A3 – 1 ( 4, 2 ), A3– 1 (4, 3 ) + A3 – 1 ( 3, 2 ) }
= min { 3,5 + 2 } = 3
A3 ( 4, 3 ) = min { A3 – 1 ( 4, 3 ), A3 – 1 ( 4, 3 ) + A3 – 1 ( 3, 3 ) }
= min { 5, 5 + 0 } = 5
A3 ( 4, 4 ) = min { A3 – 1 ( 4, 4 ), A3 – 1 ( 4, 3 ) + A3 – 1 ( 3, 4 ) }
= min { 0, 5 + 5 } = 0
0 5 4 1
5 0 2 3
A3 = 4 2 0 5
1 3 5 0
K= 4
A4 ( 1, 1 ) = min { A4 – 1 ( 1, 1 ), A4– 1 ( 1, 4 ) + A4 – 1 ( 4, 1 ) }
= min { 0, 1 + 1 } = 0
A4 ( 1, 2 ) = min { A4 – 1 ( 1, 2 ), A4 – 1 ( 1, 4 ) + A4 – 1 ( 4, 2 ) }
= min { 5, 1 + 3 } = 4
A4 ( 1, 3 ) = min { A4 – 1 ( 1, 3 ), A4 – 1 ( 1, 4 ) + A4 – 1 ( 4, 3 ) }
= min { 4, 1 + 5 } = 4
A4 ( 1, 4 ) = min { A4 – 1 ( 1, 4 ), A4 – 1 ( 1, 4 ) + A4 – 1 ( 4, 4 ) }
= min { 1, 1 + 0 } = 1
A4 ( 2, 1 ) = min { A4 – 1 ( 2, 1 ), A4– 1 ( 2, 4 ) + A4 – 1 ( 4, 1 ) }
= min { 5, 3 + 1 } = 4
A4 ( 2, 2 ) = min { A4 – 1 ( 2, 2 ), A4– 1 ( 2, 4 ) + A4 – 1 ( 4, 2 ) }
= min { 0, 3 + 3 } = 0
A4 ( 2, 3 ) = min { A4 – 1 ( 2, 3 ), A4– 1 ( 2, 4 ) + A4 – 1 ( 4, 3 ) }
= min { 2, 3 + 5 } = 2
A4 ( 2, 4 ) = min { A4 – 1 ( 2, 4 ), A4 – 1 ( 2, 4 ) + A4 – 1 ( 4, 4 ) }
= min { 3, 3 + 0 } = 3
A4 ( 3, 1 ) = min { A4 – 1 ( 3, 1 ), A4– 1 ( 3, 4 ) + A4 – 1 ( 4, 1 ) }
= min { 4, 5 + 1 } = 4
A4 ( 3, 2 ) = min { A4 – 1 ( 3, 2 ), A4– 1 ( 3, 4 ) + A4 – 1 ( 4, 2 ) }
= min { 2, 5 + 3 } = 2
A4 ( 3, 3 ) = min { A4 – 1 ( 3, 3 ), A4 – 1 ( 3, 4 ) + A4 – 1 ( 4, 3 ) }
= min { 0, 5 + 5 } = 0
A4 ( 3, 4 ) = min { A4 – 1 ( 3, 4 ), A4 – 1 ( 3, 4 ) + A4 – 1 ( 4, 4 ) }
= min { 5, 5 + 0 } = 5
A4 ( 4, 1 ) = min { A4 – 1 ( 4, 1 ), A4 – 1 ( 4, 4 ) + A4 – 1 ( 4, 1 ) }
= min { 1, 0 + 1 } = 1
A4 ( 4, 2 ) = min { A4 – 1 ( 4, 2 ), A4 – 1 ( 4, 4 ) + A4 – 1 ( 4, 2 ) }
= min { 3, 0 + 3 } = 3
A4 ( 4, 3 ) = min { A4 – 1 ( 4, 3 ), A4– 1 ( 4, 4 ) + A4 – 1 ( 4, 3 ) }
= min { 5, 0 + 5 } = 5
A4 ( 4, 4 ) = min { A4 – 1 ( 4, 4 ), A4 – 1 ( 4, 4 ) + A4 – 1 ( 4, 4 ) }
= min { 0, 0 + 0 } = 0
0 4 4 1
4 0 2 3
A4 =
4 2 0 5
1 3 5 0
Travelling Salesperson Problem : There are n cities and the cost of travelling from one city to another is given. A salesperson has to start from any one of the cities, visit all the cities exactly once and return to the starting place with the shortest distance (minimum cost).
The travelling salesperson problem can be computed by the following recursive method :
1 : g(i, Φ) = Ci1
2 : g(i, S) = min over j in S of { Cij + g(j, S – { j }) }
Here g(i, S) is the length of a shortest path that starts at node i, goes through all the nodes in S and ends at node 1; the minimum is taken over the choice of the intermediate node j. The term g(j, S – { j }) means that j has already been traversed, so next we have to traverse S – { j } with j as the starting point.
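The recurrence for g(i, S) can also be evaluated iteratively by representing the set S as a bitmask; the following minimal C sketch (the city count N, the value INF, the cost matrix C and the function name tsp are assumptions) returns the length of an optimal tour that starts and ends at city 1, which is city 0 in the 0-based code.

#define N 4              /* number of cities in this sketch; city 1 of the text is city 0 here */
#define INF 1000000

int C[N][N];             /* C[i][j] : cost of travelling from city i to city j (fill in before calling tsp) */
int g[1 << N][N];        /* g[S][j] : cheapest path that starts at city 0, visits exactly the
                            cities in bitmask S (which always contains 0 and j) and ends at j */

int tsp(void)
{
    for (int S = 0; S < (1 << N); S++)
        for (int j = 0; j < N; j++)
            g[S][j] = INF;
    g[1][0] = 0;                                /* only city 0 visited so far */

    for (int S = 1; S < (1 << N); S++) {
        if (!(S & 1)) continue;                 /* every partial tour contains city 0 */
        for (int j = 0; j < N; j++) {
            if (!(S & (1 << j)) || g[S][j] >= INF) continue;
            for (int k = 1; k < N; k++) {       /* extend the partial tour to a new city k */
                if (S & (1 << k)) continue;
                if (g[S][j] + C[j][k] < g[S | (1 << k)][k])
                    g[S | (1 << k)][k] = g[S][j] + C[j][k];
            }
        }
    }

    int best = INF;
    for (int j = 1; j < N; j++)                 /* close the tour by returning to city 0 */
        if (g[(1 << N) - 1][j] + C[j][0] < best)
            best = g[(1 << N) - 1][j] + C[j][0];
    return best;
}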
Example 1 : Construct an optimal travelling salesperson tour using dynamic programming.
g(i, S) = min { Ci j + g( j, S – { j }) }
g( 2, { 3,4 } ) = min { c23 + g( 3, {3,4 } – {3}) , c24 + g( 4, {3,4 } – {4}) }
= min { c23 + g( 3, {4}), c24 + g(4, {3}) }
= min { 13 + 28, 6 + 6 }
= min { 41, 12 }
= 12.
0/1 KNAPSACK PROBLEM :
Solution : We have to build the sequence of decision sets S0, S1, S2, S3, S4, where Si is the set of (profit, weight) pairs obtainable from the first i objects.
Initially S0 = { ( 0,0 ) }
S01 = add the first pair ( p1, w1 ) = ( 1, 2 ) to every pair of S0 = { ( 1,2 ) }
S1 = S0 ∪ S01
   = { ( 0,0 ) } ∪ { ( 1,2 ) }
   = { ( 0,0 ), ( 1,2 ) }
Applying the purging rule ( a pair is deleted if another pair gives at least as much profit with no greater weight ) : nothing is deleted.
S11 = select the next pair ( p2, w2 ) = ( 2, 3 ) and add it to every pair of S1
    = ( 2,3 ) + { ( 0,0 ), ( 1,2 ) }
    = { ( 2 + 0, 3 + 0 ), ( 2 + 1, 3 + 2 ) }
    = { ( 2,3 ), ( 3,5 ) }
S2 = S1 ∪ S11 = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 3,5 ) }
S21 = add the next pair ( p3, w3 ) = ( 5, 4 ) to every pair of S2 = { ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }
S3 = S2 ∪ S21 ; the pair ( 3,5 ) is purged because ( 5,4 ) gives more profit with less weight, so
S3 = { ( 0,0 ), ( 1,2 ), ( 2,3 ), ( 5,4 ), ( 6,6 ), ( 7,7 ), ( 8,9 ) }
Here the capacity of the knapsack is m = 8, so we now remove the pairs in which wi > m, i.e. wi > 8 : the pair ( 8,9 ) is removed.
We must set x4 = 1.
( 2,3 ) belongs to S2, therefore we must set x3 = 0.
( 2,3 ) does not belong to S1, therefore we must set x2 = 1.
The remaining pair ( 2,3 ) – ( p2, w2 ) = ( 0,0 ) belongs to S0, so x1 = 0.
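The optimum found by the Si sets can also be computed with the usual profit table f(i, c); the following minimal C sketch (the function name knapsack, 1-based p and w arrays and the C99 variable-length array are assumptions) is an equivalent tabular formulation rather than the pair-merging method shown above.

/* 0/1 knapsack by the tabular recurrence
   f(i, c) = max { f(i-1, c), f(i-1, c - w[i]) + p[i] },
   which yields the same optimum as the Si pair sets.
   p[1..n] are the profits, w[1..n] the weights, m the capacity. */
int knapsack(int p[], int w[], int n, int m)
{
    int f[n + 1][m + 1];                   /* C99 variable-length array */

    for (int c = 0; c <= m; c++)
        f[0][c] = 0;                       /* no objects considered => profit 0 */

    for (int i = 1; i <= n; i++) {
        for (int c = 0; c <= m; c++) {
            f[i][c] = f[i - 1][c];         /* decision xi = 0 */
            if (w[i] <= c && f[i - 1][c - w[i]] + p[i] > f[i][c])
                f[i][c] = f[i - 1][c - w[i]] + p[i];   /* decision xi = 1 */
        }
    }
    return f[n][m];                        /* maximum profit within capacity m */
}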
FLOW SHOP SCHEDULING :
Often the processing of a job requires the performance of several distinct tasks.
Computer programs run in a multiprogramming environment are input and then executed.
Following the execution, the job is queued for output and the output eventually printed.
In a general flow shop we may have n jobs, each requiring m tasks T1i, T2i, ..., Tmi, 1 ≤ i ≤ n, to be performed.
Task Tji is to be performed on processor Pj, 1 ≤ j ≤m.
The time required to complete task Tji is tji.
A schedule for the n jobs is an assignment of tasks to time intervals on the processors.
Task Tj must be assigned to processor Pj. No processor may have more than one task assigned to it in any time interval.
Additionally, for any job i the processing of task Tji, j > 1, cannot be started until task Tj-1,i has been completed.
Two jobs have to be scheduled on three processors. The task times are given by the matrix J
NONPREEMPTIVE SCHEDULE
A nonpreemptive schedule is a schedule in which the processing of a task on any processor is not terminated until the task is
complete.
PREEMPTIVE
A schedule for which this need not be true is called preemptive.
The finish time fi(S) of job i is the time at which all tasks of job i have been completed in schedule S.
An optimal finish time (OFT) schedule for a given set of jobs is a nonpreemptive schedule S for which F(S), the maximum of the finish times fi(S), is minimum over all nonpreemptive schedules S.
A preemptive optimal finish time (POFT) schedule, an optimal mean finish time (OMFT) schedule, and a preemptive optimal mean finish time (POMFT) schedule are defined in the obvious way.
Although the general problem of obtaining OFT and POFT schedules for m > 2, and of obtaining OMFT schedules, is computationally difficult, dynamic programming leads to an efficient algorithm to obtain OFT schedules for the case m = 2. In this section we consider this special case.
Consider a schedule for jobs T1, T2, ..., Tk. For this schedule let f1 and f2 be the times at which the processing of jobs T1, T2, ..., Tk is completed on processors P1 and P2 respectively.
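For the case m = 2 the dynamic programming analysis leads to a simple ordering rule (commonly known as Johnson's rule): job i should precede job j whenever min(t1i, t2j) ≤ min(t1j, t2i). The following minimal C sketch schedules jobs by this rule and computes the finish time; the arrays a and b and their sample task times are illustrative assumptions, not data from the text.

#include <stdio.h>

#define NJ 4   /* number of jobs in this illustrative instance */

/* Task times: a[i] on processor P1, b[i] on processor P2 (illustrative values). */
static int a[NJ] = { 3, 5, 1, 7 };
static int b[NJ] = { 6, 2, 2, 4 };

/* Job i should precede job j when min(a[i], b[j]) <= min(a[j], b[i]). */
static int before(int i, int j)
{
    int x = a[i] < b[j] ? a[i] : b[j];
    int y = a[j] < b[i] ? a[j] : b[i];
    return x <= y;
}

int main(void)
{
    int order[NJ];
    for (int i = 0; i < NJ; i++)
        order[i] = i;

    /* Insertion sort of the jobs by the ordering rule. */
    for (int i = 1; i < NJ; i++)
        for (int j = i; j > 0 && before(order[j], order[j - 1]); j--) {
            int t = order[j]; order[j] = order[j - 1]; order[j - 1] = t;
        }

    /* Finish time of the resulting nonpreemptive schedule on the two processors. */
    int f1 = 0, f2 = 0;
    for (int i = 0; i < NJ; i++) {
        int job = order[i];
        f1 += a[job];                        /* P1 processes the jobs back to back */
        f2 = (f1 > f2 ? f1 : f2) + b[job];   /* P2 waits for P1 and for its own previous job */
        printf("job %d  ", job + 1);
    }
    printf("\nfinish time F(S) = %d\n", f2);
    return 0;
}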