UNIT NO: 3
Greedy Techniques and Dynamic Programming
Greedy strategy
▪ Among all the algorithmic approaches, the simplest and most straightforward one is the Greedy method.
▪In other words, a greedy algorithm chooses the best possible option at each step,
without considering the consequences of that choice on future steps.
▪The greedy algorithm is useful when a problem can be divided into smaller sub
problems, and the solution of each sub problem can be combined to solve the
overall problem.
▪A classic example of a problem that can be solved using a greedy algorithm is the
“coin change” problem.
▪The problem is to make change for a given amount of money using the least
possible number of coins.
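As a quick illustration (this code is not from the original slides), here is a minimal Python sketch of the greedy coin-change strategy, assuming a hypothetical canonical coin system {1, 2, 5, 10}; at each step it takes the largest coin that does not exceed the remaining amount. For non-canonical coin systems this greedy choice is not guaranteed to use the fewest coins.

def greedy_coin_change(amount, denominations):
    # Greedy choice: repeatedly take the largest coin that still fits.
    coins_used = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins_used.append(coin)
    return coins_used

# Hypothetical denominations; change for 28 -> [10, 10, 5, 2, 1] (5 coins)
print(greedy_coin_change(28, [1, 2, 5, 10]))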
▪ A problem that can be solved with a greedy approach satisfies the two properties mentioned below:
1) Greedy Choice Property: Choosing the best option at each phase can lead to a global (overall) optimal solution.
2) Optimal Substructure Property: An optimal solution to the problem contains optimal solutions to its subproblems.
▪ The problem is broken down into smaller subproblems that are solved independently, but many of these subproblems are identical or overlapping.
▪The fundamental idea behind all families of knapsack problems is the selection of some items, each with a profit and a weight value, to be packed into one or more knapsacks of limited capacity.
▪The fractional knapsack problem is one of the variants of the knapsack problem.
▪In the fractional knapsack, items can be broken into fractions in order to maximize the profit.
Fractional Knapsack Algorithm
▪The weights (Wi) and profit values (Pi) of the items are taken as input to the fractional knapsack algorithm; the output is the subset of items (possibly with one fractional item) added to the knapsack without exceeding its capacity and giving the maximum profit.
Algorithm
▪Consider all the items with their respective weights and profits.
▪Calculate Pi/Wi for all the items and sort the items in descending order based on their Pi/Wi values.
▪Without exceeding the capacity, add the items into the knapsack.
▪If the knapsack can still hold some weight, but the weight of the next item exceeds the remaining capacity, add the fractional part of that item.
Example
For the given set of items and a knapsack capacity of 10 kg, find the subset of the items to be added in the knapsack such that the profit is maximum.

Items          1     2     3     4     5
Weights (kg)   3     3     2     5     1
Profits       10    15    10    20     8
Solution
Step 1
Given, n = 5
Wi = {3, 3, 2, 5, 1}
Pi = {10, 15, 10, 20, 8}
Calculate Pi/Wi for all the items:

Items          1     2     3     4     5
Weights (kg)   3     3     2     5     1
Profits       10    15    10    20     8
Pi/Wi         3.3    5     5     4     8
Step 2
Arrange all the items in descending order based on Pi/Wi:

Items          5     2     3     4     1
Weights (kg)   1     3     2     5     3
Profits        8    15    10    20    10
Pi/Wi          8     5     5     4    3.3
Step 3
▪Without exceeding the knapsack capacity, insert the items in the knapsack with
maximum profit.
▪Knapsack= {5,2,3}
▪However, the knapsack can still hold 4 kg weight, but the next item having 5 kg weight
will exceed the capacity.
▪Therefore, only 4 kg weight of the 5 kg will be added in the knapsack.
Items           5     2     3     4     1
Weights (kg)    1     3     2     5     3
Profits         8    15    10    20    10
Fraction added  1     1     1    4/5    0

Hence, the knapsack holds weights = [(1 * 1) + (1 * 3) + (1 * 2) + (4/5 * 5)] = 10 kg, with a maximum profit of [(1 * 8) + (1 * 15) + (1 * 10) + (4/5 * 20)] = 49.
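The greedy procedure above can be expressed as a short Python sketch (this code and the function name are mine, not from the slides). Running it on the worked example reproduces the maximum profit of 49.

def fractional_knapsack(weights, profits, capacity):
    # Sort items by profit density Pi/Wi in descending order (the greedy criterion).
    items = sorted(zip(weights, profits), key=lambda wp: wp[1] / wp[0], reverse=True)
    total_profit = 0.0
    for w, p in items:
        if capacity >= w:                     # the whole item fits
            capacity -= w
            total_profit += p
        else:                                 # take only the fraction that still fits
            total_profit += p * (capacity / w)
            break
    return total_profit

# Worked example: weights {3, 3, 2, 5, 1}, profits {10, 15, 10, 20, 8}, capacity 10 kg
print(fractional_knapsack([3, 3, 2, 5, 1], [10, 15, 10, 20, 8], 10))   # 49.0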
0/1 Knapsack Problem
▪In the 0/1 knapsack problem, each item is either taken completely or not taken at all.
▪Unlike in the fractional knapsack, items are always stored fully, without using any fractional part of them.
▪That is why this method is known as the 0-1 Knapsack problem or Binary Knapsack.
▪For example, suppose we have two items having weights 2 kg and 3 kg, respectively.
▪If we pick the 2 kg item, we cannot take only 1 kg out of it (the item is not divisible); we have to pick the 2 kg item completely.
Problem: Consider the following instance for the simple knapsack problem. Find
the solution using the greedy method.
N=8
P = {11, 21, 31, 33, 43, 53, 55, 65}
W = {1, 11, 21, 23, 33, 43, 45, 55}
M = 110
Solution:
Let us arrange the items in decreasing order of profit density (Pi/Wi).
Assume that the items are labeled (I1, I2, I3, I4, I5, I6, I7, I8), with
profits P = {11, 21, 31, 33, 43, 53, 55, 65} and
weights W = {1, 11, 21, 23, 33, 43, 45, 55}.
For this instance the items are already in decreasing order of Pi/Wi, so they are processed in the order I1, I2, ..., I8.
▪We shall select the items one by one from the above table.
▪We check the feasibility of each item: if including it does not exceed the knapsack capacity, we add it.
▪Otherwise, we skip the current item and process the next one.
▪We stop when the knapsack is full or all items have been scanned.
Iteration 1 :
Weight = w1 = 1
Weight ≤ M, so select I1
S = { I1 }, Weight = 1, P = 11
Iteration 2 :
Weight = (Weight + w2) = 1 + 11 = 12
Weight ≤ M, so select I2
S = { I1, I2 }, Weight = 12, P = 11 + 21 = 32
Iteration 3 :
Weight = (Weight + w3) = 12 + 21 = 33
Weight ≤ M, so select I3
S = { I1, I2, I3 }, Weight = 33, P = 32 + 31 = 63
Iteration 4 :
Weight = (Weight + w4) = 33 + 23 = 56
Weight ≤ M, so select I4
S = { I1, I2, I3, I4 }, Weight = 56, P = 63 + 33 = 96
Iteration 5 :
Weight = (Weight + w5) = 56 + 33 = 89
Weight ≤ M, so select I5
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 96 + 43 = 139
Iteration 6 :
Weight = (Weight + w6) = 89 + 43 = 132
Weight > M, so reject I6
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 139
Iteration 7 :
Weight = (Weight + w7) = 89 + 45 = 134
Weight > M, so reject I7
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 139
Iteration 8 :
Weight = (Weight + w8) = 89 + 55 = 144
Weight > M, so reject I8
Since all items have been scanned, the final greedy solution is S = { I1, I2, I3, I4, I5 } with Weight = 89 and Profit = 139.
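A minimal Python sketch of this greedy procedure (the code and the function name are my own, not from the slides); it scans the items in decreasing order of profit density and adds each whole item that still fits.

def greedy_knapsack(profits, weights, capacity):
    # Process item indices in decreasing order of profit density P[i] / W[i].
    order = sorted(range(len(profits)), key=lambda i: profits[i] / weights[i], reverse=True)
    chosen, total_weight, total_profit = [], 0, 0
    for i in order:
        if total_weight + weights[i] <= capacity:   # feasibility check
            chosen.append(i + 1)                    # 1-based labels: I1, I2, ...
            total_weight += weights[i]
            total_profit += profits[i]
    return chosen, total_weight, total_profit

P = [11, 21, 31, 33, 43, 53, 55, 65]
W = [1, 11, 21, 23, 33, 43, 45, 55]
print(greedy_knapsack(P, W, 110))   # ([1, 2, 3, 4, 5], 89, 139)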
That is, at every step we can make the choice that looks best at the moment in order to arrive at an optimal solution of the complete problem.
Activity Selection Problem
Example
In this example, we take the start and finish time of activities as follows:
start = [1, 3, 2, 0, 5, 8, 11]
finish = [3, 4, 5, 7, 9, 10, 12]
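A minimal Python sketch of the greedy activity-selection procedure for these arrays (the code is my own, not from the slides): sort activities by finish time and keep picking the next activity whose start time is not earlier than the finish time of the last selected activity.

def activity_selection(start, finish):
    # Sort activity indices by finish time (the greedy criterion).
    order = sorted(range(len(start)), key=lambda i: finish[i])
    selected, last_finish = [], float("-inf")
    for i in order:
        if start[i] >= last_finish:   # compatible with the previously chosen activity
            selected.append(i)
            last_finish = finish[i]
    return selected

start = [1, 3, 2, 0, 5, 8, 11]
finish = [3, 4, 5, 7, 9, 10, 12]
print(activity_selection(start, finish))   # [0, 1, 4, 6]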
Dynamic Programming
▪The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller possible sub-problems.
▪But unlike divide and conquer, these sub-problems are not solved independently.
▪Rather, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
▪Dynamic programming is mainly used for optimization problems, i.e., problems in which we are trying to find the minimum or the maximum solution.
The following are the steps that dynamic programming follows:
▪It stores the results of subproblems. The process of storing the results of subproblems is known as memoization.
▪It reuses them so that the same sub-problem is not calculated more than once.
1) Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method.
Here, memoization is essentially recursion plus caching.
Recursion means the function calls itself, while caching means storing the intermediate results.
Advantages
It is very easy to understand and implement.
It solves a subproblem only when it is required.
It is easy to debug.
Disadvantages
It uses recursion, which occupies more memory in the call stack. Sometimes, when the recursion is too deep, a stack overflow condition may occur.
It occupies more memory, which degrades the overall performance.
2) Bottom-up approach
▪The bottom-up approach is another technique that can be used to implement dynamic programming.
▪It uses the tabulation technique to implement the dynamic programming approach.
▪It solves the same kind of problems, but it removes the recursion.
▪If we remove the recursion, there is no stack overflow issue and no overhead of recursive function calls.
▪In this tabulation technique, we solve the sub-problems and store the results in a matrix (table).
Algorithms: 0/1 Knapsack Problem using Dynamic Programming
In the following recursion tree, K() refers to knapSack(). The two parameters indicated in the recursion tree are n and W.
The recursion tree is for the following sample inputs:
weight[] = {1, 1, 1}, W = 2, profit[] = {10, 20, 30}
                            K(3, 2)
                             /   \
                      /                 \
            K(2, 2)                         K(2, 1)
              /  \                            /  \
         /            \                    /       \
    K(1, 2)          K(1, 1)           K(1, 1)    K(1, 0)
      / \              / \               / \
    /     \          /     \           /     \
K(0, 2) K(0, 1)  K(0, 1) K(0, 0)   K(0, 1) K(0, 0)

Recursion tree for knapsack capacity 2 units and 3 items of 1 unit weight each.
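A minimal Python sketch of the recursive knapSack() function that this tree illustrates (the exact code is not on the slides, so treat this as an assumed reconstruction). Repeated subproblems such as K(1, 1) are exactly what memoization, here added with functools.lru_cache, avoids recomputing; this is the top-down dynamic programming approach.

from functools import lru_cache

weight = [1, 1, 1]
profit = [10, 20, 30]

@lru_cache(maxsize=None)             # caching: each (n, W) pair is solved only once
def knapSack(n, W):
    if n == 0 or W == 0:             # no items left, or no capacity left
        return 0
    if weight[n - 1] > W:            # the n-th item cannot fit: exclude it
        return knapSack(n - 1, W)
    # Otherwise take the better of excluding or including the n-th item.
    return max(knapSack(n - 1, W),
               profit[n - 1] + knapSack(n - 1, W - weight[n - 1]))

print(knapSack(3, 2))                # 50 for the sample inputs above (the items of profit 20 and 30)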
Example :-
A thief is robbing a store and can carry a maximal weight of W into his knapsack.
There are n items and weight of ith item is wi and the profit of selecting this item
is pi. What items should the thief take?
The set of items to take can be deduced from the table, starting at c[n, w] and
tracing backwards where the optimal values came from.
Let us consider that the capacity of the knapsack is W = 8 and the items are as shown in the following table:

Item      A    B    C    D
Profit    2    4    7   10
Weight    1    3    5    7
Step 1
Construct a table with the knapsack capacities (0 up to the maximum weight W) as rows and the items, with their respective weights and profits, as columns.
The values to be stored in the table are the cumulative (maximum) profits of the items whose weights do not exceed the designated capacity of each row.
Step 2
The entries for zero capacity (and for zero items) are 0, since no profit is possible there. The remaining values are filled with the maximum profit achievable with respect to the items and the capacity that can be stored in the knapsack, using the recurrence:

c[i, w] = c[i−1, w]                                  if w[i] > w
c[i, w] = max{ c[i−1, w], c[i−1, w−w[i]] + P[i] }    if w[i] ≤ w
To find the items to be added in the knapsack, locate the maximum profit in the table and trace back which items make up that profit; in this example the chosen items are the ones with weights {1, 7} (items A and D), giving the maximum profit 2 + 10 = 12.
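A minimal bottom-up (tabulation) sketch for this example (the code is my own; it follows the recurrence above). It fills the table c and then traces back from c[n, W] to recover the chosen items.

def knapsack_dp(profits, weights, W):
    n = len(profits)
    # c[i][w] = maximum profit using the first i items with capacity w.
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] <= w:
                c[i][w] = max(c[i - 1][w],
                              c[i - 1][w - weights[i - 1]] + profits[i - 1])
            else:
                c[i][w] = c[i - 1][w]
    # Trace back from c[n][W] to find which items were taken.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if c[i][w] != c[i - 1][w]:          # item i was included
            chosen.append(i - 1)
            w -= weights[i - 1]
    return c[n][W], sorted(chosen)

profits = [2, 4, 7, 10]                     # items A, B, C, D
weights = [1, 3, 5, 7]
print(knapsack_dp(profits, weights, 8))     # (12, [0, 3]) -> items A and D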
Travelling Salesman Problem (Greedy Approach)
▪Algorithm :-
1) The travelling salesman problem takes a graph G{V, E} as input and declares another graph as the output (say G’), which will record the path the salesman is going to take from one node to another.
2) The algorithm begins by sorting all the edges of the input graph G from the least distance to the largest distance.
3) The first edge selected is the edge with the least distance; one of its two vertices (say A and B) is taken as the origin node (say A).
4) Then, among the adjacent edges of the node other than the origin node (i.e. B), find the least-cost edge and add it to the output graph.
5) Continue the process with further nodes, making sure there are no cycles in the output graph and that the path reaches back to the origin node A.
6) However, if the origin is mentioned in the given problem, then the solution must always start from that node. A sketch of this nearest-neighbour style greedy heuristic is given below.
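A minimal Python sketch of a nearest-neighbour style implementation of the greedy strategy described above (the code and variable names are my own): from the current node it always takes the least-cost edge to an unvisited node, and finally returns to the origin.

def tsp_greedy(dist, origin=0):
    # dist is an n x n matrix of pairwise distances; origin is the start node.
    n = len(dist)
    tour, visited, total = [origin], {origin}, 0
    current = origin
    while len(visited) < n:
        # Greedy choice: the cheapest edge from the current node to an unvisited node.
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[current][j])
        total += dist[current][nxt]
        tour.append(nxt)
        visited.add(nxt)
        current = nxt
    total += dist[current][origin]           # close the cycle back to the origin
    tour.append(origin)
    return tour, total

# Example, using the 4-city distance matrix given in the next section:
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_greedy(d))   # ([0, 1, 3, 2, 0], 80), i.e. the tour 1 -> 2 -> 4 -> 3 -> 1 of cost 80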
Travelling Salesman Problem (Dynamic Programming Approach)
Example:
Let’s assume S is a subset of the cities {1, 2, 3, …, n}, where 1, 2, 3, …, n are the cities, and i, j are two cities.
Now cost(i, S, j) is defined as the length of the shortest path that starts at i, visits every city in S exactly once, and ends at j.
The dynamic programming algorithm would be:
Set cost(i, Φ, j) = dist(i, j): with no intermediate cities left to visit, we go directly from i to j (in particular, cost(i, Φ, i) = 0, which means we start and end at i and the cost is 0).
cost(i, S, j) = min { dist(i, k) + cost(k, S−{k}, j) }, where the minimum is taken over all cities k ∈ S.
Step 1)
The distance matrix dist(i, j) of the given cities is:

dist(i, j)    1     2     3     4
    1         0    10    15    20
    2        10     0    35    25
    3        15    35     0    30
    4        20    25    30     0
Step 2)
S is the subset of cities. According to our algorithm, for all |S| > 1, we will set the distance cost(i, S, 1) = ∞.
Here cost(i, S, j) means we are starting at city i, visiting the cities of S once, and now we are at city j.
We set this path cost as infinity because we do not know the distance yet.
So the values will be the following:
cost(2, {3, 4}, 1) = ∞ ; the notation denotes we are starting at city 2, going through cities 3 and 4, and reaching city 1. And the path cost is infinity.
Similarly,
cost(3, {2, 4}, 1) = ∞
cost(4, {2, 3}, 1) = ∞
The optimal path cost would be cost(1, {all other cities}, 1).
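A minimal Python sketch of this dynamic programming computation for the 4-city instance above (the code is my own; S is represented as a frozenset, and cities are 0-based, so city 1 of the notes is index 0). It implements the recurrence cost(i, S, j) = min over k in S of dist(i, k) + cost(k, S−{k}, j).

from functools import lru_cache

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

@lru_cache(maxsize=None)                     # memoize so each (i, S, j) is solved only once
def cost(i, S, j):
    # Shortest path that starts at i, visits every city in S exactly once, and ends at j.
    if not S:
        return dist[i][j]                    # no intermediate cities left: go directly
    return min(dist[i][k] + cost(k, S - {k}, j) for k in S)

other_cities = frozenset(range(1, len(dist)))          # cities {2, 3, 4} of the notes
print(cost(0, other_cities, 0))                        # 80, e.g. the tour 1 -> 2 -> 4 -> 3 -> 1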
Floyd-Warshall Algorithm
▪The Floyd-Warshall algorithm is for solving the all-pairs shortest-path problem.
▪It is an algorithm for finding the shortest path between all pairs of vertices in a weighted graph.
Follow the steps below to find the shortest path between all the pairs of vertices.
▪Let k be the intermediate vertex in the shortest path from source to destination.
▪In the first step, k is vertex 1, and we create the matrix A1: for every pair of vertices we calculate the distance from the source vertex to the destination vertex through this intermediate vertex k, and keep it if it is shorter than the current distance.
▪While creating A1, the elements in the first row and the first column are left as they are (in general, while creating Ak, the k-th row and the k-th column remain unchanged).
In the next step, k is the second vertex (i.e. vertex 2), and the matrix A2 is created. The remaining steps are the same as mentioned above.
Similarly, A3 and A4 are also created.
Time Complexity
There are three nested loops, each running n times, and the work inside the innermost loop is constant. So, the time complexity of the Floyd-Warshall algorithm is O(n³).
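A minimal Python sketch of the three-loop Floyd-Warshall computation described above (the code is my own; missing edges are encoded with math.inf).

import math

def floyd_warshall(dist):
    # dist is an n x n adjacency matrix: dist[i][j] = edge weight, math.inf if no edge, 0 on the diagonal.
    n = len(dist)
    A = [row[:] for row in dist]             # A0: start from the direct distances
    for k in range(n):                       # k is the intermediate vertex (producing A1, A2, ..., An)
        for i in range(n):
            for j in range(n):
                # Keep the shorter of the current path and the path through vertex k.
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

# Small assumed example graph (not from the slides):
INF = math.inf
graph = [[0,   3, INF,   7],
         [8,   0,   2, INF],
         [5, INF,   0,   1],
         [2, INF, INF,   0]]
for row in floyd_warshall(graph):
    print(row)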
Assignment Questions
2) Write an algorithm for Knapsack greedy problem. Find an optimal solution for
following knapsack problem: n=4, M=70, w= {10, 20, 30, 40}, P = {20, 30, 40, 50}
3) State and explain the principle of dynamic programming. Name the elements of dynamic programming.
4) Differentiate between dynamic programming and Greedy method.
5) Apply Warshall’s Algorithm to find the transitive closure of the digraph defined
by the following adjacency matrix.
0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0
10) Explain Floyd’s algorithm for the all-pairs shortest path problem with an example and analyze its efficiency.
THANK YOU