Dept. of Computer Science & Engineering.
UNIT NO: 3
Greedy Techniques and Dynamic Programming
UNIT 3 Contents
• Greedy strategy: introduction, principle
• Algorithms: greedy knapsack problem
• Scheduling algorithm: activity selection problem
Greedy strategy
▪ Among all algorithmic approaches, the simplest and most straightforward is the greedy method.
▪We encounter various types of computational problems each of
which requires a different technique for solving.
▪A problem that requires a maximum or minimum result is the
optimization problem.
▪There are broadly 3 ways to solve optimization problems:
1. The greedy method
2. Dynamic programming
3. Branch and bound technique
Greedy strategy
▪A greedy algorithm is an algorithmic paradigm that follows the problem-solving
heuristic of making the locally optimal choice at each stage with the hope of
finding a global optimum.
▪In other words, a greedy algorithm chooses the best possible option at each step,
without considering the consequences of that choice on future steps.
▪The greedy algorithm is useful when a problem can be divided into smaller subproblems, and the solutions of the subproblems can be combined to solve the overall problem.
▪A classic example of a problem that can be solved using a greedy algorithm is the
“coin change” problem.
▪The problem is to make change for a given amount of money using the least
possible number of coins.
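The coin-change idea can be sketched in a few lines. The denominations below are illustrative (not from the slides), and note that the greedy choice is only guaranteed optimal for canonical coin systems such as this one:

```python
def greedy_coin_change(amount, denominations):
    """Make change using as few coins as possible by always taking the
    largest denomination that still fits (the greedy choice)."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# Example: change for 67 with denominations 1, 5, 10, 25
print(greedy_coin_change(67, [1, 5, 10, 25]))  # [25, 25, 10, 5, 1, 1]
```

At each step the algorithm commits to the largest coin that fits and never reconsiders, which is exactly the locally optimal choice described above.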
▪ A greedy problem statement follows the two properties mentioned below:
1) Greedy Choice Property: choosing the best option at each phase can lead to a global (overall) optimal solution.
2) Optimal Substructure: if an optimal solution to the complete problem contains the optimal solutions to the subproblems, the problem has an optimal substructure.
Example of Greedy Algorithm
Problem Statement: Find the best route to reach the destination
city from the given starting point using a greedy method.
Solution steps :
▪Start from the source vertex.
▪Pick one vertex at a time with a minimum edge weight (distance) from the
source vertex.
▪Add the selected vertex to a tree structure if the connecting edge does not form
a cycle.
▪Keep adding adjacent fringe vertices to the tree until you reach the destination vertex.
These steps show how paths are picked, one at a time, in order to reach the destination city.
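The steps above can be sketched in Python. The city graph below is a hypothetical example (the slide's figure is not reproduced), and the tree-growing rule is the minimum-fringe-edge choice just described:

```python
import heapq

def greedy_route(graph, source, destination):
    """Greedily grow a tree from the source: always take the cheapest
    fringe edge to an unvisited vertex (skipping edges that would form
    a cycle) until the destination is reached."""
    visited = {source}
    path_edges = []
    # Min-heap of (weight, from_vertex, to_vertex) fringe edges
    fringe = [(w, source, v) for v, w in graph[source]]
    heapq.heapify(fringe)
    while fringe:
        w, u, v = heapq.heappop(fringe)
        if v in visited:
            continue  # this edge would form a cycle
        visited.add(v)
        path_edges.append((u, v, w))
        if v == destination:
            return path_edges
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(fringe, (nw, v, nxt))
    return None  # destination unreachable

# Hypothetical city graph: adjacency list of (neighbor, distance)
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 2)],
    "D": [("B", 4), ("C", 2)],
}
print(greedy_route(graph, "A", "D"))
```

Each pop takes the minimum-weight fringe edge, mirroring the "pick one vertex at a time with minimum edge weight" step above.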
Applications of Greedy Algorithms:
▪Finding an optimal solution (Activity selection, Fractional Knapsack, Job
Sequencing, Huffman Coding).
▪Finding close-to-optimal solutions for NP-hard problems like TSP.
▪Greedy algorithm is used to select the jobs that will be completed
before their respective deadlines and maximizes the profit.
▪Greedy algorithms are used to cluster data points together based on
certain criteria, such as distance or similarity.
▪ The problem is broken down into smaller subproblems that are solved independently, but many of these subproblems are identical or overlapping.
Advantages of the Greedy Approach:
▪The greedy approach is easy to implement.
▪Greedy algorithms typically have lower time complexity.
▪Greedy algorithms can be used for optimization, or for finding close-to-optimal solutions in the case of hard problems.
▪The greedy approach can be very efficient, as it does not
require exploring all possible solutions to the problem.
▪The solutions to subproblems can be stored in a table, which
can be reused for similar problems.
Disadvantages of the Greedy Approach:
▪The local optimal solution may not always be globally optimal.
▪Lack of proof of optimality.
▪The greedy approach is only applicable to problems that have the greedy-choice property, meaning not all problems can be solved using this approach.
▪The greedy approach is not easily adaptable to changing problem
conditions.
Greedy Knapsack Problem
▪The fundamental idea behind all families of knapsack problems is the selection of some items, each with a profit and a weight value, to be packed into one or more knapsacks of limited capacity.
▪The knapsack problem has two versions:
1) Binary or 0/1 knapsack: an item cannot be broken down into parts.
2) Fractional knapsack: an item can be divided into parts.
Fractional Knapsack Problem
▪The fractional knapsack problem is one of the variants of the knapsack problem.
▪In the fractional knapsack, items may be broken into parts in order to maximize the profit.
▪The problem in which we may break an item is known as the fractional knapsack problem.
▪To explain this problem a little more easily, consider a test with 12 questions, 10 marks each, out of which only 10 should be attempted to get the maximum mark of 100.
▪The test taker must now pick the most profitable questions, the ones he is confident in, to achieve the maximum mark.
▪However, he cannot attempt all 12 questions, since no extra marks are awarded for the additional answers.
▪This is the most basic real-world application of the knapsack problem.
Knapsack Algorithm
▪The weights (Wi) and profit values (Pi) of the items to be added in the knapsack are
taken as an input for the fractional knapsack algorithm and the subset of the items
added in the knapsack without exceeding the limit and with maximum profit is
achieved as the output.
Algorithm
▪Consider all the items with their weights and profits.
▪Calculate Pi/Wi for all the items and sort the items in descending order of their Pi/Wi values.
▪Without exceeding the capacity, add the items into the knapsack.
▪If the knapsack can still hold some weight, but the weight of the next item exceeds the limit, add the fractional part of that item.
▪Hence the name: fractional knapsack problem.
Example
For the given set of items and the knapsack capacity of 10 kg, find the subset of the
items to be added in the knapsack such that the profit is maximum.
Items          1    2    3    4    5
Weights (kg)   3    3    2    5    1
Profits        10   15   10   20   8
Solution
Step 1
Given, n = 5
Wi = {3, 3, 2, 5, 1}
Pi = {10, 15, 10, 20, 8}
Calculate Pi/Wi for all the items:
Items          1     2    3    4    5
Weights (kg)   3     3    2    5    1
Profits        10    15   10   20   8
Pi/Wi          3.3   5    5    4    8
Step 2
Arrange all the items in descending order of Pi/Wi:
Items          5    2    3    4    1
Weights (kg)   1    3    2    5    3
Profits        8    15   10   20   10
Pi/Wi          8    5    5    4    3.3
Step 3
▪Without exceeding the knapsack capacity, insert the items into the knapsack with maximum profit.
▪Knapsack = {5, 2, 3}
▪However, the knapsack can still hold 4 kg, but the next item, weighing 5 kg, would exceed the capacity.
▪Therefore, only 4 kg of the 5 kg item is added to the knapsack.
Items          5    2    3    4    1
Weights (kg)   1    3    2    5    3
Profits        8    15   10   20   10
Knapsack       1    1    1    4/5  0
Hence, the knapsack holds weight = [(1 × 1) + (1 × 3) + (1 × 2) + (4/5 × 5)] = 10, with a maximum profit of [(1 × 8) + (1 × 15) + (1 × 10) + (4/5 × 20)] = 49.
▪Note: https://www.youtube.com/watch?v=mMhC9vuA-70 (video lecture link for fractional knapsack)
Algorithm GREEDY_FRACTIONAL_KNAPSACK(X, V, W, M)
// Description: Solve the knapsack problem using the greedy approach
// Input:
//   X: an array of n items
//   V: an array of profits associated with each item
//   W: an array of weights associated with each item
//   M: capacity of the knapsack
// Output:
//   SW: weight of selected items
//   SP: profit of selected items
// Items are presorted in decreasing order of pi = vi / wi ratio

S ← Φ       // set of selected items, initially empty
SW ← 0      // weight of selected items
SP ← 0      // profit of selected items
i ← 1
while i ≤ n do
    if (SW + W[i]) ≤ M then
        S ← S ∪ X[i]
        SW ← SW + W[i]
        SP ← SP + V[i]
    else
        frac ← (M − SW) / W[i]
        S ← S ∪ X[i] * frac      // add fraction of item X[i]
        SP ← SP + V[i] * frac    // add fraction of profit
        SW ← SW + W[i] * frac    // add fraction of weight
    end
    i ← i + 1
end
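A minimal Python sketch of this algorithm, checked against the worked example above (capacity 10, maximum profit 49):

```python
def fractional_knapsack(weights, profits, capacity):
    """Greedy fractional knapsack: sort items by profit/weight ratio
    (descending), take whole items while they fit, then a fraction of
    the next item to fill the remaining capacity."""
    items = sorted(zip(weights, profits),
                   key=lambda wp: wp[1] / wp[0], reverse=True)
    total_weight = 0.0
    total_profit = 0.0
    for w, p in items:
        if total_weight + w <= capacity:
            total_weight += w
            total_profit += p
        else:
            frac = (capacity - total_weight) / w
            total_weight += w * frac
            total_profit += p * frac
            break  # knapsack is now full
    return total_profit

# The worked example: capacity 10 kg
print(fractional_knapsack([3, 3, 2, 5, 1], [10, 15, 10, 20, 8], 10))  # 49.0
```

Only the last selected item is ever fractional, which matches the 4/5 share of item 4 in the table above.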
0-1 Knapsack Problem
▪In the 0/1 knapsack problem, each item is either placed in the knapsack completely or not at all.
▪Unlike the fractional knapsack, items are always stored whole; no fractional part of an item is used.
▪Either the item is added to the knapsack or it is not.
▪That is why this method is known as the 0-1 knapsack problem, or binary knapsack.
▪For example, suppose we have two items weighing 2 kg and 3 kg, respectively.
▪If we pick the 2 kg item, we cannot take just 1 kg of it (the item is not divisible);
▪we have to pick the 2 kg item completely.
Examples for Binary Knapsack
Problem: Consider the following instance for the simple knapsack problem. Find
the solution using the greedy method.
N=8
P = {11, 21, 31, 33, 43, 53, 55, 65}
W = {1, 11, 21, 23, 33, 43, 45, 55}
M = 110
Solution:
Let us arrange items by decreasing order of profit density.
Assume that items are labeled as (I1, I2, I3, I4, I5, I6, I7, I8), have
profit P = {11, 21, 31, 33, 43, 53, 55, 65} and
weight W = {1, 11, 21, 23, 33, 43, 45, 55}.
Item   Weight   Value   pi = vi / wi
I1     1        11      11.0
I2     11       21      1.91
I3     21       31      1.48
I4     23       33      1.44
I5     33       43      1.30
I6     43       53      1.23
I7     45       55      1.22
I8     55       65      1.18
▪We select items one by one from the above table.
▪We check the feasibility of each item: if including it does not exceed the knapsack capacity, we add it.
▪Otherwise, we skip the current item and process the next.
▪We stop when the knapsack is full or all items have been scanned.
Initialize, Weight = 0, P = 0, knapsack capacity M = 110, Solution set S = { }.
Iteration 1 :
Weight = (Weight + w1) = 0 + 1 = 1
Weight ≤ M, so select I1
S = { I1 }, Weight = 1, P = 0 + 11 = 11
Iteration 2 :
Weight = (Weight + w2) = 1 + 11 = 12
Weight ≤ M, so select I2
S = { I1, I2 }, Weight = 12, P = 11 + 21 = 32
Iteration 3 :
Weight = (Weight + w3) = 12 + 21 = 33
Weight ≤ M, so select I3
S = { I1, I2, I3 }, Weight = 33, P = 32 + 31 = 63
Iteration 4 :
Weight = (Weight + w4) = 33 + 23 = 56
Weight ≤ M, so select I4
S = { I1, I2, I3, I4 }, Weight = 56, P = 63 + 33 = 96
Iteration 5 :
Weight = (Weight + w5) = 56 + 33 = 89
Weight ≤ M, so select I5
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 96 + 43 = 139
Iteration 6 :
Weight = (Weight + w6) = 89 + 43 = 132
Weight > M, so reject I6
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 139
Iteration 7 :
Weight = (Weight + w7) = 89 + 45 = 134
Weight > M, so reject I7
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 139
Iteration 8 : Weight = (Weight + w8) = 89 + 55 = 144
Weight > M, so reject I8
S = { I1, I2, I3, I4, I5 }, Weight = 89, P = 139
All items are tested.
The greedy algorithm selects items { I1, I2, I3, I4, I5 }, and
gives a profit of 139 units.
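The iteration trace above can be reproduced with a short sketch. Note that greedy-by-density is only a heuristic for the 0/1 knapsack; it is not guaranteed optimal in general:

```python
def greedy_binary_knapsack(weights, profits, capacity):
    """Greedy heuristic for 0/1 knapsack: scan items in decreasing
    profit-density order, taking each whole item that still fits."""
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    selected, total_w, total_p = [], 0, 0
    for i in order:
        if total_w + weights[i] <= capacity:
            selected.append(i + 1)  # 1-based item labels as in the example
            total_w += weights[i]
            total_p += profits[i]
    return sorted(selected), total_w, total_p

# The worked example: N = 8, M = 110
W = [1, 11, 21, 23, 33, 43, 45, 55]
P = [11, 21, 31, 33, 43, 53, 55, 65]
print(greedy_binary_knapsack(W, P, 110))  # ([1, 2, 3, 4, 5], 89, 139)
```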
Scheduling Algorithm- Activity Selection Problem
Activity Selection is the problem of selecting non-conflicting tasks based on their start and end times; it can be solved in O(N log N) time using a simple greedy approach.
The problem statement for Activity Selection is: "Given a set of n activities with their start and finish times, select the maximum number of non-conflicting activities that can be performed by a single person, given that the person can handle only one activity at a time."
The Activity Selection problem follows the greedy approach, i.e., at every step we make the choice that looks best at the moment, in order to obtain the optimal solution of the complete problem.
▪The activity selection problem can be solved using the greedy approach.
▪Our task is to maximize the number of non-conflicting activities.
▪Two activities A1 and A2 are said to be non-conflicting if S1 ≥ F2 or S2 ≥ F1,
▪where S and F denote the start and finish time respectively.
Example
In this example, we take the start and finish time of activities as follows:
start = [1, 3, 2, 0, 5, 8, 11]
finish = [3, 4, 5, 7, 9, 10, 12]
1) Sorted by finish time, activity 0 gets selected.
2) As activity 1 has a start time equal to the finish time of activity 0, it gets selected.
3) Activities 2 and 3 have start times smaller than the finish time of activity 1, so they get rejected.
4) Based on similar comparisons, activities 4 and 6 also get selected, whereas activity 5 gets rejected.
5) In all, activities 0, 1, 4 and 6 get selected, while the others get rejected.
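The selection rule above can be sketched as:

```python
def activity_selection(start, finish):
    """Greedy activity selection: sort activities by finish time, then
    repeatedly pick the first activity whose start time is not earlier
    than the finish time of the last selected activity."""
    order = sorted(range(len(start)), key=lambda i: finish[i])
    selected = []
    last_finish = float("-inf")
    for i in order:
        if start[i] >= last_finish:
            selected.append(i)
            last_finish = finish[i]
    return selected

# The example above
start = [1, 3, 2, 0, 5, 8, 11]
finish = [3, 4, 5, 7, 9, 10, 12]
print(activity_selection(start, finish))  # [0, 1, 4, 6]
```

Sorting dominates the running time, giving the O(N log N) bound stated above.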
Dynamic Programming
▪Dynamic programming approach is similar to divide and conquer in breaking down the
problem into smaller and yet smaller possible sub-problems.
▪But unlike divide and conquer, these sub-problems are not solved independently.
▪Rather, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
▪The main use of dynamic programming is to solve optimization problems.
▪Here, optimization problems are those in which we are trying to find the minimum or the maximum solution of a problem.
▪ The dynamic programming guarantees to find the optimal solution of a
problem if the solution exists.
The following are the steps that dynamic programming follows:
▪It breaks the complex problem down into simpler subproblems.
▪It finds the optimal solutions to these sub-problems.
▪It stores the results of the subproblems; the process of storing the results of subproblems is known as memoization.
▪It reuses them so that the same sub-problem is not calculated more than once.
▪Finally, it calculates the result of the complex problem.
Approaches of Dynamic Programming :
There are two approaches to dynamic programming:
Top-down approach
Bottom-up approach
1)Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method.
Here, memoization equals recursion plus caching.
Recursion means the function calls itself, while caching means storing the intermediate results.
Advantages
It is very easy to understand and implement.
It solves a subproblem only when it is required.
It is easy to debug.
Disadvantages
It uses recursion, which occupies more memory in the call stack. Sometimes, when the recursion is too deep, a stack overflow occurs.
It occupies more memory, which degrades overall performance.
2)Bottom-Up approach
▪The bottom-up approach is another technique that can be used to implement dynamic programming.
▪It uses the tabulation technique.
▪It solves the same kinds of problems, but it removes the recursion.
▪With the recursion removed, there is no stack overflow issue and no overhead of recursive function calls.
▪In this tabulation technique, we solve the problems and store the results in a matrix.
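The two approaches can be contrasted on a small example. Fibonacci is an illustrative choice here, not a problem from the slides:

```python
from functools import lru_cache

# Top-down: recursion + caching (memoization)
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: tabulation, no recursion
def fib_tab(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_tab(30))  # 832040 832040
```

Both compute each subproblem once; the top-down version fills its cache on demand, while the bottom-up version fills the table in a fixed order with no call-stack overhead.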
Algorithms: 0/1 Knapsack Problem using Dynamic Programming
Algorithm DP_BINARY_KNAPSACK(V, W, M)
// Description: Solve the binary knapsack problem using dynamic programming
// Input: Set of items X, set of weights W, profits of items V and knapsack capacity M
// Output: Table V, which holds the solution of the problem

for i ← 0 to n do
    V[i, 0] ← 0
end
for j ← 0 to M do
    V[0, j] ← 0
end
for i ← 1 to n do
    for j ← 1 to M do
        if w[i] ≤ j then
            V[i, j] ← max{ V[i − 1, j], v[i] + V[i − 1, j − w[i]] }
        else
            V[i, j] ← V[i − 1, j]   // w[i] > j
        end
    end
end
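The table recurrence can be sketched in Python; the sample call uses the items from the worked example further below (weights {1, 3, 5, 7}, profits {2, 4, 7, 10}, W = 8):

```python
def dp_binary_knapsack(weights, profits, capacity):
    """0/1 knapsack by dynamic programming: V[i][j] is the best profit
    achievable using the first i items with capacity j."""
    n = len(weights)
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for j in range(1, capacity + 1):
            if w <= j:
                # Either skip item i, or take it and use the remaining capacity
                V[i][j] = max(V[i - 1][j], p + V[i - 1][j - w])
            else:
                V[i][j] = V[i - 1][j]
    return V[n][capacity]

print(dp_binary_knapsack([1, 3, 5, 7], [2, 4, 7, 10], 8))  # 12
```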
Dynamic Programming Approach for 0/1 Knapsack Problem
Memoization Approach for 0/1 Knapsack Problem using Dynamic Programming:
In the following recursion tree, K() refers to knapSack(). The two parameters shown in the recursion tree are n and W.
The recursion tree is for the following sample inputs:
weight[] = {1, 1, 1}, W = 2, profit[] = {10, 20, 30}

                         K(3, 2)
                      /           \
                K(2, 2)            K(2, 1)
               /       \          /       \
         K(1, 2)    K(1, 1)   K(1, 1)   K(1, 0)
         /     \    /     \    /     \
   K(0, 2) K(0, 1) K(0, 1) K(0, 0) K(0, 1) K(0, 0)

Recursion tree for a knapsack of capacity 2 units and 3 items of 1 unit weight each.
Example :-
A thief is robbing a store and can carry a maximal weight of W into his knapsack.
There are n items and weight of ith item is wi and the profit of selecting this item
is pi. What items should the thief take?
▪The algorithm takes the following inputs
1)The maximum weight W
2)The number of items n
3)The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>
The set of items to take can be deduced from the table, starting at c[n, w] and
tracing backwards where the optimal values came from.
Let us consider that the capacity of the knapsack is W = 8 and the items are as
shown in the following table
Item     A    B    C    D
Profit   2    4    7    10
Weight   1    3    5    7
Step 1
Construct a table with the possible knapsack weights (0 to W) as rows and the items, with their respective weights and profits, as columns.
The values stored in the table are the cumulative profits of the items whose weights do not exceed the designated maximum weight of each row.
2) The remaining values are filled with the maximum profit achievable with respect to the items and the weight per column that can be stored in the knapsack.
The formula used to store the profit values is:
c[i, w] = max{ c[i−1, w], c[i−1, w−w[i]] + P[i] }
To find the items to be added to the knapsack, locate the maximum profit in the table and identify the items that make up that profit; in this example, they are the items with weights {1, 7} (items A and D).
The optimal solution is {1, 7}, with a maximum profit of 12.
▪The Traveling Salesperson Problem
The travelling salesman problem is a graph computational problem in which a salesman needs to visit all the cities (represented as nodes in a graph) in a list exactly once, and the distances (represented as edge weights) between all these cities are known.
The solution to be found is the shortest possible route in which the salesman visits all the cities and returns to the origin city.
▪Algorithm :-
1) The travelling salesman problem takes a graph G{V, E} as input and declares another graph (say G') as output, which records the path the salesman takes from one node to another.
2) The algorithm begins by sorting all the edges of the input graph G from the least distance to the largest distance.
3) The first edge selected is the edge with the least distance, with one of its two vertices (say A and B) being the origin node (say A).
4) Then, among the edges adjacent to the node other than the origin (B), find the least-cost edge and add it to the output graph.
5) Continue the process with further nodes, making sure there are no cycles in the output graph and that the path reaches back to the origin node A.
6) However, if an origin is specified in the given problem, then the solution must always start from that node.
Example
Let S be a subset of the cities {1, 2, 3, …, n}, where 1, 2, 3, …, n are the cities and i, j are two cities in that subset.
Now cost(i, S, j) is defined as the length of the shortest path that visits every node in S exactly once, starting at i and ending at j.
The dynamic programming algorithm is:
Set cost(i, {i}, i) = 0, which means we start and end at i, and the cost is 0.
When |S| > 1, we define cost(i, S, 1) = ∞ where i ≠ 1, because initially we do not know the exact cost to reach city 1 from city i through the other cities.
Now, we need to start at 1 and complete the tour.
We need to select the next city in such a way that
cost(i, S, j) = min { cost(i, S−{i}, j) + dist(i, j) }, where i ∈ S and i ≠ j
dist(i, j)   1    2    3    4
1            0    10   15   20
2            10   0    35   25
3            15   35   0    30
4            20   25   30   0
Let’s see how our algorithm works:
Step 1)
We consider a journey that starts at city 1, visits the other cities once, and returns to city 1.
Step 2)
S is the subset of cities. According to our algorithm, for all |S| > 1 we set the distance cost(i, S, 1) = ∞.
Here, cost(i, S, j) means we start at city i, visit the cities of S once, and are now at city j.
We set this path cost to infinity because we do not know the distance yet.
So the values will be the following:
cost(2, {3, 4}, 1) = ∞; the notation denotes that we start at city 2, go through cities 3 and 4, and reach city 1. The path cost is still infinity.
Similarly,
cost(3, {2, 4}, 1) = ∞
cost(4, {2, 3}, 1) = ∞
Step 3) Now, for all subsets S, we need to find the following:
cost(i, S, j) = min { cost(i, S−{i}, j) + dist(i, j) }, where j ∈ S and i ≠ j
That means the minimum-cost path starts at i, goes through the subset of cities once, and returns to city j.
Considering that the journey starts at city 1, the optimal path cost is cost(1, {other cities}, 1).
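This recurrence is the Held-Karp dynamic program. A minimal sketch, fixing city 1 (index 0) as the start and using the distance matrix above:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for TSP: C[(S, j)] is the cost of the shortest path
    that starts at city 0, visits every city in set S exactly once, and
    ends at city j (cities are 0-indexed)."""
    n = len(dist)
    C = {}
    # Base cases: go directly from city 0 to city j
    for j in range(1, n):
        C[(frozenset([j]), j)] = dist[0][j]
    # Build up over subsets of increasing size
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S if k != j)
    # Close the tour back to city 0
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in full)

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80
```

For this matrix the optimal tour is 1 → 2 → 4 → 3 → 1 with cost 10 + 25 + 30 + 15 = 80.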
Floyd-Warshall Algorithm
▪The Floyd-Warshall algorithm solves the all-pairs shortest-path problem.
▪It is an algorithm for finding the shortest path between every pair of vertices in a weighted graph.
▪This algorithm follows the dynamic programming approach to find the shortest paths.
▪The Floyd-Warshall algorithm is also called Floyd's algorithm, the Roy-Floyd algorithm, the Roy-Warshall algorithm, or the WFI algorithm.
How does the Floyd-Warshall algorithm work?
Let the given graph be a weighted graph on n vertices (the example graph from the original slide is not reproduced here).
Follow the steps below to find the shortest path between every pair of vertices.
1) Create a matrix A^0 of dimension n×n, where n is the number of vertices.
2) The rows and columns are indexed by i and j respectively.
3) i and j are the vertices of the graph.
4) Each cell A[i][j] is filled with the distance from the i-th vertex to the j-th vertex.
5) If there is no path from the i-th vertex to the j-th vertex, the cell is left as infinity.
6) Now, create a matrix A^1 using matrix A^0.
7) The elements in the first column and the first row are left as they are.
8) The remaining cells are filled in the following way.
9) Let k be the intermediate vertex in the shortest path from source to destination.
10) In this step, k is the first vertex.
11) A[i][j] is filled with (A[i][k] + A[k][j]) if (A[i][j] > A[i][k] + A[k][j]).
That is, if the direct distance from the source to the destination is greater than the path through vertex k, then the cell is filled with A[i][k] + A[k][j].
12) In this step k is vertex 1; we calculate the distance from the source vertex to the destination vertex through this vertex k.
Similarly, A^2 is created using A^1. The elements in the second column and the second row are left as they are.
In this step, k is the second vertex (i.e. vertex 2). The remaining steps are the same as above.
Similarly, A^3 and A^4 are also created.
A^4 gives the shortest path between each pair of vertices.
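The matrix updates described above can be sketched directly. The 4-vertex digraph below is a hypothetical example, since the slide's graph is not shown:

```python
INF = float("inf")

def floyd_warshall(dist):
    """Floyd-Warshall: after processing intermediate vertex k, A[i][j]
    holds the shortest i-to-j distance using only the first k+1 vertices
    as intermediates. The matrix is updated in place, one k at a time."""
    n = len(dist)
    A = [row[:] for row in dist]  # copy of A^0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Hypothetical weighted digraph; INF marks missing edges
graph = [[0,   3,   INF, 7],
         [8,   0,   2,   INF],
         [5,   INF, 0,   1],
         [2,   INF, INF, 0]]
for row in floyd_warshall(graph):
    print(row)
```

Each pass over k corresponds to moving from A^k to A^(k+1) in the notation above.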
▪Floyd-Warshall Algorithm Complexity
1) Time Complexity
There are three nested loops, each with a constant-time body, so the time complexity of the Floyd-Warshall algorithm is O(n^3).
2) Space Complexity
The space complexity of the Floyd-Warshall algorithm is O(n^2).
▪Floyd-Warshall Algorithm Applications
1) To find the shortest paths in a directed graph.
2) To find the transitive closure of directed graphs.
3) To find the inversion of real matrices.
Assignment Questions
1) Write an algorithm for the knapsack problem using the greedy strategy.
2) Write an algorithm for Knapsack greedy problem. Find an optimal solution for
following knapsack problem: n=4, M=70, w= {10, 20, 30, 40}, P = {20, 30, 40, 50}
3) State and explain the principle of dynamic programming. Name the elements of dynamic programming.
4) Differentiate between dynamic programming and Greedy method.
5) Apply Warshall’s Algorithm to find the transitive closure of the digraph defined
by the following adjacency matrix.
0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0
6)Write the algorithm to compute 0/1 Knapsack problem using dynamic
programming and explain it.
7) Explain the Travelling Salesman Problem.
8) Explain the dynamic programming strategy. List a few applications which can be solved using dynamic programming.
9) Explain the activity selection problem in detail with an example.
10) Explain Floyd's algorithm for the all-pairs shortest path problem with an example and analyze its efficiency.
THANK YOU