
Design and Analysis of Algorithms BCS401

Module 4: Dynamic Programming & The Greedy Method
Dynamic Programming
• Dynamic programming is a technique for solving problems with overlapping subproblems.
  – Subproblems arise from a recurrence relating a given problem's solution to solutions of its smaller subproblems.
  – DP suggests solving each of the smaller subproblems only once and recording the results in a table from which a solution to the original problem can then be obtained.
• Dynamic programming can be used when the solution to a problem can be viewed as the result of a sequence of decisions.
Example-1: Knapsack Problem
• The knapsack problem can be viewed as the result of a sequence of decisions.
• We have to decide the value of xi, 1 ≤ i ≤ n.
• First we make a decision on x1, then on x2, and so on.
• An optimal sequence of decisions maximizes the objective function ∑pixi while satisfying the constraints ∑wixi ≤ m and 0 ≤ xi ≤ 1.
Example-2: File merging problem

Example-3: Shortest Path problem

Shortest Path: One way to find a shortest path from vertex i to vertex j in a directed graph G is to decide which vertex should be the second vertex, which should be the third, and so on until vertex j is reached. An optimal sequence of decisions is one that results in a path of least length.
• Greedy method: always follows a predefined procedure.
  – E.g., Prim's and Kruskal's algorithms always start with a minimum-cost edge.
• Dynamic programming: tries to find all possible solutions and then picks the best one, but is more time-consuming than the greedy method.
• Both greedy and dynamic programming try to find an optimal solution.
• Dynamic programming follows a recursive method and the principle of optimality.
• Principle of optimality: an optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
Transitive Closure

Definition: The transitive closure of a directed graph with n vertices can be defined as the n × n boolean matrix T = {tij}, in which the element in the ith row and the jth column is 1 if there exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.
Transitive Closure
• We can generate the transitive closure of a digraph with the help of depth-first search or breadth-first search.
• Since this method traverses the same digraph several times, we can use a better algorithm, called Warshall's algorithm.
• Warshall's algorithm constructs the transitive closure through a series of n × n boolean matrices R(0), . . . , R(k−1), R(k), . . . , R(n).
Warshall's Algorithm
• The element in the ith row and jth column of matrix R(k) is equal to 1 if and only if
  – there exists a directed path of positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
R(0) is the adjacency matrix.
R(1) contains info about paths that can use the first vertex as intermediate.
R(2) contains info about paths that can use the first and second vertices as intermediates, and so on.
The recurrence is

rij(k) = rij(k−1) or (rik(k−1) and rkj(k−1)),

which means:
1) If element rij is 1 in R(k−1), it remains 1 in R(k).
2) If an element rij is 0 in R(k−1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k−1).
Algorithm

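The pseudocode appeared as an image on the original slide; in its place, here is a minimal Python sketch of Warshall's algorithm, assuming the input is a 0/1 adjacency matrix (the function name and the sample digraph are illustrative):

```python
from copy import deepcopy

def warshall(adj):
    """Warshall's algorithm: transitive closure of a digraph.
    adj is the n x n boolean adjacency matrix (this is R(0))."""
    n = len(adj)
    r = deepcopy(adj)                      # R(0)
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # r[i][j] stays 1, or becomes 1 if paths i->k and k->j both exist
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Sample digraph with edges a->b, b->d, d->a, d->c (vertices numbered 0..3):
adj = [[0, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0],
       [1, 0, 1, 0]]
print(warshall(adj))
```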
Analysis
• Its time efficiency is Θ(n3).
• We can make the algorithm run faster by treating matrix rows as bit strings and employing the bitwise or operation available in most modern computer languages.
• Space efficiency
  – Although separate matrices for recording intermediate results of the algorithm are used, that can be avoided.
All Pairs Shortest Paths
• Problem definition: Given a weighted connected graph (undirected or directed), the all-pairs shortest paths problem asks to find the distances (i.e., the lengths of the shortest paths) from each vertex to all other vertices.

• Applications:
  – Communications, transportation networks, and operations research.
  – Pre-computing distances for motion planning in computer games.
All Pairs Shortest Paths

(a) Digraph. (b) Its weight matrix. (c) Its distance matrix

We can generate the distance matrix with an algorithm that is very similar to Warshall's algorithm. It is called Floyd's algorithm.
Floyd's Algorithm
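The slide's pseudocode is not reproduced in this extraction; a minimal Python sketch under the usual conventions (a weight matrix with INF marking absent edges and 0 on the diagonal) might look like this:

```python
INF = float('inf')

def floyd(w):
    """Floyd's algorithm: all-pairs shortest paths.
    w is the n x n weight matrix (INF where there is no edge, 0 on the diagonal)."""
    n = len(w)
    d = [row[:] for row in w]              # D(0) is the weight matrix
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # is it shorter to go from i to j through k?
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```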
Analysis
• Its time efficiency is Θ(n3), similar to Warshall's algorithm.
Knapsack problem using DP
• Given n items of known weights w1, . . . , wn and values v1, . . . , vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack.
• To design a DP algorithm, we need to derive a recurrence relation that expresses a solution in terms of solutions to its smaller subinstances.
• Let us consider an instance defined by the first i items, 1 ≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi, and knapsack capacity j, 1 ≤ j ≤ W.
Knapsack problem using DP
• Let F(i, j) be the value of an optimal solution to this instance.
• We can divide all the subsets of the first i items that fit the knapsack of capacity j into two categories:
  – those that do not include the ith item
  – and those that do.
• Among the subsets that do not include the ith item, the value of an optimal subset is, by definition, F(i, j) = F(i − 1, j).
Knapsack problem using DP
• Among the subsets that do include the ith item (hence, j − wi ≥ 0),
  – an optimal subset is made up of this item and
  – an optimal subset of the first i−1 items that fits into the knapsack of capacity j − wi.
  – The value of such an optimal subset is F(i, j) = vi + F(i − 1, j − wi).
• Thus, the value of an optimal solution among all feasible subsets of the first i items is the maximum of these two values:
  F(i, j) = max{F(i − 1, j), vi + F(i − 1, j − wi)} if j − wi ≥ 0, and F(i, j) = F(i − 1, j) otherwise.
Knapsack problem using DP
• It is convenient to define the initial conditions as follows: F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.
• Our goal is to find F(n, W), the maximal value of a subset of the n given items that fits into the knapsack of capacity W, and an optimal subset itself.
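The table-filling pseudocode was an image on the slide; a minimal Python sketch of the recurrence above, together with the trace-back used in the examples below, might look like this (function names are illustrative):

```python
def knapsack(weights, values, W):
    """Bottom-up DP for the knapsack problem.
    Returns the full table F; F[n][W] is the optimal value."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]        # F(0,j) = F(i,0) = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j >= weights[i - 1]:                   # item i can fit
                F[i][j] = max(F[i - 1][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
            else:                                     # item i cannot fit
                F[i][j] = F[i - 1][j]
    return F

def optimal_subset(F, weights, W):
    """Trace back through the table to recover the chosen items (1-based)."""
    items, j = [], W
    for i in range(len(weights), 0, -1):
        if F[i][j] != F[i - 1][j]:                    # item i was included
            items.append(i)
            j -= weights[i - 1]
    return items[::-1]

# Usage sketch (an assumed instance with W = 5):
w, v, W = [2, 1, 3, 2], [12, 10, 20, 15], 5
F = knapsack(w, v, W)
print(F[len(w)][W])                  # optimal value: 37
print(optimal_subset(F, w, W))       # items chosen: [1, 2, 4]
```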
Example-1
Apply the bottom-up dynamic programming algorithm to the following instance of the knapsack problem, with capacity W = 5.
What is the composition of the optimal subset?
The composition of the optimal subset is found by tracing back the computations for the entries in the table.
Example-2

Find the composition of an optimal subset by tracing back the computations.
Analysis
• The classic dynamic programming approach works bottom up: it fills a table with solutions to all smaller subproblems, each of which is solved only once.
• Drawback: some unnecessary subproblems are also solved.
• The time efficiency and space efficiency of this algorithm are both in Θ(nW).
• The time needed to find the composition of an optimal solution is in O(n).
Discussion
• The direct bottom-up approach to finding a solution to
such a recurrence leads to an algorithm that solves
common subproblems more than once and hence is
very inefficient.
• Since this drawback is not present in the top-down
approach, it is natural to try to combine the strengths
of the top-down and bottom-up approaches.
• The goal is to get a method that solves only subproblems
that are necessary and does so only once. Such a
method exists; it is based on using memory functions.

Algorithm MFKnapsack(i, j)
• Implements the memory function method for the knapsack problem.
• Input: A nonnegative integer i indicating the number of the first items being considered and a nonnegative integer j indicating the knapsack capacity.
• Output: The value of an optimal feasible subset of the first i items.
• Note: Uses as global variables the input arrays Weights[1..n], Values[1..n], and table F[0..n, 0..W] whose entries are initialized with −1's, except for row 0 and column 0, which are initialized with 0's.
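The slide's pseudocode is an image; a Python sketch following the stated convention (table F pre-filled with −1 except row 0 and column 0, which hold 0) could be written as follows. The sample instance is an assumption, matching the usual textbook example with W = 5:

```python
def mf_knapsack(i, j, weights, values, F):
    """Memory function (top-down) knapsack: compute F(i, j) only on demand."""
    if F[i][j] < 0:                              # not computed yet
        if j < weights[i - 1]:                   # item i cannot fit
            value = mf_knapsack(i - 1, j, weights, values, F)
        else:
            value = max(mf_knapsack(i - 1, j, weights, values, F),
                        values[i - 1] +
                        mf_knapsack(i - 1, j - weights[i - 1],
                                    weights, values, F))
        F[i][j] = value                          # record the result
    return F[i][j]

# Usage sketch (assumed instance):
weights, values, W = [2, 1, 3, 2], [12, 10, 20, 15], 5
F = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(len(weights))]
print(mf_knapsack(len(weights), W, weights, values, F))   # -> 37
```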
Example-1
Apply the memory function method to the following instance of the knapsack problem, with capacity W = 5.
Example-2
Three Basic Examples

EXAMPLE 1 Coin-row problem:
• There is a row of n coins whose values are some positive integers c1, c2, . . . , cn, not necessarily distinct.
• The goal is to pick up the maximum amount of money subject to the constraint that no two coins adjacent in the initial row can be picked up.
Continue..
To derive a recurrence for F(n), we partition all the allowed coin selections into two groups: those that include the last coin and those without it.
• The largest amount we can get from the first group is equal to cn + F(n−2): the value of the nth coin plus the maximum amount we can pick up from the first n−2 coins.
• The maximum amount we can get from the second group is equal to F(n−1), by the definition of F(n).
• Thus, F(n) = max{cn + F(n−2), F(n−1)} for n > 1, with F(0) = 0 and F(1) = c1.
Algorithm
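A minimal Python sketch of this recurrence (using two rolling values instead of the full table the slide's pseudocode fills):

```python
def coin_row(c):
    """Coin-row problem: maximum amount with no two adjacent coins picked.
    c is the list of coin values c1..cn."""
    f_prev, f_curr = 0, 0                  # F(n-2) and F(n-1)
    for coin in c:
        # either take this coin plus F(n-2), or skip it and keep F(n-1)
        f_prev, f_curr = f_curr, max(coin + f_prev, f_curr)
    return f_curr

print(coin_row([5, 1, 2, 10, 6, 2]))       # -> 17 (coins 5, 10, 2)
```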
EXAMPLE 2 Change-making problem
Give change for amount n using the minimum number of coins of denominations d1 < d2 < . . . < dm.
Let F(n) be the minimum number of coins whose values add up to n; it is convenient to define F(0) = 0.
The amount n can only be obtained by adding one coin of denomination dj to the amount n − dj for j = 1, 2, . . . , m such that n ≥ dj. Therefore, we can consider all such denominations and select the one minimizing F(n − dj) + 1. Since 1 is a constant, we can, of course, find the smallest F(n − dj) first and then add 1 to it. Thus, F(n) = min{F(n − dj) : n ≥ dj} + 1 for n > 0.
Change-making Algorithm
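A minimal Python sketch of this recurrence, assuming d1 = 1 so that change always exists:

```python
def change_making(D, n):
    """Minimum number of coins of denominations D (d1 < ... < dm) adding to n.
    Assumes d1 = 1, so every amount can be formed."""
    INF = float('inf')
    F = [0] + [INF] * n                    # F(0) = 0
    for amount in range(1, n + 1):
        # best over all denominations that fit, plus one more coin
        F[amount] = 1 + min(F[amount - d] for d in D if d <= amount)
    return F[n]

print(change_making([1, 3, 4], 6))         # -> 2 (coins 3 + 3)
```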
Problem
EXAMPLE 3 Coin-collecting problem
• Several coins are placed in cells of an n × m board, no more than one coin per cell.
• A robot, located in the upper left cell of the board, needs to collect as many of the coins as possible and bring them to the bottom right cell.
• On each step, the robot can move either one cell to the right or one cell down from its current location. When the robot visits a cell with a coin, it always picks up that coin.
• Design an algorithm to find the maximum number of coins the robot can collect and a path it needs to follow to do this.
Let F(i, j) be the largest number of coins the robot can collect and bring to cell (i, j). Then
F(i, j) = max{F(i − 1, j), F(i, j − 1)} + cij,
where cij = 1 if there is a coin in cell (i, j), and cij = 0 otherwise.
Robot Coin Collection Algorithm
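A Python sketch of this recurrence, assuming the board is given as a 0/1 matrix C:

```python
def robot_coin_collection(C):
    """Maximum coins a robot can collect moving only right/down on board C."""
    n, m = len(C), len(C[0])
    F = [[0] * m for _ in range(n)]
    F[0][0] = C[0][0]
    for j in range(1, m):                  # first row: reachable only from the left
        F[0][j] = F[0][j - 1] + C[0][j]
    for i in range(1, n):
        F[i][0] = F[i - 1][0] + C[i][0]    # first column: reachable only from above
        for j in range(1, m):
            # arrive from above or from the left, whichever collected more
            F[i][j] = max(F[i - 1][j], F[i][j - 1]) + C[i][j]
    return F[n - 1][m - 1]
```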
Example 1
Example 2
Example 3
Greedy – General Method
• A greedy algorithm is an algorithm that always tries to find the best solution for each subproblem, with the hope that this will yield a good solution for the problem as a whole.
General method
• Given n inputs, choose a subset that satisfies some constraints.
• A subset that satisfies the constraints is called a feasible solution.
• We need to find the feasible solution that maximizes or minimizes a given objective function.
• A feasible solution that maximizes or minimizes a given (objective) function is said to be optimal.
• Often it is easy to find a feasible solution but difficult to find the optimal solution.
Greedy – General Method
• The greedy approach suggests
  – constructing a solution through a sequence of steps,
  – each expanding a partially constructed solution obtained so far,
  – until a complete solution to the problem is reached.
• On each step the choice made must be:
  – feasible, i.e., it has to satisfy the problem's constraints;
  – locally optimal, i.e., it has to be the best local choice among all feasible choices available on that step;
  – irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm.
Greedy Method

In the greedy method
• The algorithm is applied in stages, considering one input at a time.
• At each stage, a decision is made regarding whether a particular input belongs in an optimal solution.
• This is done by considering the inputs in an order determined by some selection procedure.
• If the inclusion of the next input into the partially constructed optimal solution would result in an infeasible solution, then this input is not added to the partial solution; otherwise it is added.
• This version of the greedy technique is called the subset paradigm.
Minimum Spanning Tree (MST)
• A spanning tree of a connected graph is its connected
acyclic subgraph (i.e., a tree) that contains all the
vertices of the graph.
• A minimum spanning tree of a weighted connected
graph is its spanning tree of the smallest weight,
where the weight of a tree is defined as the sum of the
weights on all its edges.
• The minimum spanning tree problem is the problem of
finding a minimum spanning tree for a given weighted
connected graph.

MST - Example

Prim's Algorithm
• Constructs a minimum spanning tree through a sequence of expanding subtrees.
• The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices.
• On each iteration we expand the current tree in the greedy manner by simply attaching to it the nearest vertex not in that tree (the nearest vertex is the one connected by an edge of smallest weight).
• The algorithm stops after all the vertices are visited.
Prim's Algorithm – Example
Example from Textbook

Tree vertices | Remaining vertices | Illustration
Example from Textbook (continued)

Tree vertices | Remaining vertices | Illustration
Prim's Algorithm
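The slide shows the textbook pseudocode; a Python sketch using heapq as the min-heap priority queue (with lazy deletion of stale entries in place of the decrease-key operation described on the slides) might look like this:

```python
import heapq

def prim(graph, start):
    """Prim's MST: grow the tree by repeatedly attaching the nearest fringe vertex.
    graph: adjacency lists, e.g. {'a': [('b', 3), ('e', 6)], ...}."""
    in_tree, mst = {start}, []
    fringe = []                            # min-heap of (weight, tree_vertex, fringe_vertex)
    for v, w in graph[start]:
        heapq.heappush(fringe, (w, start, v))
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)
        if v in in_tree:
            continue                       # stale entry: v was already attached
        in_tree.add(v)
        mst.append((u, v, w))              # edge chosen by the greedy step
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(fringe, (wx, v, x))
    return mst
```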
Analysis of Efficiency
• Depends on the data structures chosen
  – for the graph and for the priority queue of the set V − VT, whose vertex priorities are the distances to the nearest tree vertices.
• If the graph is represented by a weight matrix and the priority queue is implemented as an unordered array, the running time will be in Θ(|V|2).
• On each of the |V|−1 iterations, the array implementing the priority queue is traversed to find and delete the minimum and then to update, if necessary, the priorities of the remaining vertices.
Analysis of Efficiency
• We can implement the priority queue as a min-heap.
  – A min-heap is a complete binary tree in which every element is less than or equal to its children.
  – Deletion of the smallest element from and insertion of a new element into a min-heap of size n are O(log n) operations.
• If a graph is represented by adjacency lists and the priority queue is implemented as a min-heap, the running time is in O(|E| log |V|).
Why is the running time in O(|E| log |V|)?
• The algorithm performs |V|−1 deletions of the smallest element and makes |E| verifications
  – and, possibly, changes of an element's priority in a min-heap of size not exceeding |V|.
  – Each of these operations is an O(log |V|) operation.
• Hence, the running time of this implementation of Prim's algorithm is in
  (|V| − 1 + |E|) O(log |V|) = O(|E| log |V|),
  because, in a connected graph, |V| − 1 ≤ |E|.
Kruskal's Algorithm
• The algorithm constructs an MST as
  – an expanding sequence of subgraphs,
  – which are always acyclic
  – but are not necessarily connected at the intermediate stages of the algorithm.
Kruskal's Algorithm – Working
• Sort the graph's edges in nondecreasing order of their weights.
• Then, starting with the empty subgraph,
  – scan this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle, and simply skipping the edge otherwise.
Kruskal's Algorithm
• Kruskal's algorithm is not simpler than Prim's, because it has to check whether the addition of the next edge to the edges already selected would create a cycle.
Example from Textbook
Kruskal's MST Algorithm
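The pseudocode is an image on the slide; a Python sketch using the union-find check discussed in the analysis below (union by size with path compression) could look like this:

```python
def kruskal(n, edges):
    """Kruskal's MST with a simple union-find.
    n: number of vertices (0..n-1); edges: list of (weight, u, v)."""
    parent, size = list(range(n)), [1] * n

    def find(x):                           # root of x's tree, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # different trees: no cycle is created
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                # union the two trees
            size[ru] += size[rv]
            mst.append((u, v, w))
    return mst
```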
Analysis
• The crucial check of whether two vertices belong to the same tree can be carried out using union-find algorithms.
• The efficiency of Kruskal's algorithm depends on the time needed for sorting the edge weights of a given graph.
• With an efficient sorting algorithm, the time efficiency of Kruskal's algorithm will be in O(|E| log |E|).
Single Source Shortest Path
Problem Statement
• Given a graph and a source vertex in the graph, find the shortest paths from the source to all vertices in the given graph.
• Dijkstra's algorithm is the best-known algorithm for this problem.
• It is similar to Prim's algorithm.
• This algorithm is applicable to undirected and directed graphs with nonnegative weights only.
Dijkstra's Algorithm – Working
• First, it finds the shortest path from the source to a vertex nearest to it, then to a second nearest, and so on.
• In general, before its ith iteration commences, the algorithm has already identified the shortest paths to i−1 other vertices nearest to the source.
• These vertices, the source, and the edges of the shortest paths leading to them from the source form a subtree Ti of the given graph, shown in the figure.
• The next vertex nearest to the source can be found among the vertices adjacent to the vertices of Ti, called the "fringe vertices".
• They are the candidates from which Dijkstra's algorithm selects the next vertex nearest to the source.
• To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u,
  – the sum of the distance to the nearest tree vertex v and the length dv of the shortest path from the source to v (already computed!),
• and then selects the vertex with the smallest such sum.
• To facilitate the algorithm's operations, we label each vertex with two labels.
  – The numeric label d indicates the length of the shortest path from the source to this vertex found by the algorithm so far.
  – The other label indicates the parent of the vertex in the tree being constructed, so each vertex carries a label of the form (parent, d); the source has no parent.
• With such labelling, finding the next nearest vertex u* becomes a simple task of finding a fringe vertex with the smallest d value.
• After we have identified a vertex u* to be added to the tree, we need to perform two operations:
  1. Move u* from the fringe to the set of tree vertices.
  2. For each remaining fringe vertex u that is connected to u* by an edge of weight w(u*, u) such that du* + w(u*, u) < du, update the labels of u by u* and du* + w(u*, u), respectively.
• The shortest paths (identified by following nonnumeric labels backward from a destination vertex in the left column to the source) and their lengths (given by numeric labels of the tree vertices) are as follows:
Dijkstra's Algorithm
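The slide's pseudocode is an image; a Python sketch with a heapq-based priority queue (lazy deletion of stale entries instead of the decrease-key/label-update step described above) follows:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's single-source shortest paths (nonnegative weights only).
    graph: adjacency lists {vertex: [(neighbor, weight), ...]}.
    Returns (d, parent): the distance and parent labels from the slides."""
    d = {v: float('inf') for v in graph}
    parent = {v: None for v in graph}
    d[source] = 0
    fringe, done = [(0, source)], set()
    while fringe:
        du, u = heapq.heappop(fringe)      # fringe vertex with smallest d value
        if u in done:
            continue                       # stale entry
        done.add(u)                        # move u from the fringe to the tree
        for v, w in graph[u]:
            if du + w < d[v]:              # the label-update condition above
                d[v] = du + w
                parent[v] = u
                heapq.heappush(fringe, (d[v], v))
    return d, parent
```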
Analysis
• Efficiency is Θ(|V|2) for graphs represented by their weight matrix and the priority queue implemented as an unordered array.
• For graphs represented by their adjacency lists and the priority queue implemented as a min-heap, it is in O(|E| log |V|).
Home Work
• Apply Dijkstra's algorithm to the following graph. Consider a as the source vertex.
Optimal Tree Problem – Background
• Suppose we have to encode a text that comprises characters from some n-character alphabet.
• We encode by assigning to each of the text's characters some sequence of bits called the codeword.
• There are two types of encoding:
  – Fixed-length encoding
  – Variable-length encoding
Optimal Tree Problem – Fixed-length Coding
• This method assigns to each character a bit string of the same length m.
• Example: ASCII code.
• Drawback? Every character gets the same length, regardless of how frequently it occurs.
  – How to overcome the drawback?
  – By assigning shorter codewords to more frequent characters and longer codewords to less frequent characters.
Optimal Tree Problem – Variable-length Coding
• This method assigns codewords of different lengths to different characters.
• This introduces a problem: how can we tell how many bits of an encoded text represent the first character?
• To avoid this, prefix-free codes (prefix codes) are used; here no codeword is a prefix of the codeword of another character.
• We can simply scan a bit string until we get the first group of bits that is a codeword for some character, replace these bits by this character, and repeat this operation until the bit string's end is reached.
Optimal Tree Problem – Variable-length Coding
• If we want to create a binary prefix code for some alphabet,
• it is natural to associate the alphabet's characters with leaves of a binary tree in which all the left edges are labelled by 0 and all the right edges are labelled by 1 (or vice versa).
• Many trees can be constructed in this manner for a given alphabet.
• Since there is no simple path to a leaf that continues to another leaf, no codeword can be a prefix of another codeword; hence any such tree yields a prefix code.
Optimal Tree Problem
• For known frequencies of the character occurrences,
• we want to construct a tree that assigns
  – shorter bit strings to high-frequency characters and
  – longer ones to low-frequency characters.
• This can be done by a greedy algorithm invented by David Huffman.
Huffman Trees and Codes

Huffman's Algorithm
• Step 1: Initialize n one-node trees and label them with the characters of the alphabet. Record the frequency of each character in its tree's root to indicate the tree's weight.
• Step 2: Repeat the following operation until a single tree is obtained.
  – Find two trees with the smallest weights. Make them the left and right subtrees of a new tree and record the sum of their weights in the root of the new tree as its weight.
• A tree constructed by the above algorithm is called a Huffman tree. It defines, in the manner described above, a Huffman code.
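A Python sketch of the two steps above, using a min-heap of trees keyed by weight. The frequency table is the one used in the analysis slide below; note that the exact codewords can differ with tie-breaking, while the codeword lengths stay optimal:

```python
import heapq
from itertools import count

def huffman(freq):
    """Build Huffman codewords from a {symbol: frequency} map."""
    tiebreak = count()                     # keeps heap comparisons off tree nodes
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)                    # Step 1: n one-node trees
    while len(heap) > 1:                   # Step 2: merge two smallest-weight trees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}

    def walk(node, code):
        if isinstance(node, tuple):        # internal node: (left, right)
            walk(node[0], code + "0")      # left edges labelled 0
            walk(node[1], code + "1")      # right edges labelled 1
        else:
            codes[node] = code or "0"      # degenerate one-symbol alphabet

    walk(heap[0][2], "")
    return codes

print(huffman({'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}))
```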
Example

Huffman Codes
• The resulting codewords (for the frequencies used in the analysis below) are: A – 11, B – 100, C – 00, D – 01, _ – 101.
• DAD is encoded as 011101.
• 10011011011101 is decoded as BAD_AD.
Analysis
• Average number of bits per symbol in this code:
  2 · 0.35 + 3 · 0.1 + 2 · 0.2 + 2 · 0.2 + 3 · 0.15 = 2.25.
• For this example, the Huffman code achieves a compression ratio (a standard measure of a compression algorithm's effectiveness) of
  (3 − 2.25)/3 · 100% = 25%,
  since a fixed-length encoding of this five-symbol alphabet needs 3 bits per symbol.
• In other words, Huffman's encoding of the above text will use 25% less memory than its fixed-length encoding.
Home Work
• Represent the text "ABCDEBCDECDEDEE" using Huffman coding.
End of Module 4
