Greedy Technique and Algorithms
Introduction
The greedy approach suggests constructing a solution through a sequence of steps, each
expanding a partially constructed solution obtained so far, until a complete solution to
the problem is reached.
Optimal Substructure : an optimal solution to the overall problem can be constructed
from optimal solutions to its subproblems. In other words, solving subproblems optimally
contributes directly to the global optimum.
At each step, and this is the central point of this technique, the Greedy Choice must be:
1. Feasible — it must satisfy the problem’s constraints.
2. Locally optimal — it must be the best local choice among all feasible options
available at that step, without considering future consequences.
3. Irrevocable — once made, the choice cannot be changed in later steps of the
algorithm.
These requirements explain the name of the technique: at each step, it encourages a
“greedy” grab of the best alternative available, in the hope that a sequence of locally
optimal choices will result in a globally optimal solution.
Does a greedy strategy work well toward the optimal solution or not?
A greedy algorithm produces an optimal solution only if both the greedy choice and optimal
substructure hold. If these conditions are not met, the algorithm may not yield the best
result. From our algorithmic perspective, the question is whether such a greedy strategy
works or not. As we shall see, there are problems for which a sequence of locally optimal
choices does yield an optimal solution for every instance of the problem in question.
However, there are others for which this is not the case; for such problems, a greedy
algorithm can still be of value if we are interested in or have to be satisfied with an
approximate solution.
We need to prove that a greedy algorithm yields an optimal solution (when it does). There are three ways to do this:
One common way is to use mathematical induction: we show that a partially
constructed solution obtained by the greedy algorithm on each iteration can be
extended to an optimal solution to the problem.
Design and Analysis Algorithms(Greedy Algorithms) 1 Dr. Rafat Alhanani
The second way to prove optimality of a greedy algorithm is to show that on each step it
does at least as well as any other algorithm could in advancing toward the problem’s goal.
Finding the Minimum Number of Moves for a Chess Knight to Travel Across a
100×100 Board
The problem is to determine the minimum number of moves required for a chess knight to
move from one corner of a 100 × 100 chessboard to the diagonally opposite corner. The
knight moves in an L-shape, meaning it moves two squares either horizontally or
vertically, followed by one square in the perpendicular direction.
Greedy Solution:
Since the goal is to reach position (100,100) from (1,1), the approach is to move as close
to the target as possible in each step. The best way to achieve this is to jump in a way that
reduces the distance to the target as much as possible.
Proof of Optimality:
To measure how close we are to the goal, we use the Manhattan Distance, which is the
sum of the differences in the horizontal and vertical coordinates between two points:
Manhattan Distance = |x1 − x2| + |y1 − y2|
Initially, the Manhattan distance between (1,1) and (100,100) is:
|100 − 1| + |100 − 1| = 99 + 99 = 198
With each move, the knight can reduce this distance by at most 3. This is because each
knight move changes one coordinate by 2 and the other by 1, so the Manhattan distance
to the target can decrease by at most 2 + 1 = 3 per move.
The knight moves two squares in a given direction (either horizontally or vertically).
Then, it moves one square in the perpendicular direction.
Thus, if the knight is at position (x, y), any move will take it to one of the following
positions:
(x ± 2, y ± 1) or (x ± 1, y ± 2)
Dividing the total distance by the maximum possible reduction per move gives a lower
bound on the number of moves:
198 / 3 = 66
A sequence of 66 moves that achieves the full reduction of 3 on every move does exist
(for example, 33 jumps of (+1, +2) alternating with 33 jumps of (+2, +1)). Therefore the
minimum number of moves needed to reach (100,100) is exactly 66, and the greedy
approach, which always moves the knight in the most advantageous direction, yields an
optimal solution.
Conclusion:
• The greedy algorithm is efficient because it minimizes the Manhattan distance as
much as possible in every move.
• No solution can be better than 66 moves, as the knight cannot reduce the Manhattan
distance by more than 3 in any single move.
• Therefore, any other approach would still require at least 66 moves, proving that this
is the optimal solution.
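The bound above is easy to check by brute force. The sketch below (an illustration, using 0-indexed coordinates) runs a breadth-first search over the whole 100 × 100 board and confirms that 66 moves are both necessary and sufficient:

```python
from collections import deque

def knight_min_moves(n, start, goal):
    """Breadth-first search over an n x n board; returns the minimum
    number of knight moves from start to goal."""
    moves = [(2, 1), (2, -1), (-2, 1), (-2, -1),
             (1, 2), (1, -2), (-1, 2), (-1, -2)]
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return dist[(x, y)]
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return None

# Corners (1,1) and (100,100) in the text are (0,0) and (99,99) here.
print(knight_min_moves(100, (0, 0), (99, 99)))  # → 66
```

BFS explores positions in order of increasing move count, so the first time the target corner is reached its distance is provably minimal, matching the Manhattan-distance argument.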
The third way is simply to show that the final result obtained by a greedy algorithm is
optimal based on the algorithm’s output rather than the way it operates.
As an example, consider the problem of placing the maximum number of chips on an 8 ×
8 board so that no two chips occupy the same square or squares adjacent to each other
vertically, horizontally, or diagonally.
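A minimal sketch of this third proof style: we run a simple greedy placement and argue about its output. The board splits into sixteen disjoint 2 × 2 blocks, each of which can hold at most one chip, so no placement has more than 16 chips; the greedy scan below achieves exactly 16, so its output is optimal.

```python
def greedy_chips(n=8):
    """Scan the board in row-major order, placing a chip on every square
    that is not adjacent (including diagonally) to a chip already placed."""
    placed = set()
    for r in range(n):
        for c in range(n):
            if all((r + dr, c + dc) not in placed
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
                placed.add((r, c))
    return placed

chips = greedy_chips()
print(len(chips))  # → 16, matching the 2x2-block upper bound (8*8 / 4)
```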
Conceptual Background for Some Greedy Applications
Before listing the examples, let’s briefly clarify two foundational concepts that often arise
in greedy algorithms:
Given n points, connect them in the cheapest possible way so that there will be a path
between every pair of points. We can represent the points given by vertices of a graph,
possible connections by the graph’s edges, and the connection costs by the edge weights.
• Spanning Tree: In an undirected connected graph, a spanning tree is a subset of
edges that connects all vertices without forming any cycles.
• Minimum Spanning Tree (MST): Among all spanning trees, the MST is the one
with the minimum total edge weight. It represents the most cost-efficient way to
connect all nodes. MST is not about finding paths between specific nodes, but rather
about building an overall minimal-cost structure that connects everything.
Applications :
It has direct applications to the design of all kinds of networks including communication,
computer, transportation, and electrical, by providing the cheapest way to achieve
connectivity. It identifies clusters of points in data sets.
It has been used for classification purposes in archeology, biology, sociology, and other
sciences. It is also helpful for constructing approximate solutions to more difficult problems
such as the traveling salesman problem.
In contrast:
• Shortest Path refers to the minimum-cost path between two nodes in a graph. It
doesn’t aim to connect all nodes, but just to optimize a route from a source to a
destination (or to all destinations from a source).
These concepts, though different in objective, can both be solved using greedy strategies,
provided the problem satisfies the required properties.
Min-Heap :
What is the purpose of Min-Heap in Prim's algorithm?
We always need to extract the edge with the least weight quickly.
That is why we use a Min-Heap: it always keeps the smallest element
at the root.
Examples of Greedy Algorithms
1. Minimum Spanning Tree (MST) – Algorithms like Prim’s and Kruskal’s apply the
greedy method to incrementally build a minimum-cost tree.
2. Dijkstra’s Algorithm – Used to find the shortest path from a single source node to all
other nodes in a graph with non-negative edge weights. It greedily selects the node
with the smallest tentative distance at each step.
3. Coin Change Problem – Finding the minimum number of coins that make up a given amount.
Advantages and Disadvantages of Greedy Algorithms
✔ Efficiency: Greedy algorithms are typically faster and easier to implement than dynamic
programming or exhaustive search.
✖ Not Always Optimal: If the problem does not satisfy both the greedy choice property
and optimal substructure, the solution might not be globally optimal.
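The coin-change example makes this concrete. For canonical denominations such as {25, 10, 5, 1} the greedy choice (always take the largest coin that fits) is optimal, but for a system such as {1, 3, 4} it is not. A minimal sketch:

```python
def greedy_coins(amount, coins):
    """Repeatedly take the largest coin that still fits (the greedy choice)."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin   # take as many of this coin as possible
        amount %= coin
    return count if amount == 0 else None

# Canonical denominations: the greedy answer is optimal here.
print(greedy_coins(63, [25, 10, 5, 1]))  # → 6  (25+25+10+1+1+1)

# Denominations {1, 3, 4}: greedy pays 6 as 4+1+1 = 3 coins,
# but the optimal answer is 3+3 = 2 coins -- greedy is not optimal.
print(greedy_coins(6, [1, 3, 4]))  # → 3
```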
Greedy Algorithms
We will discuss two classic algorithms for the minimum spanning tree problem: Prim’s
algorithm and Kruskal’s algorithm. What is remarkable about these algorithms is the fact
that they solve the same problem by applying the greedy approach in two different ways,
and both of them always yield an optimal solution. In addition, we have another classic
greedy algorithm, Dijkstra’s algorithm, for the shortest-path problem in a weighted graph.
Prim’s Algorithm
How does it work ?
Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding
subtrees. It attaches vertices to the growing minimum spanning tree by repeatedly adding
the smallest-cost edge.
The initial subtree consists of a single vertex selected arbitrarily from the set V of the
graph’s vertices.
On each iteration, the algorithm expands the current tree (which is a single vertex in
the first iteration) in the greedy manner by simply attaching the nearest vertex to the
current tree. (The nearest vertex is the one connected to the tree by an edge of the
smallest weight whose addition does not create a cycle in the constructed tree.)
The algorithm stops after all the graph’s vertices have been included in the tree being
constructed. Since the algorithm expands a tree by exactly one vertex on each of its
iterations, the total number of such iterations is n − 1, where n is the number of vertices in
the graph. The tree generated by the algorithm is obtained as the set of edges used for the
tree expansions.
We provide information to Prim’s algorithm by attaching two labels to each vertex:
1. The name of the nearest vertex.
2. The length (the weight) of the corresponding edge.
Vertices that are not adjacent to any of the tree vertices can be given the ∞ label indicating
their “infinite” distance to the tree vertices and a null label for the name of the nearest tree
vertex.
[Figure: example of one Prim step — candidate edges ab(5), ac(8), ad(7); the smallest, ab(5), is selected, so the tree vertices become {a, b}.]
ALGORITHM Prim(G, start)
Input:
G: Weighted connected graph as adjacency list or matrix
start: starting vertex
Output:
T: Set of edges in the Minimum Spanning Tree (MST)
Initialize:
visited ← empty set
T ← empty set
minHeap ← priority queue ordered by edge weight
Add all edges from 'start' to minHeap
visited ← visited ∪ {start}
While minHeap is not empty AND |visited| < |V| do:
(weight, from, to) ← minHeap.pop() // edge with smallest weight
If to ∉ visited then
visited ← visited ∪ {to}
T ← T ∪ {(from, to, weight)}
For each neighbor (neighbor, w) of 'to' do
If neighbor ∉ visited then
minHeap.push((w, to, neighbor))
Return T
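Assuming the graph is given as an adjacency list (a dict of neighbor lists), the pseudocode above translates almost line for line into Python with the standard-library heapq module. The six-vertex graph below is reconstructed from the worked trace that follows:

```python
import heapq

def prim(graph, start):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns the MST as a list of (from, to, weight) edges."""
    visited = {start}
    mst = []
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        weight, frm, to = heapq.heappop(heap)   # edge with smallest weight
        if to in visited:
            continue                            # would create a cycle
        visited.add(to)
        mst.append((frm, to, weight))
        for neighbor, w in graph[to]:
            if neighbor not in visited:
                heapq.heappush(heap, (w, to, neighbor))
    return mst

# Example graph (an assumption, reconstructed from the trace below).
G = {
    'a': [('b', 3), ('f', 5), ('e', 6)],
    'b': [('a', 3), ('c', 1), ('f', 4)],
    'c': [('b', 1), ('f', 4), ('d', 6)],
    'd': [('c', 6), ('f', 5), ('e', 8)],
    'e': [('a', 6), ('f', 2), ('d', 8)],
    'f': [('a', 5), ('b', 4), ('c', 4), ('e', 2), ('d', 5)],
}
tree = prim(G, 'a')
print(sum(w for _, _, w in tree))  # → 15
```

The returned tree has |V| − 1 = 5 edges, as the algorithm guarantees.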
Prim’s Algorithm Implementation
To implement Prim’s algorithm we need a special data type called a priority queue.
What is a Priority Queue ?
It is an abstract data type that is commonly implemented using the binary heap data structure.
A heap is stored (represented) in memory as an array.
At index i:
• The parent is at ⌊(i − 1) / 2⌋
• The left child is at 2i + 1
• The right child is at 2i + 2
Heap Basic Operations:
1. Insert
• Add the element at the end of the array
• Then bubble it up to maintain the Min-Heap property
2. Extract Min
• Take the first element in the array (which is the root)
• Move the last element to the root position
• Then bubble it down as needed to maintain the heap structure
Why do we use a Binary Heap in Prim’s Algorithm?
Because it allows us to:
• Insert a new edge (information) in O(log n)
• Extract the edge with the smallest weight in O(log n)
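A minimal sketch of these operations with Python's heapq module, which maintains exactly this array layout:

```python
import heapq

heap = []
for edge in [(5, 'a', 'f'), (3, 'a', 'b'), (6, 'a', 'e')]:
    heapq.heappush(heap, edge)         # insert: append, then bubble up -- O(log n)

print(heap[0])                         # root holds the smallest edge: (3, 'a', 'b')
print(heap[(len(heap) - 2) // 2])      # parent of the last element: here, the root

smallest = heapq.heappop(heap)         # extract-min: root out, last element bubbles down
print(smallest)                        # → (3, 'a', 'b')
```

Tuples compare element by element, so ordering the entries as (weight, from, to) makes the heap order them by edge weight first.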
Let's go back to Prim’s algorithm
We applied it to the previous graph.
Now, let's see how the Min-Heap (or Priority Queue) changes step by step using a Binary
Heap in each iteration.
How do we use the Min-Heap?
In each step:
1. Extract-Min → get the edge with the smallest weight (the goal is to keep the
smallest edge at the root, where it will be used in the next round of Prim’s
algorithm)
2. Insert → add the new edges from the newly added node
3. Heapify → rearrange the heap so the root remains the smallest
Initialization:
We start from node a, add it to the minimum spanning tree T, and add all
edges incident to a to the Min-Heap:
Initial Heap state:
Heap = [(3, a, b), (5, a, f), (6, a, e)]
Visual representation of the heap:
(3, a, b)
/ \
(5, a, f) (6, a, e)
Step 1:
1. Extract-Min → take the smallest edge: (3, a, b)
• Remove the root (3, a, b) from the heap, add it to the minimum spanning tree
T.
• Move the last element of the heap (6, a, e) to the root.
Now the heap is:
Heap = [(6, a, e), (5, a, f)]
Temporary heap tree:
(6, a, e)
/
(5, a, f)
2. Heapify (Bubble Down)
Compare the new root (6, a, e) with its child (5, a, f).
Since 5 < 6, we swap them.
Updated heap: Heap = [(5, a, f), (6, a, e)]
Heap tree:
(5, a, f)
/
(6, a, e)
3. Add node b to the minimum spanning tree T.
Then insert the edges from b that go to unvisited nodes into the heap:
Edges: (1, b, c) and (4, b, f).
Heap = [(5, a, f), (6, a, e), (1, b, c), (4, b, f)]
Temporary Heap Tree:
(5, a, f)
/ \
(6, a, e) (1, b, c)
/
(4, b, f)
4. Heapify (bubble the inserted edges up) to restore the Min-Heap property:
Updated heap:
Heap = [(1, b, c), (4, b, f), (5, a, f), (6, a, e)]
Final heap structure for this step:
(1, b, c)
/ \
(4, b, f) (5, a, f)
/
(6, a, e)
Now the root contains the smallest edge, which will be used in the next round of Prim’s
algorithm.
……………………………………………………………………………..
Step 2:
Extract-Min → (1, b, c)
Add c to the minimum spanning tree T.
Add the edges from c: (4, c, f), (6, c, d) to the heap
Now (before reordering):
Heap = [(4, b, f), (5, a, f), (6, a, e), (4, c, f), (6, c, d)]
Reorder the heap to become:
Heap = [(4, b, f), (4, c, f), (6, a, e), (5, a, f), (6, c, d)]
(4, b, f)
/ \
(4, c, f) (6, a, e)
/ \
(5, a, f) (6, c, d)
………………………………………………………………………………………
Step 3:
Extract-Min → (4, b, f)
Add f to the minimum spanning tree T.
Add the edges from f: (2, f, e), (5, f, d) to the heap
After adding and reordering:
(2, f, e)
/ \
(4, c, f) (5, a, f)
/ \ /
(5, f, d) (6, c, d) (6, a, e)
……………………………………………………………………………………
Step 4:
Extract-Min → (2, f, e)
Add e to the minimum spanning tree T.
Add the edge from e: (8, e, d) to the heap
After adding:
(4, c, f)
/ \
(5, a, f) (6, a, e)
/ \ /
(5, f, d) (6, c, d) (8, e, d)
…………………………………………………………………………………….
Step 5:
Extract-Min → (4, c, f)
But f was visited previously, ignore it
Then Extract-Min → (5, a, f)
f was also visited previously, so ignore it
Then Extract-Min → (5, f, d)
Add d to the minimum spanning tree T as the last node
--------------------------------------------------------------------------------------------------
Kruskal’s Algorithm
The algorithm begins by sorting the graph’s edges in nondecreasing order of their
weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next
edge on the list to the current subgraph if such an inclusion does not create a cycle and
simply skipping the edge otherwise. Kruskal’s Algorithm follows the greedy approach:
in each iteration it finds the edge with the least weight and adds it to the
growing minimum spanning tree.
Algorithm steps :
1. Sort the graph’s edges in nondecreasing order of their weights.
2. Pick the smallest edge and check whether it forms a cycle with the minimum
spanning tree constructed so far. If it does not form a cycle, add it to the
minimum spanning tree; otherwise discard it.
3. Repeat step 2 until (V − 1) edges have been added to the minimum spanning
tree.
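The steps above can be sketched in Python, using a simple union-find (disjoint-set) structure for the cycle check; the edge list is the same six-vertex example traced for Prim's algorithm:

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v) tuples. Returns the MST edge list."""
    parent = {v: v for v in vertices}

    def find(v):
        # Find the root of v's component, halving the path as we go.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for weight, u, v in sorted(edges):        # step 1: nondecreasing order
        ru, rv = find(u), find(v)
        if ru != rv:                          # step 2: no cycle -> keep the edge
            parent[ru] = rv
            mst.append((u, v, weight))
        if len(mst) == len(vertices) - 1:     # step 3: V-1 edges -> done
            break
    return mst

E = [(3, 'a', 'b'), (5, 'a', 'f'), (6, 'a', 'e'), (1, 'b', 'c'),
     (4, 'b', 'f'), (4, 'c', 'f'), (6, 'c', 'd'), (2, 'f', 'e'),
     (5, 'f', 'd'), (8, 'e', 'd')]
tree = kruskal('abcdef', E)
print(sum(w for _, _, w in tree))  # → 15, the same total weight Prim's finds
```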
Complexity? Sorting the edges dominates the running time: O(E log E), which is the same as O(E log V). With a union-find structure, each cycle check takes near-constant amortized time.
Dijkstra’s Algorithm:
This algorithm has many variants; the most common one finds the shortest path from
the source vertex to all other vertices of the graph. It does not work on a graph that
contains negative-weight edges.
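A minimal sketch of why negative weights break the algorithm, on a hypothetical three-vertex graph: once a vertex is marked visited its distance is final, so a cheaper route discovered later through a negative edge is never applied.

```python
import heapq

def dijkstra(graph, start):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {v: float('inf') for v in graph}
    dist[start] = 0
    visited = set()
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)                     # u's distance is now final
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical graph: a->b costs 1, a->c costs 5, c->b costs -10.
# The true shortest distance to b is 5 + (-10) = -5.
G = {'a': [('b', 1), ('c', 5)], 'b': [], 'c': [('b', -10)]}
print(dijkstra(G, 'a')['b'])  # → 1 (wrong: b was finalized before c was explored)
```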
Single Source Shortest Path Algorithm:
Dijkstra follows the greedy approach: in each iteration it selects the unvisited vertex with the smallest known distance.
Applications include:
• Network routing
• GPS pathfinding
• Task scheduling
• Operating systems (e.g., process scheduling)
Algorithm Steps:
1. Select the source node (the initial vertex) and set its distance = 0.
2. Set the distance = ∞ for all other nodes.
3. Add all the nodes to an unvisited set.
4. While there are unvisited nodes:
a. Select the unvisited node with the smallest distance as current.
b. For each neighbor N of current:
- If the distance through current to N < the known distance to N:
Relaxation → update N's distance = (the distance of current +
the weight of the edge from current to N).
c. Remove current from the unvisited set and mark it as visited.
Example:
[Figure omitted — a worked run of Dijkstra’s algorithm on a sample graph with source a, recording for each vertex its tentative distance and nearest-predecessor label; unvisited vertices start with distance ∞.]
Complexity? With a simple linear scan for the unvisited node of smallest distance, O(V²); with a binary min-heap, O((V + E) log V).
Algorithm Dijkstra(G, start):
Input: Graph G with nodes and weighted edges, starting node start
Output: Shortest distances from start to all other nodes
for each node v in G:
dist[v] ← ∞
visited[v] ← false
dist[start] ← 0
while there exists an unvisited node:
u ← unvisited node with smallest dist[u]
visited[u] ← true
for each neighbor v of u:
if not visited[v]:
alt ← dist[u] + weight(u, v)
if alt < dist[v]:
dist[v] ← alt
return dist
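The same pseudocode realized in runnable Python, with a binary min-heap standing in for the linear scan over unvisited nodes; the graph is the six-vertex example from the Prim's algorithm trace (an assumption for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph: vertex -> list of (neighbor, weight)."""
    dist = {v: float('inf') for v in graph}
    dist[start] = 0
    visited = set()
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)     # unvisited node with smallest dist
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w        # relaxation
                heapq.heappush(heap, (dist[v], v))
    return dist

G = {
    'a': [('b', 3), ('f', 5), ('e', 6)],
    'b': [('a', 3), ('c', 1), ('f', 4)],
    'c': [('b', 1), ('f', 4), ('d', 6)],
    'd': [('c', 6), ('f', 5), ('e', 8)],
    'e': [('a', 6), ('f', 2), ('d', 8)],
    'f': [('a', 5), ('b', 4), ('c', 4), ('e', 2), ('d', 5)],
}
print(dijkstra(G, 'a'))  # → {'a': 0, 'b': 3, 'c': 4, 'd': 10, 'e': 6, 'f': 5}
```

Outdated heap entries are skipped by the `if u in visited` check (lazy deletion), a common alternative to a decrease-key operation.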