Algorithm Analysis
This document provides a detailed analysis of various algorithms, including their pseudocode, recurrence relations (if applicable), and time complexities for best, average, and worst-case scenarios.
1. Merge Sort
Merge Sort is a divide-and-conquer algorithm that recursively divides an array into two halves, sorts them, and then merges the sorted halves.
Pseudocode
MERGE_SORT(A, p, r)
    if p < r
        q = floor((p + r) / 2)
        MERGE_SORT(A, p, q)
        MERGE_SORT(A, q + 1, r)
        MERGE(A, p, q, r)

MERGE(A, p, q, r)
    n1 = q - p + 1
    n2 = r - q
    create L[1..n1+1] and R[1..n2+1]
    for i = 1 to n1
        L[i] = A[p + i - 1]
    for j = 1 to n2
        R[j] = A[q + j]
    L[n1 + 1] = infinity
    R[n2 + 1] = infinity
    i = 1
    j = 1
    for k = p to r
        if L[i] <= R[j]
            A[k] = L[i]
            i = i + 1
        else
            A[k] = R[j]
            j = j + 1
Recurrence Relation
T(n) = 2T(n/2) + O(n)
Time Complexity
● Best Case: O(n log n)
● Average Case: O(n log n)
● Worst Case: O(n log n)
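As an illustrative sketch, the pseudocode above translates to Python roughly as follows (returning a new sorted list, and merging without the infinity sentinels):

```python
def merge_sort(a):
    # Base case: a list of zero or one elements is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```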
2. Insertion Sort
Insertion Sort builds the final sorted array (or list) one item at a time. It iterates
through the input elements and grows a sorted output list. Each element is removed
from the input data, and inserted into the correct position in the already sorted list.
Pseudocode
INSERTION_SORT(A)
    for j = 2 to A.length
        key = A[j]
        i = j - 1
        while i > 0 and A[i] > key
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
Recurrence Relation
Not typically expressed with a recurrence relation due to its iterative nature; the running time is analyzed directly.
Time Complexity
● Best Case: O(n) (when the array is already sorted)
● Average Case: O(n^2)
● Worst Case: O(n^2) (when the array is sorted in reverse order)
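A direct Python rendering of the pseudocode, sorting in place with 0-indexed lists:

```python
def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements larger than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
```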
3. Quick Sort
Quick Sort is an efficient, comparison-based, in-place sorting algorithm. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. The sub-arrays are then sorted recursively.
Pseudocode
QUICK_SORT(A, p, r)
    if p < r
        q = PARTITION(A, p, r)
        QUICK_SORT(A, p, q - 1)
        QUICK_SORT(A, q + 1, r)

PARTITION(A, p, r)
    x = A[r] // pivot
    i = p - 1
    for j = p to r - 1
        if A[j] <= x
            i = i + 1
            exchange A[i] with A[j]
    exchange A[i + 1] with A[r]
    return i + 1
Recurrence Relation
● Worst Case: T(n) = T(n−1) + T(0) + O(n) = T(n−1) + O(n)
● Best/Average Case: T(n) = 2T(n/2) + O(n)
Time Complexity
● Best Case: O(n log n) (pivot is always the median)
● Average Case: O(n log n)
● Worst Case: O(n^2) (pivot is always the smallest or largest element)
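The Lomuto partition scheme from the pseudocode, sketched in Python with 0-indexed lists:

```python
def quick_sort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)
    return a

def partition(a, p, r):
    x = a[r]  # pivot: last element of the subarray
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    # Put the pivot between the two partitions.
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1
```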
4. Selection Sort
Selection Sort sorts an array by repeatedly finding the minimum element (considering ascending order) from the unsorted part and putting it at the beginning. The algorithm maintains two sub-arrays in a given array:
1. The sub-array which is already sorted.
2. The remaining sub-array which is unsorted.
Pseudocode
SELECTION_SORT(A)
    n = A.length
    for i = 1 to n - 1
        min_idx = i
        for j = i + 1 to n
            if A[j] < A[min_idx]
                min_idx = j
        // Swap the found minimum element with the first element
        exchange A[min_idx] with A[i]
Recurrence Relation
Not typically expressed with a recurrence relation due to its iterative nature.
Time Complexity
● Best Case: O(n^2)
● Average Case: O(n^2)
● Worst Case: O(n^2)
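A Python sketch of the pseudocode above (0-indexed, in place):

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        # Find the index of the minimum of the unsorted suffix a[i:].
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # Swap it into position i.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```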
5. Bubble Sort
Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.
Pseudocode
BUBBLE_SORT(A)
    n = A.length
    for i = 1 to n - 1
        // Last i elements are already in place
        for j = 1 to n - i
            if A[j] > A[j + 1]
                exchange A[j] with A[j + 1]
Recurrence Relation
Not typically expressed with a recurrence relation due to its iterative nature.
Time Complexity
● Best Case: O(n) (when the array is already sorted)
● Average Case: O(n^2)
● Worst Case: O(n^2) (when the array is sorted in reverse order)
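A Python sketch; the swapped flag implements the "repeat until no swaps are needed" rule, which is what gives the O(n) best case on an already sorted array:

```python
def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # Last i elements are already in place.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break  # no swaps in a full pass: the list is sorted
    return a
```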
6. Bucket Sort
Bucket Sort is a non-comparison sorting algorithm that distributes elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sort algorithm.
Pseudocode
BUCKET_SORT(A)
    n = A.length
    create n empty buckets B[0...n-1]
    for i = 1 to n
        insert A[i] into bucket B[floor(n * A[i])] // Assuming A[i] are in [0, 1)
    for i = 0 to n-1
        sort bucket B[i] with INSERTION_SORT
    concatenate the buckets B[0...n-1] into A
Recurrence Relation
Not typically expressed with a recurrence relation due to its non-recursive nature and dependence on the distribution of elements.
Time Complexity
● Best Case: O(n + k) (where k is the number of buckets, and elements are uniformly distributed)
● Average Case: O(n + k) (uniform distribution)
● Worst Case: O(n^2) (all elements fall into a single bucket)
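A Python sketch, keeping the pseudocode's assumption that inputs lie in [0, 1); the built-in sort stands in for INSERTION_SORT on each bucket:

```python
def bucket_sort(a):
    n = len(a)
    if n == 0:
        return a
    # Distribute elements into n buckets by their scaled value.
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(n * x)].append(x)
    # Sort each bucket individually, then concatenate.
    for b in buckets:
        b.sort()
    return [x for b in buckets for x in b]
```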
7. Radix Sort
Radix Sort is a non-comparison integer sorting algorithm that sorts data with integer keys by grouping keys by individual digits which share the same significant position and value.
Pseudocode
RADIX_SORT(A, d) // A is the array, d is the number of digits
    for i = 1 to d
        // Sort array A using counting sort on digit i
        COUNTING_SORT_BY_DIGIT(A, i)

COUNTING_SORT_BY_DIGIT(A, digit)
    n = A.length
    create array B[1...n] and C[0...k], with C initialized to 0
    // (k is the max digit value, e.g., 9 for decimal)
    for j = 1 to n
        C[get_digit(A[j], digit)] = C[get_digit(A[j], digit)] + 1
    for i = 1 to k
        C[i] = C[i] + C[i-1]
    for j = n down to 1 // downward scan keeps the sort stable
        B[C[get_digit(A[j], digit)]] = A[j]
        C[get_digit(A[j], digit)] = C[get_digit(A[j], digit)] - 1
    for j = 1 to n
        A[j] = B[j]
Recurrence Relation
Not typically expressed with a recurrence relation.
Time Complexity
● Best Case: O(d(n + k)) (where d is the number of digits, n is the number of elements, and k is the base of the numbers, e.g., 10 for decimal)
● Average Case: O(d(n + k))
● Worst Case: O(d(n + k))
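A least-significant-digit radix sort in Python following the pseudocode, for non-negative integers; the number of passes is derived from the maximum value rather than taken as a parameter d:

```python
def radix_sort(a, base=10):
    if not a:
        return a
    exp = 1
    # One stable counting-sort pass per digit, least significant first.
    while max(a) // exp > 0:
        a = counting_sort_by_digit(a, exp, base)
        exp *= base
    return a

def counting_sort_by_digit(a, exp, base):
    count = [0] * base
    for x in a:
        count[(x // exp) % base] += 1
    # Prefix sums turn counts into final positions.
    for i in range(1, base):
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):  # reverse scan keeps the sort stable
        d = (x // exp) % base
        count[d] -= 1
        out[count[d]] = x
    return out
```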
8. 8 Queens Problem
The Eight Queens Puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other. Thus, a solution requires that no two queens share the same row, column, or diagonal. This is typically solved using backtracking.
Pseudocode
SOLVE_N_QUEENS(board_size)
    board = create board_size x board_size empty board
    SOLVE_QUEENS_UTIL(board, 0, board_size)

SOLVE_QUEENS_UTIL(board, row, board_size)
    if row >= board_size
        print board // Found a solution
        return true
    for col = 0 to board_size - 1
        if IS_SAFE(board, row, col, board_size)
            board[row][col] = 1 // Place queen
            if SOLVE_QUEENS_UTIL(board, row + 1, board_size) == true
                return true // A solution was found in the remaining rows
            board[row][col] = 0 // Backtrack: remove queen
    return false // No safe column in this row

IS_SAFE(board, row, col, board_size)
    // Queens are placed one per row from the top, so only the rows
    // above the current one need to be checked.
    // Check this column in the rows above
    for i = 0 to row - 1
        if board[i][col] == 1
            return false
    // Check upper-left diagonal
    for i = row - 1, j = col - 1; i >= 0 and j >= 0; i--, j--
        if board[i][j] == 1
            return false
    // Check upper-right diagonal
    for i = row - 1, j = col + 1; i >= 0 and j < board_size; i--, j++
        if board[i][j] == 1
            return false
    return true
Recurrence Relation
The recurrence relation for the N-Queens problem is complex due to the pruning nature of backtracking. It is not a simple divide-and-conquer recurrence. The search space is N^N, but pruning significantly reduces this.
Time Complexity
● Best Case: Not well-defined, as it is a search problem. The first solution found might be considered "best".
● Average Case: Highly dependent on the implementation and board size, but generally exponential.
● Worst Case: O(N!) or O(N^N) in the worst case without pruning. With effective pruning, it is closer to O(N!) because each queen must be in a different column.
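A compact Python sketch of the backtracking above; instead of a full board it keeps one column index per row, which makes the safety check a simple scan over the rows already filled:

```python
def solve_n_queens(n):
    # cols[row] = column of the queen placed in that row (-1 = empty).
    cols = [-1] * n

    def is_safe(row, col):
        for r in range(row):
            c = cols[r]
            # Same column, or same diagonal (|Δcol| == Δrow)?
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def place(row):
        if row == n:
            return True  # all queens placed
        for col in range(n):
            if is_safe(row, col):
                cols[row] = col
                if place(row + 1):
                    return True
                cols[row] = -1  # backtrack
        return False

    return cols if place(0) else None
```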
9. Tower of Hanoi
The Tower of Hanoi is a mathematical puzzle. It consists of three rods and a number of disks of different sizes, which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, obeying the following simple rules:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod.
3. No disk may be placed on top of a smaller disk.
Pseudocode
TOWER_OF_HANOI(n, source, auxiliary, destination)
    if n == 1
        print "Move disk 1 from " + source + " to " + destination
        return
    TOWER_OF_HANOI(n - 1, source, destination, auxiliary)
    print "Move disk " + n + " from " + source + " to " + destination
    TOWER_OF_HANOI(n - 1, auxiliary, source, destination)
Recurrence Relation
T(n) = 2T(n−1) + 1
Base case: T(1) = 1
Time Complexity
● Best Case: O(2^n)
● Average Case: O(2^n)
● Worst Case: O(2^n)
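A runnable version of the pseudocode that collects the moves in a list instead of printing them; the list length makes the 2^n − 1 move count easy to check:

```python
def tower_of_hanoi(n, source, auxiliary, destination, moves=None):
    if moves is None:
        moves = []
    if n == 1:
        moves.append((1, source, destination))
        return moves
    # Move the top n-1 disks out of the way, onto the auxiliary rod.
    tower_of_hanoi(n - 1, source, destination, auxiliary, moves)
    # Move the largest disk to its destination.
    moves.append((n, source, destination))
    # Move the n-1 disks from the auxiliary rod onto the largest disk.
    tower_of_hanoi(n - 1, auxiliary, source, destination, moves)
    return moves
```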
10. Floyd Warshall Algorithm
The Floyd Warshall Algorithm is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices.
Pseudocode
FLOYD_WARSHALL(graph)
    n = number of vertices in graph
    dist = create n x n matrix, initialized with graph weights
    // Initialize dist with direct edge weights or infinity if no direct edge
    // dist[i][i] = 0
    for k = 1 to n
        for i = 1 to n
            for j = 1 to n
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist
Recurrence Relation
Not typically expressed with a recurrence relation, as it's an iterative dynamic programming algorithm.
Time Complexity
● Best Case: O(V^3) (where V is the number of vertices)
● Average Case: O(V^3)
● Worst Case: O(V^3)
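A Python sketch of the triple loop above; the dict-of-dicts graph representation is an assumed interface, not part of the original pseudocode:

```python
def floyd_warshall(weights):
    # weights[u][v] = weight of edge u -> v; vertices are the dict keys.
    INF = float('inf')
    vs = list(weights)
    # Initialize dist with direct edge weights, 0 on the diagonal,
    # infinity where there is no direct edge.
    dist = {u: {v: (0 if u == v else weights[u].get(v, INF)) for v in vs}
            for u in vs}
    for k in vs:
        for i in vs:
            for j in vs:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```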
11. Knapsack Problem (0/1 Knapsack)
Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. The 0/1 Knapsack problem means each item can either be taken or not taken (no fractions).
Pseudocode (Dynamic Programming)
KNAPSACK(W, wt, val, n)
    // W: maximum weight capacity
    // wt: array of weights of items
    // val: array of values of items
    // n: number of items
    K = create (n+1) x (W+1) matrix
    for i = 0 to n
        for w = 0 to W
            if i == 0 or w == 0
                K[i][w] = 0
            else if wt[i-1] <= w
                K[i][w] = max(val[i-1] + K[i-1][w - wt[i-1]], K[i-1][w])
            else
                K[i][w] = K[i-1][w]
    return K[n][W]
Recurrence Relation
K[i][w] = max(val[i−1] + K[i−1][w − wt[i−1]], K[i−1][w])  if wt[i−1] ≤ w
K[i][w] = K[i−1][w]  if wt[i−1] > w
Base cases: K[0][w] = 0 for all w, and K[i][0] = 0 for all i.
Time Complexity
● Best Case: O(nW)
● Average Case: O(nW)
● Worst Case: O(nW) (where n is the number of items and W is the knapsack capacity)
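The dynamic programming table above, filled in Python:

```python
def knapsack(W, wt, val):
    n = len(wt)
    # K[i][w] = best value achievable with the first i items
    # and capacity w.
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if wt[i - 1] <= w:
                # Take item i-1 or leave it, whichever is better.
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]],
                              K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]  # item does not fit
    return K[n][W]
```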
12. Huffman Coding
Huffman Coding is a data compression algorithm. It uses a specific method for choosing the representation for each symbol, resulting in a prefix code that is optimal when the frequencies of occurrence of symbols are known.
Pseudocode
HUFFMAN(C) // C is a set of characters and their frequencies
    n = |C|
    Q = C // Min-priority queue of nodes, initially containing leaves for each character
    for i = 1 to n - 1
        z = new node
        x = EXTRACT_MIN(Q)
        y = EXTRACT_MIN(Q)
        z.left = x
        z.right = y
        z.freq = x.freq + y.freq
        INSERT(Q, z)
    return EXTRACT_MIN(Q) // Root of the Huffman tree
Recurrence Relation
Not typically expressed with a recurrence relation, as it's an iterative greedy algorithm building a tree.
Time Complexity
● Best Case: O(n log n)
● Average Case: O(n log n)
● Worst Case: O(n log n) (where n is the number of unique characters; dominated by priority queue operations)
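A Python sketch using heapq as the min-priority queue; representing internal nodes as (left, right) tuples and adding a tie-breaking counter are implementation choices of this sketch, not part of the pseudocode:

```python
import heapq

def huffman_codes(freqs):
    # freqs: dict symbol -> frequency. Returns dict symbol -> bit string.
    # The counter breaks frequency ties so heapq never compares nodes.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        fx, _, x = heapq.heappop(heap)
        fy, _, y = heapq.heappop(heap)
        # Merge the two lowest-frequency nodes into one internal node.
        heapq.heappush(heap, (fx + fy, counter, (x, y)))
        counter += 1
    codes = {}

    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: (left, right)
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:  # leaf symbol
            codes[node] = prefix or '0'

    walk(heap[0][2], '')
    return codes
```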
13. Job Sequencing with Deadlines
Given a set of jobs, each with a deadline and a profit, find a subset of jobs such that each job in the subset is completed by its deadline and the total profit is maximized. Each job takes unit time.
Pseudocode (Greedy Approach)
JOB_SEQUENCING(jobs) // jobs is a list of (id, deadline, profit) tuples
    Sort jobs in decreasing order of profit
    max_deadline = find maximum deadline among all jobs
    result = array of size max_deadline, initialized with empty slots
    count_jobs = 0
    total_profit = 0
    for each job in sorted jobs:
        // Find a free slot for this job, starting from its deadline backwards
        for t = job.deadline down to 1:
            if result[t-1] is empty:
                result[t-1] = job.id
                total_profit = total_profit + job.profit
                count_jobs = count_jobs + 1
                break // Job placed, move to next job
    return (count_jobs, total_profit, result)
Recurrence Relation
Not applicable, as it's a greedy iterative algorithm.
Time Complexity
● Best Case: O(N log N) (dominated by sorting)
● Average Case: O(N log N + N·M), where M is the maximum deadline; in the worst case M can be N, giving O(N^2)
● Worst Case: O(N^2) (if M is large; O(N log N) if a disjoint set data structure is used for slot finding)
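The greedy approach above in Python, with the linear backward scan for a free slot:

```python
def job_sequencing(jobs):
    # jobs: list of (id, deadline, profit) tuples.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(j[1] for j in jobs)
    slots = [None] * max_deadline  # slots[t] = job scheduled at time t+1
    total_profit = 0
    for jid, deadline, profit in jobs:
        # Try the latest free slot at or before the job's deadline.
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[t] is None:
                slots[t] = jid
                total_profit += profit
                break  # job placed, move to the next job
    return total_profit, slots
```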
14. Kruskal's Algorithm
Kruskal's Algorithm is a greedy algorithm to find a Minimum Spanning Tree (MST) for a connected, undirected graph. It finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized.
Pseudocode
KRUSKAL(G) // G is a graph with vertices V and edges E
    MST = empty set
    Sort all edges in E in non-decreasing order of their weight
    For each vertex v in V:
        MAKE_SET(v) // Each vertex is its own set
    For each edge (u, v) from E (in sorted order):
        if FIND_SET(u) != FIND_SET(v):
            Add (u, v) to MST
            UNION_SETS(u, v)
    return MST
(Assumes a Disjoint Set Union (DSU) data structure for MAKE_SET, FIND_SET, UNION_SETS)
Recurrence Relation
Not applicable, as it's an iterative greedy algorithm.
Time Complexity
● Best Case: O(E log E) or O(E log V) (dominated by sorting the edges, where E is the number of edges and V is the number of vertices)
● Average Case: O(E log E) or O(E log V)
● Worst Case: O(E log E) or O(E log V)
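A Python sketch with a minimal union-find standing in for the DSU (path compression only, no union by rank, which is enough for illustration):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v) tuples with vertices 0..n-1.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):  # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:  # edge joins two different components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```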
15. Prim's Algorithm
Prim's Algorithm is a greedy algorithm that finds a Minimum Spanning Tree (MST) for a weighted undirected graph. It grows the MST by adding vertices one by one into a growing tree.
Pseudocode
PRIM(G, start_vertex) // G is a graph, start_vertex is the initial vertex
    MST_edges = empty set
    min_cost = array of size |V|, initialized with infinity
    parent = array of size |V|, initialized with null
    visited = array of size |V|, initialized with false
    min_cost[start_vertex] = 0
    Q = Min-Priority Queue, add all vertices with their min_cost as key
    while Q is not empty:
        u = EXTRACT_MIN(Q) // Vertex with the smallest min_cost
        visited[u] = true
        for each neighbor v of u:
            if not visited[v] and weight(u, v) < min_cost[v]:
                min_cost[v] = weight(u, v)
                parent[v] = u
                DECREASE_KEY(Q, v, min_cost[v]) // Update priority in Q
    // Reconstruct MST from parent array or add edges as they are selected
    // For each v != start_vertex, add edge (v, parent[v]) to MST_edges
    return MST_edges
Recurrence Relation
Not applicable, as it's an iterative greedy algorithm.
Time Complexity
● Best Case: O(E log V) or O(E + V log V) (using a Fibonacci heap)
● Average Case: O(E log V) or O(E + V log V)
● Worst Case: O(E log V) (using a binary heap) or O(E + V log V) (using a Fibonacci heap)
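A Python sketch that returns the MST weight; since heapq has no DECREASE_KEY, it substitutes the standard lazy-deletion pattern (push duplicates, skip stale entries on pop):

```python
import heapq

def prim(adj, start=0):
    # adj: dict u -> list of (v, weight) pairs; undirected graph.
    visited = set()
    total = 0
    heap = [(0, start)]  # (cost to connect vertex, vertex)
    while heap:
        w, u = heapq.heappop(heap)
        if u in visited:
            continue  # stale entry, a cheaper edge already added u
        visited.add(u)
        total += w
        for v, wv in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (wv, v))
    return total
```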
16. Dijkstra's Algorithm
Dijkstra's Algorithm is an algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. It works for graphs with non-negative edge weights.
Pseudocode
DIJKSTRA(G, source) // G is a graph, source is the starting vertex
    dist = array of size |V|, initialized with infinity
    dist[source] = 0
    prev = array of size |V|, initialized with null // To reconstruct path
    Q = Min-Priority Queue, add all vertices with their dist as key
    while Q is not empty:
        u = EXTRACT_MIN(Q) // Vertex with the smallest dist
        if dist[u] == infinity
            break // Remaining vertices are unreachable
        for each neighbor v of u:
            alt = dist[u] + weight(u, v)
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = u
                DECREASE_KEY(Q, v, alt) // Update priority in Q
    return dist, prev
Recurrence Relation
Not applicable, as it's an iterative greedy algorithm.
Time Complexity
● Best Case: O(E log V) (using a binary heap) or O(E + V log V) (using a Fibonacci heap)
● Average Case: O(E log V) or O(E + V log V)
● Worst Case: O(E log V) (using a binary heap) or O(E + V log V) (using a Fibonacci heap)
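A Python sketch; as with Prim's algorithm, lazy deletion replaces DECREASE_KEY because heapq does not support it:

```python
import heapq

def dijkstra(adj, source):
    # adj: dict u -> list of (v, weight); weights must be non-negative.
    dist = {u: float('inf') for u in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry: u was already settled more cheaply
        for v, w in adj[u]:
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(heap, (alt, v))
    return dist
```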
17. Bellman-Ford Algorithm
The Bellman-Ford Algorithm computes shortest paths from a single source vertex to all other vertices in a weighted digraph. It is more versatile than Dijkstra's algorithm because it is capable of handling graphs in which some of the edge weights are negative numbers.
Pseudocode
BELLMAN_FORD(G, source) // G is a graph, source is the starting vertex
    dist = array of size |V|, initialized with infinity
    dist[source] = 0
    // Relax edges |V| - 1 times
    for i = 1 to |V| - 1:
        for each edge (u, v) with weight w in G.edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    // Check for negative cycles
    for each edge (u, v) with weight w in G.edges:
        if dist[u] + w < dist[v]:
            print "Graph contains negative cycle"
            return false // Or handle error
    return true, dist
Recurrence Relation
Not applicable, as it's an iterative algorithm.
Time Complexity
● Best Case: O(V⋅E)
● Average Case: O(V⋅E)
● Worst Case: O(V⋅E) (where V is the number of vertices and E is the number of edges)
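A Python sketch of the relaxation loops above; the dist[u] != INF guard avoids relaxing edges out of still-unreachable vertices:

```python
def bellman_ford(n, edges, source):
    # edges: list of (u, v, w) tuples; vertices 0..n-1.
    # Returns (ok, dist); ok is False if a negative cycle is reachable.
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge |V| - 1 times.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return False, dist
    return True, dist
```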
18. Breadth-First Search (BFS)
Breadth-First Search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a 'search key'), and explores all of the neighbor nodes at the present depth before moving on to the nodes at the next depth level.
Pseudocode
BFS(G, start_node) // G is a graph, start_node is the starting vertex
    create a queue Q
    create a set visited_nodes
    Q.enqueue(start_node)
    visited_nodes.add(start_node)
    while Q is not empty:
        u = Q.dequeue()
        print u // Process node u
        for each neighbor v of u:
            if v is not in visited_nodes:
                visited_nodes.add(v)
                Q.enqueue(v)
Recurrence Relation
Not applicable, as it's an iterative algorithm.
Time Complexity
● Best Case: O(V + E) (when the target node is found quickly, but all reachable nodes are still visited)
● Average Case: O(V + E)
● Worst Case: O(V + E) (where V is the number of vertices and E is the number of edges)
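The pseudocode in Python, returning the visit order instead of printing; collections.deque provides the O(1) queue operations:

```python
from collections import deque

def bfs(adj, start):
    # adj: dict u -> list of neighbors. Returns vertices in dequeue order.
    visited = {start}
    order = []
    q = deque([start])
    while q:
        u = q.popleft()
        order.append(u)  # process node u
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                q.append(v)
    return order
```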
19. Depth-First Search (DFS)
Depth-First Search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root (or some arbitrary node) and explores as far as possible along each branch before backtracking.
Pseudocode
DFS(G, start_node) // G is a graph, start_node is the starting vertex
    create a stack S
    create a set visited_nodes
    S.push(start_node)
    visited_nodes.add(start_node)
    while S is not empty:
        u = S.pop()
        print u // Process node u
        for each neighbor v of u: // Order of neighbors can affect path, but not complexity
            if v is not in visited_nodes:
                visited_nodes.add(v)
                S.push(v)

// Recursive version (more common for explanation)
DFS_RECURSIVE(G, u, visited_nodes)
    visited_nodes.add(u)
    print u // Process node u
    for each neighbor v of u:
        if v is not in visited_nodes:
            DFS_RECURSIVE(G, v, visited_nodes)

// Initial call:
// visited_nodes = empty set
// for each vertex u in G:
//     if u is not in visited_nodes:
//         DFS_RECURSIVE(G, u, visited_nodes)
Recurrence Relation
Not typically expressed with a recurrence relation for its iterative or recursive traversal nature, but rather analyzed directly.
Time Complexity
● Best Case: O(V + E) (when the target node is found quickly, but all reachable nodes are still visited)
● Average Case: O(V + E)
● Worst Case: O(V + E) (where V is the number of vertices and E is the number of edges)
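The recursive version in Python, returning the visit order instead of printing:

```python
def dfs_recursive(adj, u, visited=None, order=None):
    # adj: dict u -> list of neighbors.
    if visited is None:
        visited, order = set(), []
    visited.add(u)
    order.append(u)  # process node u
    for v in adj[u]:
        if v not in visited:
            dfs_recursive(adj, v, visited, order)
    return order
```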
20. Strassen's Matrix Multiplication
Strassen's Algorithm is a divide-and-conquer algorithm for matrix multiplication. It is asymptotically faster than the standard matrix multiplication algorithm for large matrices.
Pseudocode
STRASSEN_MULTIPLY(A, B)
    n = A.rows // Assume A and B are n x n matrices, n is a power of 2
    C = new n x n matrix
    if n == 1
        C[0][0] = A[0][0] * B[0][0]
        return C
    // Divide A, B, C into n/2 x n/2 sub-matrices
    A11, A12, A21, A22 = sub-matrices of A
    B11, B12, B21, B22 = sub-matrices of B
    C11, C12, C21, C22 = sub-matrices of C
    // Calculate 7 products recursively
    P1 = STRASSEN_MULTIPLY(A11 + A22, B11 + B22)
    P2 = STRASSEN_MULTIPLY(A21 + A22, B11)
    P3 = STRASSEN_MULTIPLY(A11, B12 - B22)
    P4 = STRASSEN_MULTIPLY(A22, B21 - B11)
    P5 = STRASSEN_MULTIPLY(A11 + A12, B22)
    P6 = STRASSEN_MULTIPLY(A21 - A11, B11 + B12)
    P7 = STRASSEN_MULTIPLY(A12 - A22, B21 + B22)
    // Combine products to get C sub-matrices
    C11 = P1 + P4 - P5 + P7
    C12 = P3 + P5
    C21 = P2 + P4
    C22 = P1 - P2 + P3 + P6
    return C // Recombine C11, C12, C21, C22 into C
Recurrence Relation
T(n) = 7T(n/2) + O(n^2)
Base case: T(1) = O(1)
Time Complexity
● Best Case: O(n^(log2 7)) ≈ O(n^2.807)
● Average Case: O(n^(log2 7)) ≈ O(n^2.807)
● Worst Case: O(n^(log2 7)) ≈ O(n^2.807)
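A Python sketch of the pseudocode for n-by-n lists of lists with n a power of 2; the quad/add/sub helpers are local conveniences of this sketch, not part of the pseudocode:

```python
def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, r, c):  # extract the h x h sub-matrix at (r, c)
        return [row[c:c + h] for row in M[r:r + h]]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # The 7 recursive products.
    P1 = strassen(add(A11, A22), add(B11, B22))
    P2 = strassen(add(A21, A22), B11)
    P3 = strassen(A11, sub(B12, B22))
    P4 = strassen(A22, sub(B21, B11))
    P5 = strassen(add(A11, A12), B22)
    P6 = strassen(sub(A21, A11), add(B11, B12))
    P7 = strassen(sub(A12, A22), add(B21, B22))
    # Combine into the four quadrants of C.
    C11 = add(sub(add(P1, P4), P5), P7)
    C12 = add(P3, P5)
    C21 = add(P2, P4)
    C22 = add(add(sub(P1, P2), P3), P6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```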