(i) Brute-Force Algorithms
Introduction to Brute-Force Algorithms
Brute-force algorithms solve problems in the most straightforward way, typically by
exhaustively enumerating all possibilities and selecting the optimal solution. While simple and
widely applicable, they often have subpar performance on large datasets.
Applications of Brute-Force
Powering a Number: Computing a^b by multiplying a by itself b times.
Selection Sort: Sorting a list by repeatedly finding the smallest element.
Exhaustive Search: Searching all possible combinations in a problem domain.
0/1 Knapsack Problem: Generating all subsets of items to find the optimal solution.
Assignment Problem: Trying all permutations of tasks to minimize cost.
Selection Sort
Algorithm Design
Selection sort rearranges a list of n items into nondecreasing order using these steps:
1. Scan the entire list to find the smallest item.
2. Swap it with the first item; the first element is now sorted.
3. Repeat the process for the remaining n − 1 elements, placing each smallest element in
its correct position.
4. Continue for a total of n − 1 iterations until the list is fully sorted.
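A minimal in-place Python sketch of these steps (the name selection_sort is illustrative):

def selection_sort(a):
    # Sort list a in nondecreasing order, in place.
    n = len(a)
    for i in range(n - 1):                     # n - 1 passes in total
        smallest = i
        for j in range(i + 1, n):              # scan the unsorted suffix
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # place the minimum at position i
    return a

For instance, selection_sort([89, 45, 68, 90, 29, 34, 17]) returns [17, 29, 34, 45, 68, 89, 90], matching the trace below.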
Example Execution
Sort the list: 89, 45, 68, 90, 29, 34, 17.
Each iteration identifies the smallest element and places it in its final position:
Initial: | 89, 45, 68, 90, 29, 34, 17
After 1st Pass: 17 | 45, 68, 90, 29, 34, 89
After 2nd Pass: 17, 29 | 68, 90, 45, 34, 89
After 3rd Pass: 17, 29, 34 | 90, 45, 68, 89
After 4th Pass: 17, 29, 34, 45 | 90, 68, 89
After 5th Pass: 17, 29, 34, 45, 68 | 90, 89
After 6th Pass: 17, 29, 34, 45, 68, 89 | 90
Analysis
Selection sort has a time complexity of Θ(n^2), as it uses two nested loops:
Outer loop iterates n − 1 times.
Inner loop searches for the smallest remaining element in O(n) steps.
Despite its simplicity, selection sort is inefficient for large datasets, though it is adequate for
small lists and has the advantage of performing at most n − 1 swaps.
Exhaustive Search
Algorithm Design
Exhaustive search is a brute-force approach for solving combinatorial problems like
permutations, combinations, and subsets:
1. Generate all elements in the problem domain (all possible solutions).
2. Identify those that satisfy the problem's constraints.
3. Select the most desirable solution (e.g., maximize or minimize an objective function).
Example: 0/1 Knapsack Problem
Problem Statement: Given n items with weights w_1, w_2, …, w_n and values
v_1, v_2, …, v_n, and a knapsack with capacity W, find the most valuable subset of items
that fits.
Steps:
1. Generate all subsets of n items (2^n subsets).
2. Calculate the weight of each subset and discard infeasible solutions.
3. Find the subset with the maximum total value.
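A Python sketch of this exhaustive search (names are illustrative; practical only for small n):

from itertools import combinations

def knapsack_brute_force(weights, values, W):
    # Enumerate all 2^n subsets of item indices; keep the best feasible one.
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(weights[i] for i in subset)
            if weight <= W:                        # step 2: discard infeasible subsets
                value = sum(values[i] for i in subset)
                if value > best_value:             # step 3: track the maximum value
                    best_value, best_subset = value, subset
    return best_value, best_subset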
Analysis
The time complexity is Ω(2^n), making it infeasible for large n. Exhaustive search is
only practical for small datasets.
Assignment Problem
Algorithm Design
The assignment problem involves assigning n people to n jobs such that the cost of assignments
is minimized:
1. Generate all permutations of n people assigned to n jobs.
2. Calculate the cost of each permutation.
3. Select the permutation with the minimum cost.
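A brute-force Python sketch (illustrative names; cost[i][j] is the cost of assigning person i to job j):

from itertools import permutations

def assign_brute_force(cost):
    # Try all n! assignments of people to jobs; keep the cheapest.
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):            # person i gets job perm[i]
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_cost, best_perm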
Analysis
With n! permutations, the time complexity is Θ(n!). Like exhaustive search for the knapsack,
this approach is impractical even for moderate values of n, since n! grows faster than any
exponential.
Conclusion
Brute-force algorithms are:
Simple and Direct: Based on the problem’s definition.
Widely Applicable: Suitable for a variety of problems.
Inefficient for Large Inputs: Their performance is often inferior to more sophisticated
techniques.
Useful for Benchmarking: They serve as a baseline to compare the efficiency of other
algorithms.
(ii) Divide-and-Conquer Algorithms
Introduction to Divide-and-Conquer Algorithms
Divide-and-conquer is a problem-solving approach that involves dividing the problem into
smaller subproblems, solving these subproblems independently, and combining their solutions to
solve the original problem. It is particularly effective for recursive problems.
Applications of Divide-and-Conquer
Merge Sort: Recursively divides an array, sorts the subarrays, and merges them into a
sorted array.
Quick Sort: Partitions an array around a pivot and recursively sorts the partitions.
Binary Search: Repeatedly divides a sorted array to find a target element.
Matrix Multiplication (Strassen’s Algorithm): Divides matrices into smaller
submatrices for faster computation.
Fast Fourier Transform (FFT): Efficiently computes the Discrete Fourier Transform
(DFT) of a sequence.
Merge Sort
Algorithm Design
Merge sort sorts an array of n items using these steps:
1. Divide the array into two halves.
2. Recursively sort each half.
3. Merge the two sorted halves into one sorted array.
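A minimal Python sketch of these steps (it returns a new list rather than sorting in place):

def merge_sort(a):
    if len(a) <= 1:                    # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # steps 1-2: divide and sort each half
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # step 3: merge the sorted halves
        if left[i] <= right[j]:               # <= keeps equal items in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail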
Example Execution
Sort the array: 38, 27, 43, 3, 9, 82, 10.
Step 1: Divide the array into two halves: [38, 27, 43] and [3, 9, 82, 10].
Step 2: Recursively divide each half until single-element arrays remain:
[38], [27], [43], [3], [9], [82], [10].
Step 3: Merge the arrays step-by-step:
[27, 38], [3, 9], [10, 82], [27, 38, 43], [3, 9, 10, 82].
Final Step: Merge [27, 38, 43] and [3, 9, 10, 82] into [3, 9, 10, 27, 38, 43, 82].
Analysis
Time Complexity: Θ(n log n), as the array is divided into halves about log n times
and merging takes O(n) work at each level.
Space Complexity: O(n), for the temporary arrays used during merging.
Merge sort is efficient for large datasets and is a stable sorting algorithm.
Quick Sort
Algorithm Design
Quick sort sorts an array of n items using these steps:
1. Select a pivot element.
2. Partition the array into two subarrays: elements smaller than the pivot and elements larger
than the pivot.
3. Recursively sort the subarrays.
4. Combine the sorted subarrays with the pivot.
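A Python sketch of these steps; it builds new lists for clarity, whereas production quick sort usually partitions in place:

def quick_sort(a):
    if len(a) <= 1:
        return a
    pivot = a[-1]                                  # step 1: pick a pivot (last element here)
    smaller = [x for x in a[:-1] if x <= pivot]    # step 2: partition around the pivot
    larger = [x for x in a[:-1] if x > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)   # steps 3-4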
Example Execution
Sort the array: 38, 27, 43, 3, 9, 82, 10.
Step 1: Choose pivot = 10. Partition: [3, 9] (smaller), [38, 27, 43, 82] (larger).
Step 2: Recursively sort [3, 9] and [38, 27, 43, 82].
Step 3: Result: [3, 9, 10, 27, 38, 43, 82].
Analysis
Best and Average Time Complexity: O(n log n), when partitions are
balanced.
Worst Time Complexity: O(n^2), when partitions are unbalanced (e.g., the pivot is
always the smallest or largest element).
Space Complexity: O(log n) on average, due to the recursion stack.
Quick sort is efficient for in-place sorting and is generally faster than merge sort in the average case.
Binary Search
Algorithm Design
Binary search finds a target element in a sorted array of n items using these steps:
1. Divide the array into two halves.
2. Compare the target with the middle element:
o If equal, return the position.
o If smaller, search the left half.
o If larger, search the right half.
3. Repeat until the target is found or the search space is empty.
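An iterative Python sketch of these steps:

def binary_search(a, target):
    # Return the index of target in sorted list a, or -1 if absent.
    low, high = 0, len(a) - 1
    while low <= high:                 # search space is a[low..high]
        mid = (low + high) // 2
        if a[mid] == target:
            return mid
        elif target < a[mid]:
            high = mid - 1             # search the left half
        else:
            low = mid + 1              # search the right half
    return -1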
Example Execution
Find 27 in the array: 3, 9, 10, 27, 38, 43, 82.
Compare with middle element (27).
Match found. Return the position.
Analysis
Time Complexity: O(log n), as the search space is halved at each step.
Space Complexity: O(1) for the iterative version, as no additional storage is required.
Binary search is highly efficient for searching in sorted datasets.
Conclusion
The Divide-and-Conquer approach:
Efficient for Large Datasets: Reduces problem size at each step, leading to logarithmic
or linearithmic complexity.
Widely Applicable: Used in sorting, searching, matrix operations, and signal processing.
Recursive Nature: Makes it suitable for problems that can be broken down into
independent subproblems.
Optimized Performance: Often outperforms brute-force methods for large-scale
problems.
(iii) Greedy Algorithms
Introduction to Greedy Algorithms
Greedy algorithms solve optimization problems by making a series of choices, each of which is
the best local decision at the moment, with the hope that this leads to a globally optimal solution.
They are efficient for certain types of problems where a greedy choice guarantees the optimal
solution.
Applications of Greedy Algorithms
Activity Selection Problem: Choosing the maximum number of non-overlapping
activities.
Huffman Coding: Encoding data to minimize storage or transmission cost.
Prim’s Algorithm: Finding the Minimum Spanning Tree (MST) of a graph.
Kruskal’s Algorithm: Constructing an MST by choosing edges with the least weight.
Dijkstra’s Algorithm: Finding the shortest path from a source node to all others in a
weighted graph.
Activity Selection Problem
Algorithm Design
Given n activities with start and finish times, find the maximum number of activities that can be
performed by a single person.
1. Sort activities by their finish times.
2. Select the first activity and mark its finish time.
3. Iterate through the remaining activities and select an activity if its start time is greater
than or equal to the previously selected activity’s finish time.
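A Python sketch of this greedy selection (activities are (start, finish) pairs):

def select_activities(activities):
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):  # step 1: sort by finish
        if start >= last_finish:       # steps 2-3: compatible with the last pick?
            selected.append((start, finish))
            last_finish = finish
    return selected

On the list in the example below, it returns [(1, 4), (5, 7), (8, 11), (12, 16)].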
Example Execution
Activities: [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
1. Sort by finish time: [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)].
2. Select: (1, 4), (5, 7), (8, 11), (12, 16).
Analysis
Time Complexity: O(n log n), due to sorting.
Space Complexity: O(1), as no additional structures are used.
The greedy choice guarantees that we can fit the maximum number of activities.
Huffman Coding
Algorithm Design
Huffman coding compresses data using variable-length codes for characters, where more
frequent characters have shorter codes.
1. Count the frequency of each character in the data.
2. Build a priority queue with each character as a node.
3. Repeatedly combine the two nodes with the smallest frequencies into a single node until
only one tree remains.
4. Assign binary codes to characters based on their position in the tree.
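A compact Python sketch using a priority queue (heapq). The exact codes depend on how frequency ties are broken, so they may differ from the assignment shown below while having the same total encoded length:

import heapq
from collections import Counter

def huffman_codes(data):
    # Heap entries: [total_frequency, [char, code], [char, code], ...]
    heap = [[freq, [ch, ""]] for ch, freq in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)        # step 3: combine the two smallest nodes
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]     # left branch contributes a 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]     # right branch contributes a 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {ch: code for ch, code in heap[0][1:]}   # step 4: read off the codes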
Example Execution
Data: "ABRACADABRA"
Frequency: A: 5, B: 2, R: 2, C: 1, D: 1.
Build tree and assign codes:
o A: 0, B: 100, R: 101, C: 110, D: 111.
Encoded data: 01001010110011101001010 (23 bits).
Analysis
Time Complexity: O(n log n), due to priority queue operations.
Space Complexity: O(n), for the tree structure.
Huffman coding produces an optimal prefix code, making it a standard tool for lossless data
compression.
Prim’s Algorithm
Algorithm Design
Find the MST of a graph by growing a tree one edge at a time:
1. Start with any vertex and initialize an empty MST.
2. Select the smallest edge that connects a vertex in the MST to a vertex outside it.
3. Repeat until all vertices are included.
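A Python sketch using an adjacency list and a priority queue (graph maps each vertex to a list of (weight, neighbor) pairs):

import heapq

def prim_mst(graph, start):
    mst, visited = [], {start}
    edges = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(edges)
    while edges:
        w, u, v = heapq.heappop(edges)      # step 2: smallest crossing edge
        if v not in visited:
            visited.add(v)
            mst.append((u, v, w))
            for w2, v2 in graph[v]:         # new edges crossing the cut
                if v2 not in visited:
                    heapq.heappush(edges, (w2, v, v2))
    return mst

On the example graph below, starting from A, it returns the edges (A, B, 1), (A, C, 3), (C, D, 4).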
Example Execution
Graph:
Vertices: {A, B, C, D}
Edges: {(A, B, 1), (A, C, 3), (B, C, 3), (B, D, 6), (C, D, 4)}
Start: MST = {A}, add edge (A, B, 1).
Add edge (A, C, 3).
Add edge (C, D, 4).
Analysis
Time Complexity: O(E log V), with a priority queue.
Space Complexity: O(V + E).
Prim’s algorithm ensures the MST is constructed greedily.
Conclusion
The Greedy Algorithm approach:
Efficient for Optimization Problems: Solves problems step-by-step with a locally
optimal choice.
Widely Used in Graph Theory: MSTs, shortest paths, and coding.
Not Always Optimal: Requires proof that a greedy choice leads to the global optimum
(e.g., activity selection, Huffman coding).
Simplicity: Often easier to implement and faster than dynamic programming or divide-
and-conquer approaches.
(iv) Dynamic Programming
Introduction to Dynamic Programming
Dynamic programming (DP) is a method for solving problems by breaking them into smaller
subproblems, solving each subproblem once, and storing the solutions to avoid redundant
computation. It is especially effective for problems with overlapping subproblems and optimal
substructure properties.
Applications of Dynamic Programming
Fibonacci Sequence: Efficient computation of Fibonacci numbers.
0/1 Knapsack Problem: Optimizing the selection of items under capacity constraints.
Longest Common Subsequence (LCS): Finding the length of the longest sequence common to
two strings.
Matrix Chain Multiplication: Minimizing the number of scalar multiplications.
Bellman-Ford Algorithm: Solving the single-source shortest path problem in graphs.
0/1 Knapsack Problem
Algorithm Design
Given n items with weights w_1, w_2, …, w_n, values v_1, v_2, …, v_n, and a
knapsack with capacity W, find the maximum value of items that can be included without
exceeding the capacity.
1. Create a DP table dp[i][w], where dp[i][w] represents the
maximum value that can be achieved with the first i items and capacity w.
2. Fill the table: dp[i][w] = dp[i−1][w] if w_i > w; otherwise
dp[i][w] = max(dp[i−1][w], dp[i−1][w − w_i] + v_i).
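A bottom-up Python sketch of this table-filling scheme:

def knapsack_dp(weights, values, W):
    # dp[i][w] = best value using the first i items with capacity w.
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                 # exclude item i
            if weights[i - 1] <= w:                 # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]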
Analysis
Time Complexity: O(nW), where n is the number of items and W is the capacity.
Space Complexity: O(nW), or O(W) with space optimization.
Longest Common Subsequence (LCS)
Algorithm Design
Given two strings X (length m) and Y (length n), find the length of their longest common
subsequence.
1. Create a DP table dp[i][j], where dp[i][j] is the LCS length of the first i characters of X
and the first j characters of Y.
2. Base cases: dp[i][0] = dp[0][j] = 0.
3. Fill the table: if X[i] = Y[j], then dp[i][j] = dp[i−1][j−1] + 1; otherwise
dp[i][j] = max(dp[i−1][j], dp[i][j−1]).
Example Execution
X = "AGGTAB", Y = "GXTXAYB": the table gives dp[6][7] = 4, and tracing back yields the
common subsequence "GTAB".
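A Python sketch of the recurrence:

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]      # dp[i][j] = LCS of prefixes
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:                # characters match
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                                   # drop a character from X or Y
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]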
Analysis
Time Complexity: O(m·n), where m and n are the lengths of the strings.
Space Complexity: O(m·n), or O(min(m, n)) with space optimization.
Fibonacci Sequence
Algorithm Design
1. Create an array dp[] to store Fibonacci values.
2. Base cases: dp[0] = 0, dp[1] = 1.
3. Fill the array: dp[i] = dp[i−1] + dp[i−2].
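A Python sketch of this bottom-up computation (see the example below):

def fib_table(n):
    # Return dp[0..n]; each value is computed once and then reused.
    dp = [0] * (n + 1)
    if n >= 1:
        dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp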
Example Execution
Input: n = 7
Output: dp = [0, 1, 1, 2, 3, 5, 8, 13].
Analysis
Time Complexity: O(n).
Space Complexity: O(n), or O(1) with space optimization.
Conclusion
The Dynamic Programming approach:
Key Strengths: Efficiently solves problems with overlapping subproblems and optimal
substructure.
Wide Applicability: Common in optimization problems like knapsack, LCS, and shortest paths.
Trade-off: Requires additional memory for storing intermediate results.
Performance: Offers significant improvement over brute-force solutions for many problems.
(v) Branch and Bound
Introduction to Branch and Bound
Branch and Bound (B&B) is an algorithm design paradigm used to solve optimization problems
by systematically exploring all possible solutions while pruning suboptimal or infeasible
solutions. It is commonly applied to problems where a feasible solution must satisfy constraints
and optimize an objective function.
Applications of Branch and Bound
0/1 Knapsack Problem: Finding the optimal selection of items to maximize value within
a weight limit.
Travelling Salesman Problem (TSP): Determining the shortest route to visit all cities
and return to the origin.
Assignment Problem: Minimizing the cost of assigning tasks to workers.
Integer Programming: Solving linear programs with integer constraints.
0/1 Knapsack Problem
Algorithm Design
Given n items with weights w_1, w_2, …, w_n, values v_1, v_2, …, v_n,
and a knapsack with capacity W, find the maximum value subset.
1. Branching: Split the solution space by including or excluding an item.
2. Bounding: Calculate upper bounds for each branch. If a branch cannot lead to a better
solution than the current best, prune it.
3. Selection: Explore branches based on their bound values, often using a priority queue.
4. Termination: Stop when all branches have been explored or pruned.
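A best-first Python sketch of these steps for the 0/1 knapsack; the upper bound comes from the fractional (greedy) relaxation mentioned in the example below:

import heapq

def knapsack_bb(items, W):
    # items: (weight, value) pairs; sort by value density for the bound.
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)

    def bound(i, weight, value):
        # Upper bound: fill the remaining capacity with (fractional) items i, i+1, ...
        for w, v in items[i:]:
            if weight + w <= W:
                weight, value = weight + w, value + v
            else:
                return value + (W - weight) * v / w
        return value

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]        # max-heap via negated bounds
    while heap:
        neg_b, i, weight, value = heapq.heappop(heap)
        if -neg_b <= best or i == len(items):  # step 2: prune hopeless branches
            continue
        w, v = items[i]
        if weight + w <= W:                    # step 1: branch on including item i
            best = max(best, value + v)
            heapq.heappush(heap, (-bound(i + 1, weight + w, value + v),
                                  i + 1, weight + w, value + v))
        heapq.heappush(heap, (-bound(i + 1, weight, value),   # ... or excluding it
                              i + 1, weight, value))
    return best

For the three items in the example below with W = 5, it returns 90 (the items of weight 2 and 3).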
Example Execution
Items: [(Weight: 2, Value: 40), (Weight: 3, Value: 50), (Weight: 4, Value: 60)]
Capacity: W = 5.
Initial Bound: Solve a fractional knapsack for the upper bound.
Branch: Consider including or excluding each item.
Prune: Eliminate branches where the total weight exceeds W or the bound is less than
the current best solution.
Travelling Salesman Problem (TSP)
Algorithm Design
Given n cities and a cost matrix, find the shortest route visiting all cities exactly once and
returning to the origin.
1. Branching: Split the solution space by visiting a specific city next.
2. Bounding: Calculate lower bounds for each partial tour using a relaxation (e.g., a
Minimum Spanning Tree bound on the remaining cities).
3. Selection: Explore branches with the smallest lower bound first.
4. Termination: Stop when all cities are visited and the path is complete.
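A recursive Python sketch of these steps; for simplicity it bounds a partial tour by its cost so far plus the cheapest outgoing edge of each unvisited city, a weaker but still valid lower bound than an MST:

def tsp_bb(cost):
    # cost: n x n matrix; returns the length of the shortest tour from city 0.
    n = len(cost)
    best = [float("inf")]
    cheapest = [min(cost[i][j] for j in range(n) if j != i) for i in range(n)]

    def branch(city, visited, path_cost):
        if len(visited) == n:                  # step 4: tour complete, close the cycle
            best[0] = min(best[0], path_cost + cost[city][0])
            return
        remaining = sum(cheapest[j] for j in range(n) if j not in visited)
        if path_cost + remaining >= best[0]:   # step 2: prune by lower bound
            return
        for nxt in range(n):                   # step 1: branch on the next city
            if nxt not in visited:
                branch(nxt, visited | {nxt}, path_cost + cost[city][nxt])

    branch(0, {0}, 0)
    return best[0]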
Example Execution
Cities: A, B, C, D
Cost Matrix:
Start from city A.
Branch to cities B, C, and D.
Prune branches where the partial cost exceeds the current shortest path.
Continue until all branches are explored.
Assignment Problem
Algorithm Design
Given n tasks and n workers with a cost matrix C[i][j], minimize the total cost by
assigning each task to a worker.
1. Branching: Assign a task to a worker.
2. Bounding: Use the Hungarian Method or row/column reductions to calculate lower
bounds.
3. Selection: Choose the branch with the smallest bound for further exploration.
4. Termination: Stop when all tasks are assigned.
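A Python sketch of these steps; instead of the full Hungarian reductions, it uses a simpler valid bound, the cheapest task of each remaining worker:

def assignment_bb(cost):
    # cost[i][j]: cost of worker i doing task j; returns the minimum total cost.
    n = len(cost)
    best = [float("inf")]
    row_min = [min(row) for row in cost]       # cheapest task per worker

    def branch(i, used, total):
        if i == n:                             # step 4: all workers assigned
            best[0] = min(best[0], total)
            return
        if total + sum(row_min[k] for k in range(i, n)) >= best[0]:
            return                             # step 2: prune by lower bound
        for j in range(n):                     # step 1: branch on worker i's task
            if j not in used:
                branch(i + 1, used | {j}, total + cost[i][j])

    branch(0, set(), 0)
    return best[0]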
Analysis
Advantages
Guarantees finding the optimal solution.
Prunes unnecessary branches, reducing computation.
Disadvantages
High time complexity for large problem sizes.
Requires memory for maintaining the search tree.
Conclusion
The Branch and Bound approach:
Strengths: Guarantees an optimal solution by systematically exploring the search space.
Applicability: Effective for combinatorial optimization problems.
Limitations: Computationally expensive for large inputs due to the exponential growth
of the solution space.