
Why Recursion Tree is better: The recursion tree method is often more intuitive for complex recurrences because it visually breaks down the work at each level, making it easier to sum up costs, especially for recurrences with multiple terms. The substitution method requires guessing the solution and proving it via induction, which can be harder for non-uniform splits like this.

• Recursion Tree gives a visual breakdown of the recurrence.

• Helps estimate total work done across levels.

• Substitution method requires guesswork.

MASTER THEOREM

The Master Theorem provides a way to find the time complexity of divide-and-conquer recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 (number of subproblems), b > 1 (factor by which the problem size is reduced), and f(n) is the cost outside the recursive calls.
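For instance, Merge Sort's recurrence T(n) = 2T(n/2) + cn has a = 2 and b = 2, so n^(log_b a) = n^1 = n; since f(n) = cn grows at the same rate, the theorem's "balanced" case applies and T(n) = O(n log n).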

Algorithm – Definition, Characteristics, and Categories

An algorithm is a step-by-step procedure or a finite sequence of instructions used to solve a specific problem or perform a computation.

Characteristics of an Algorithm

1. Input
An algorithm must have zero or more well-defined inputs provided externally before or during execution.

2. Output
It must produce at least one output that is the expected result after processing the inputs.

3. Finiteness
The algorithm must terminate after a finite number of steps; it cannot run infinitely.

4. Definiteness
Every step of the algorithm must be clearly and unambiguously defined to avoid confusion.

5. Effectiveness
All operations should be basic enough to be performed exactly and within a finite amount of time.

Categories of Algorithms

1. Brute Force Algorithm
It tries all possible solutions until it finds the correct one; simple but often inefficient.

2. Divide and Conquer Algorithm
It breaks a problem into smaller sub-problems, solves them recursively, and then combines their results.

3. Greedy Algorithm
Makes locally optimal choices at each step, hoping to find a global optimum.

4. Dynamic Programming Algorithm
Solves complex problems by breaking them into overlapping subproblems and storing the results to avoid recomputation.

5. Backtracking Algorithm
Builds solutions incrementally and abandons a path as soon as it determines it's not viable.

6. Branch and Bound Algorithm
Efficiently explores solution spaces by eliminating branches that cannot yield better solutions than the current best.

Definition of Asymptotic Notation

Asymptotic notation describes the running time of an algorithm for large input sizes. It focuses on the growth rate and ignores constants and lower-order terms. It gives a standard way to compare algorithm efficiency, and it applies regardless of hardware or compiler details.

1. Big-O Notation (O) – Upper Bound
Represents the worst-case time or space complexity. It defines the maximum time an algorithm can take.

2. Omega Notation (Ω) – Lower Bound
Represents the best-case time or space complexity. It gives the minimum time an algorithm takes on an input.

3. Theta Notation (Θ) – Tight Bound
Shows both upper and lower bounds for an algorithm. It represents the average-case or exact growth.

🔹 Significance of Asymptotic Notation

1. It gives machine-independent results for fair comparison. This helps judge algorithm efficiency without running the code.

2. It simplifies analysis by ignoring irrelevant constants. Only the dominating term affecting growth is considered.

3. It helps in selecting efficient algorithms for big data. You can compare scalability directly by looking at growth rates.
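For example, if an algorithm performs f(n) = 3n² + 5n + 2 steps, the constant factor and the lower-order terms 5n + 2 are dropped and we write f(n) = O(n²); since f(n) is also at least n² for n ≥ 1, f(n) = Ω(n²) and therefore f(n) = Θ(n²).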

What is Tail Recursion?

1. Tail recursion happens when the function calls itself at the very end.

2. Nothing is done after the function calls itself.

3. It uses less memory because the computer doesn't need to remember past steps.

4. Tail recursion works faster in many cases than normal recursion.

5. It is useful for writing clean and memory-efficient programs.

Example:
If a function prints numbers from n to 1 and the last thing it does is call itself again, it's called tail recursion. After calling itself, it does nothing else.
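A minimal Python sketch of that example (illustrative only):

def countdown(n):
    # Prints numbers from n down to 1.
    if n == 0:
        return              # base case: nothing left to print
    print(n)
    countdown(n - 1)        # tail call: the very last action in the function

Note that standard Python does not actually optimize tail calls; languages such as Scheme or Scala do, which is where the memory saving is realized.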
 Cons:
How to Analyze an Algorithm and Describe Its Complexity

1. Choose Input Size (n):
First, find out what input the algorithm takes, like the number of elements in an array.

2. Count Basic Operations:
Check how many times key steps (like loops and comparisons) are repeated.

3. Write a Function (f(n)):
Express the total steps as a function of input size, like f(n) = n, n², etc.

4. Use Asymptotic Notation:
Describe how the function grows using Big-O, Big-Theta, or Big-Omega.

Complexity of an Algorithm

1. Time Complexity:
It shows how the running time of an algorithm increases with input size.
Example: If an algorithm takes 5 steps for input size 5 and 10 steps for size 10, it's linear → O(n).

2. Space Complexity:
It tells how much memory the algorithm uses as input size grows.
Example: If it uses memory equal to the size of the input, it is O(n); if it uses fixed memory, it's O(1).
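A small Python illustration of these steps (a hypothetical function chosen for this sketch):

def count_equal_pairs(arr):
    # Basic operation: the comparison inside the inner loop.
    n = len(arr)
    count = 0
    for i in range(n):                 # runs n times
        for j in range(i + 1, n):      # runs n-1, n-2, ..., 1 times
            if arr[i] == arr[j]:       # executed n(n-1)/2 times in total
                count += 1
    return count

Here f(n) = n(n - 1)/2 comparisons, and keeping only the dominating term gives O(n²) time with O(1) extra space.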
1. Brute Force Approach:

• Definition: The brute force approach solves problems by trying all possible solutions until it finds the best one.

• How it works: It checks every possible option, so it's very simple to implement but can be slow for large problems.

• Example:
o Problem: Find the largest number in a list.
o Approach: Go through every number in the list and keep track of the largest one.
o Time Complexity: O(n), where n is the number of numbers in the list.

• Pros:
o Easy to understand and implement.
o Works well for small or simple problems.

• Cons:
o Very slow for large input sizes.
o Can be inefficient and time-consuming compared to other methods.
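A minimal Python sketch of the largest-number example above:

def find_max(numbers):
    # Brute force: examine every element, keep the largest seen so far.
    largest = numbers[0]
    for x in numbers[1:]:       # n - 1 comparisons in total -> O(n)
        if x > largest:
            largest = x
    return largest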
2. Greedy Approach:

• Definition: The greedy approach solves problems by making the best choice at each step, without worrying about the consequences of earlier decisions.

• How it works: It makes quick decisions, hoping that these local good choices will lead to a globally good solution.

• Example:
o Problem: Coin Change Problem (given coins of 1, 5, and 10, make change for a given amount).
o Approach: Always choose the highest denomination coin that does not exceed the remaining amount.
o Time Complexity: O(n), where n is the number of coin denominations.

• Pros:
o Very efficient and fast for some problems.
o Simple to implement and understand.

• Cons:
o Does not always lead to the optimal solution for all problems.
o May make decisions that seem best locally but are not globally optimal.
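A minimal Python sketch of the coin change example, assuming the denominations 10, 5, 1 from the problem statement:

def greedy_change(amount, coins=(10, 5, 1)):
    # Greedy: always take the largest coin that still fits.
    result = []
    for coin in coins:              # coins listed in descending order
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

# greedy_change(28) -> [10, 10, 5, 1, 1, 1]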
3. Dynamic Programming (DP):

• Definition: Dynamic Programming breaks a problem into smaller subproblems, solves them, and stores the results to avoid solving the same subproblem multiple times.

• How it works: It saves time by remembering previously solved subproblems (using memoization or tabulation).

• Example:
o Problem: Fibonacci Sequence (find the nth number in the Fibonacci sequence).
o Approach: Instead of recalculating the Fibonacci numbers repeatedly, store the results of smaller Fibonacci numbers and use them to find larger ones.
o Time Complexity: O(n), because each Fibonacci number is calculated once.

• Pros:
o Extremely efficient for problems with overlapping subproblems.
o Guarantees an optimal solution by considering all possibilities.

• Cons:
o Requires extra memory to store intermediate results.
o Can be complex to implement, especially for large problems.
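A minimal memoized Fibonacci in Python (one way to realize the approach above):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once, then served from the cache.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)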

Comparison:

Approach              Time Complexity   Space Complexity   When to Use
Brute Force           High (slow)       Low (simple)       Small problems or simple tasks
Greedy                Fast (O(n))       Low (O(1))         When local choices lead to an optimal solution
Dynamic Programming   Fast (O(n))       Medium (O(n))      When a problem has overlapping subproblems, like Fibonacci
CHARACTERISTICS:-

1. Brute Force Approach:

• Exhaustive Search: It checks all possible solutions to find the correct one. There is no shortcut, and it explores every option.

• Simple Implementation: It is easy to implement as it doesn't involve complex algorithms or data structures.

• Inefficient for Large Inputs: The time complexity increases rapidly as the size of the input grows, making it impractical for larger problems.

2. Greedy Approach:

• Locally Optimal Choices: It makes the best possible choice at each step, without considering the global picture.

• Fast and Efficient: The greedy approach is typically faster than other methods, as it avoids unnecessary computations.

• Not Always Optimal: It does not guarantee the best solution for all problems, because locally optimal choices do not always lead to the globally optimal solution.

3. Dynamic Programming (DP):

• Overlapping Subproblems: It breaks down a problem into smaller subproblems, and these subproblems overlap, meaning the same subproblem may be solved multiple times.

• Optimal Substructure: The solution to the problem can be constructed from the solutions of its subproblems, ensuring that optimal decisions are made.

• Memory Intensive: It requires extra space to store solutions to subproblems, which may increase memory usage, especially for large problems.

The Union-Find Algorithm explained in points:

1. Definition: Union-Find (or Disjoint Set Union, DSU) is a data structure that manages a collection of disjoint sets and supports two main operations:
o Find: Determines the set a particular element belongs to.
o Union: Merges two sets into one.

2. Optimizations:
o Path Compression: During Find, it makes the tree flatter by linking each node directly to the root, speeding up future operations.
o Union by Rank/Size: During Union, it attaches the smaller tree under the root of the larger tree to keep the structure balanced and efficient.

3. Time Complexity: Both Find and Union operations are nearly constant time, i.e., O(α(n)), where α is the inverse Ackermann function, which grows very slowly and is practically constant for all reasonable input sizes.

4. Example:
o Union(1, 2) → {1, 2}
o Union(3, 4) → {3, 4}
o Union(2, 3) → {1, 2, 3, 4}
o Find(4) → 1 (since 4 is in the set containing 1).

5. Applications:
o Kruskal's Algorithm for finding Minimum Spanning Trees (MST).
o Network Connectivity problems, where you check if two elements are connected.
o Percolation problems in grids or networks.
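A minimal Python sketch of the structure with both optimizations, assuming elements are numbered 0 to n-1:

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            # Path compression: point x directly at the root.
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                     # already in the same set
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1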
Critically comment on why the greedy strategy does not work for the 0-1 knapsack problem in all cases.

• Greedy Strategy in 0-1 Knapsack:
The greedy method picks items based on the highest value-to-weight ratio, trying to get the most value for the least weight.

• Why It Fails:
The greedy method doesn't always find the best solution because it looks at each item individually and ignores how items might work better together.

• Example:
Suppose we have three items (A, B, and C). The greedy method might choose items A and B, but the best choice is actually items B and C, giving a higher total value. Concretely (a standard instance): with capacity 50 and items A (value 60, weight 10), B (value 100, weight 20), C (value 120, weight 30), greedy by ratio picks A and then B for a total value of 160, while the optimal choice is B and C for 220.

• Reason for Failure:
Greedy makes the best choice for one item at a time but misses the overall best combination of items.

• Conclusion:
The greedy strategy doesn't always work for the 0-1 Knapsack problem. To find the best solution, we need to use methods like dynamic programming.
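A minimal Python sketch of the dynamic programming alternative (the 1-D table formulation is a choice of this sketch):

def knapsack_01(values, weights, capacity):
    # best[c] = maximum value achievable with knapsack capacity c.
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# For the instance above: knapsack_01([60, 100, 120], [10, 20, 30], 50) -> 220,
# whereas greedy-by-ratio stops at 160.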
✅ 1. Merge Sort

🔸 Explanation:
Merge Sort is a Divide and Conquer algorithm. It divides the array into two halves, recursively sorts each half, and merges them in sorted order. It performs consistently well on all types of input (sorted, unsorted, reversed). However, it uses extra space for merging.

🔸 Algorithm (5 Steps):

1. Divide the array into two halves.

2. Recursively apply merge sort to each half.

3. Merge the two sorted halves using a temporary array.

4. Repeat the process until all elements are merged.

5. Return the sorted array.

🔸 Time Complexity:

• Best Case: O(n log n)

• Worst Case: O(n log n)

• Reason: It always divides into two equal halves, and merging takes linear time.

🔸 Space Complexity:

• O(n): due to the extra space required for merging.

🔸 Stability:

• Stable: Maintains the order of equal elements.
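A minimal Python sketch of the algorithm (illustrative, not an optimized implementation):

def merge_sort(arr):
    # Divide: split in half until single elements remain.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: merge the two sorted halves (linear time, extra space).
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged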

✅ 2. Quick Sort

🔸 Explanation:
Quick Sort is also a Divide and Conquer algorithm. It selects a pivot element, partitions the array into two parts (less than and greater than the pivot), and recursively sorts the sub-arrays. It is in-place and faster in practice, but can degrade to O(n²) if the pivot choice is poor (e.g., already sorted arrays with no randomization).

🔸 Algorithm (5 Steps):

1. Select a pivot element.

2. Partition the array such that elements < pivot go to the left, and elements > pivot go to the right.

3. Recursively quick sort the left sub-array.

4. Recursively quick sort the right sub-array.

5. Combine the results to get the sorted array.

🔸 Time Complexity:

• Best Case: O(n log n) — when the pivot divides the array evenly.

• Worst Case: O(n²) — when the pivot is the smallest/largest element (unbalanced partitions).

🔸 Space Complexity:

• O(log n) (average) — for the recursion stack.

• O(n) (worst case).

🔸 Stability:

• Not stable by default.
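A minimal in-place Python sketch using the last element as pivot (Lomuto partition, one of several common schemes):

def quick_sort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    pivot, i = arr[hi], lo
    for j in range(lo, hi):
        if arr[j] < pivot:             # smaller elements move left of the pivot
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # place the pivot in its final position
    quick_sort(arr, lo, i - 1)
    quick_sort(arr, i + 1, hi)
    return arr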

✅ 3. Comparison Table:

Feature             Merge Sort                       Quick Sort
Time (Best/Worst)   O(n log n) / O(n log n)          O(n log n) / O(n²)
Space Complexity    O(n)                             O(log n) (avg)
In-place?           No                               Yes
Stable?             Yes                              No
Performance         Slower (due to space)            Faster in practice
Suitable for        Linked lists, stability required Arrays, general purpose

🔷 1. Merge Sort – Time Complexity Analysis

🔸 Approach:
Divide the array into two halves, sort recursively, and merge.

🔸 Work done at each level:

• Each level merges n elements → O(n)

• Number of levels = log₂n (as the array is divided by 2 at each step)

🔸 Total Time:

• T(n) = 2T(n/2) + O(n)

• Solving this gives:
🔹 T(n) = O(n log n)

✅ Final Complexity:

Case           Complexity
Best Case      O(n log n)
Average Case   O(n log n)
Worst Case     O(n log n)

🔸 Why consistent?

• The merge operation always takes O(n).

• The recursion depth is always log n, regardless of input order.
🔷 2. Quick Sort – Time Complexity Analysis

🔸 Approach:
Pick a pivot, partition the array, and recursively sort the subarrays.

🔸 Best Case:

• The pivot divides the array into two equal parts.

• Recursion depth = log n

• Work at each level = O(n)

• 🔹 T(n) = O(n log n)

🔸 Worst Case:

• The pivot is always the smallest or largest element.

• One side has n-1 elements, the other has 0.

• 🔹 T(n) = T(n-1) + O(n)

• 🔹 Solves to: O(n²)

🔸 Average Case:

• A random pivot on average gives a balanced split.

• 🔹 T(n) ≈ O(n log n)

https://chatgpt.com/share/68130767-ab58-8008-a51a-24a6fcb7fee0 — analysis of algorithms: merge and quick sort
Divide and Conquer vs Dynamic Programming:

Point                        Divide and Conquer                       Dynamic Programming
1. Approach                  Breaks problem into independent          Breaks problem into overlapping
                             subproblems, solves them recursively.    subproblems and solves each once.
2. Overlapping Subproblems   Usually does not reuse subproblem        Reuses subproblem results using
                             results.                                 memoization or tabulation.
3. Examples                  Merge Sort, Quick Sort, Binary Search.   Matrix Chain Multiplication,
                                                                      Fibonacci, Knapsack.
4. Efficiency                May recompute subproblems multiple       Stores intermediate results, avoids
                             times, leading to higher time            recomputation, more efficient.
                             complexity.
5. Storage Use               Typically requires less memory           Requires extra space for tables
                             (uses recursion stack).                  (memoization/tabulation).

UNION BY RANK

Union by Rank is a technique used in the Disjoint Set (Union-Find) data structure to keep the tree structure small and efficient.

• Each set is represented as a tree, and each node has a rank, which is a rough measure of the tree's height.

• During a union operation, the root with the lower rank is attached to the root with the higher rank to avoid increasing the height unnecessarily.

• If both roots have the same rank, one root is made the parent and its rank increases by 1.

• This technique improves the efficiency of find() and union() operations, especially when combined with path compression, making them run in nearly constant time O(α(n)).

Worked example: we have 4 elements: {1, 2, 3, 4}. Each is in its own set initially.

Step 1: Initial State
Element: 1 2 3 4
Parent:  1 2 3 4
Rank:    0 0 0 0

Step 2: Union(1, 2)
Ranks are equal → make 1 the parent, and increase its rank.
Parent:  1 1 3 4
Rank:    1 0 0 0

Step 3: Union(3, 4)
Again, ranks are equal → make 3 the parent, increase its rank.
Parent:  1 1 3 3
Rank:    1 0 1 0

Step 4: Union(1, 3)
Ranks are equal → make 1 the parent, increase its rank.
Parent:  1 1 1 3
Rank:    2 0 1 0
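Tracing these steps with the DSU sketch from the Union-Find section (index 0 left unused so indices match the elements):

dsu = DSU(5)
dsu.union(1, 2)        # equal ranks: 1 becomes parent, rank[1] = 1
dsu.union(3, 4)        # equal ranks: 3 becomes parent, rank[3] = 1
dsu.union(1, 3)        # equal ranks again: 1 becomes parent, rank[1] = 2
print(dsu.find(4))     # -> 1, the root of the merged set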
✅ Question 1: Max Calculation using Divide-and-Conquer – 5 Points

1. Approach: This method splits the array into two halves recursively and finds the maximum of each half, then returns the greater of the two.

2. Efficiency: It reduces the number of total comparisons slightly and is efficient when implemented with parallel processing or in recursive environments.

3. Time Complexity: T(n) = 2T(n/2) + 1 → solving with the Master Theorem gives O(n).

4. Space Complexity: Because it uses recursion, it takes O(log n) space on the call stack.

5. When Better: It is better for large datasets or parallel computation, but may add recursion overhead for simple, small inputs.

✅ Question 2: Max Calculation using Normal/Linear Approach – 5 Points

1. Approach: It linearly scans the array from start to end, updating the max when a larger element is found.

2. Simplicity: Very simple; efficient for small arrays or where memory use must be minimal.

3. Time Complexity: It does (n – 1) comparisons → O(n) time.

4. Space Complexity: Only uses a few variables, so O(1) space.

5. When Better: It is better for small arrays or memory-limited environments since it avoids recursion.

🎯 Which One is Better and Why?

• Divide-and-Conquer is better when dealing with large datasets or using parallel systems, where splitting the work helps improve performance.

• The linear approach is better for small to medium datasets due to lower memory usage and no recursion overhead.

• Overall, both have O(n) time complexity, but the normal approach is preferred in most practical cases unless there's a need for parallelism.
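A minimal Python sketch of the divide-and-conquer version (the linear version is the find_max sketch shown earlier):

def max_divide_conquer(arr, lo=0, hi=None):
    # Recursively split the range in half and return the larger of the two maxima.
    if hi is None:
        hi = len(arr) - 1
    if lo == hi:
        return arr[lo]                 # one element: it is the maximum
    mid = (lo + hi) // 2
    left_max = max_divide_conquer(arr, lo, mid)
    right_max = max_divide_conquer(arr, mid + 1, hi)
    return left_max if left_max >= right_max else right_max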
Job Sequencing with Deadlines

It is a greedy algorithm problem.

• You are given a list of jobs, each with a profit and a deadline (the last time slot by which it must be completed).

• Each job takes 1 unit of time.

• Your goal is to schedule jobs in such a way that total profit is maximized, and no two jobs overlap.

🔹 How Does It Work?

1. Sort all jobs by descending order of profit.

2. Create a timeline (like a weekly calendar) with slots equal to the number of jobs.

3. For each job, try to place it in the latest available slot before its deadline.

4. If the slot is free, assign the job there; otherwise, skip the job.

🔹 Example (Weekly Style)

Suppose you have 4 jobs (A, B, C, D):

Job   Profit   Deadline
A     100      2
B     19       1
C     27       2
D     25       1

Steps:

1. Sort by profit → A(100), C(27), D(25), B(19)

2. Assign jobs:

• A → deadline 2 → slot 2 is free → assign

• C → deadline 2 → slot 2 is taken, slot 1 is free → assign

• D → deadline 1 → slot 1 is taken → skip

• B → deadline 1 → slot 1 is taken → skip

Final Schedule (like week slots):

Day   Job
1     C
2     A

✅ Total Profit = 100 (A) + 27 (C) = 127

🔹 Why Is It Considered Easy?

• It follows a greedy strategy.

• You only need to sort and find available slots.

• The logic is simple, and it doesn't need complex recursion or DP.

🔹 Time Complexity

• Sorting the jobs by profit → O(n log n)

• Placing each job in a slot → worst case O(n) for each job
→ Total: O(n²)

• With Disjoint Set Union (DSU) optimization for slots → can be improved to O(n log n)
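A minimal Python sketch of the greedy procedure, using one slot per time unit up to the largest deadline:

def job_sequencing(jobs):
    # jobs: list of (name, profit, deadline); greedy by descending profit.
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)
    n_slots = max(d for _, _, d in jobs)
    slot = [None] * (n_slots + 1)          # slot[t] = job scheduled at time t
    for name, profit, deadline in jobs:
        # Try the latest free slot at or before the deadline.
        for t in range(deadline, 0, -1):
            if slot[t] is None:
                slot[t] = name
                break                      # otherwise the job is skipped
    return slot[1:]

# job_sequencing([("A", 100, 2), ("B", 19, 1), ("C", 27, 2), ("D", 25, 1)])
# -> ['C', 'A']  (total profit 127, matching the schedule above)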
2️⃣ N-Queen Problem (Backtracking)

The 8-Queen problem is the N = 8 case of the same algorithm.

Algorithm for N-Queen Problem (Backtracking):

1. Start from row 0.
o We need to place one queen per row.

2. For each column in the current row:
o Try placing the queen at (row, col).

3. Check if placing the queen at (row, col) is safe:
o Ensure there are no other queens in the same column.
o Ensure there are no other queens in the same diagonal.
o If it is safe, continue.

4. If safe, place the queen and move to the next row.

5. Recursively solve for the next row.

6. Backtrack if no safe position is found:
o If no position is found in a row, backtrack by removing the queen and trying the next column.
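A minimal Python sketch of this backtracking search (finds one solution; cols[r] records the column of the queen in row r):

def solve_n_queens(n, row=0, cols=None):
    if cols is None:
        cols = []
    if row == n:
        return list(cols)                  # all n queens placed safely
    for col in range(n):
        # Safe if no earlier queen shares the column or a diagonal.
        if all(c != col and abs(c - col) != row - r
               for r, c in enumerate(cols)):
            cols.append(col)
            result = solve_n_queens(n, row + 1, cols)
            if result is not None:
                return result
            cols.pop()                     # backtrack: remove the queen, try the next column
    return None                            # no safe column in this row

# solve_n_queens(8) returns one valid placement, e.g. [0, 4, 7, 5, 2, 6, 1, 3].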
Algorithm to Build a Heap (Max Heap)
Time Complexity:

Worst-case time complexity of N-Queen using Backtracking: O(N!)

• You place 1 queen per row.

• You try N columns in each row.

• Safe positions reduce due to diagonal and column checks.

• Still, in the worst case, it's like trying all possible permutations → N!

✅ For 8-Queen, the time complexity is O(8!) = 40320 possibilities.

Bellman-Ford algorithm: https://youtu.be/KudAWAMiQog?si=AjJHebuWDRAHR98r

The Bellman-Ford Algorithm in simple sentence-type steps (6 points):

1. ✅ Start by setting the distance of all vertices to infinity, except the source, which is set to 0.

2. 📌 Repeat the next step V-1 times (where V is the number of vertices).

3. 🔁 For every edge in the graph, check if the path from the source to the destination through that edge is shorter.

4. ✍️ If a shorter path is found, update the distance to that vertex.

5. 🚨 After V-1 loops, check once more: if any edge can still give a shorter path, there is a negative weight cycle.

6. 🎯 The final distance array now has the shortest distances from the source to all other nodes.

Matrix chain multiplication: https://youtu.be/JOJK7-fM2JQ?si=_HkNMjjQnYDUe_sI

Floyd-Warshall: https://youtu.be/oNI0rf2P9gE?si=AzW1eEn0odMAmeva
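A minimal Python sketch following the six steps above (edge-list representation is a choice of this sketch):

def bellman_ford(n, edges, source):
    # edges: list of (u, v, weight); n vertices numbered 0..n-1.
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0                       # step 1: all infinity except the source
    for _ in range(n - 1):                 # step 2: relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # step 3: shorter path through this edge?
                dist[v] = dist[u] + w      # step 4: update the distance
    for u, v, w in edges:                  # step 5: one more pass detects a
        if dist[u] + w < dist[v]:          # negative weight cycle
            raise ValueError("negative weight cycle")
    return dist                            # step 6: shortest distances from the source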
Heap Property

A heap is a special type of binary tree that satisfies the heap property. It can be either a max heap or a min heap:

• Max Heap Property: The value of each node is greater than or equal to the values of its children. The maximum value is at the root.

• Min Heap Property: The value of each node is less than or equal to the values of its children. The minimum value is at the root.

The heap property ensures that for any given node i in the heap:

• For a Max Heap: A[i] >= A[2i + 1] and A[i] >= A[2i + 2] (for all valid children).

• For a Min Heap: A[i] <= A[2i + 1] and A[i] <= A[2i + 2] (for all valid children).

The root of the heap is either the maximum or the minimum value in the tree, depending on whether it's a max heap or a min heap.

Algorithm to Build a Heap (Max Heap)

1. Heapify: The process of ensuring that a subtree rooted at an index satisfies the heap property.

2. Building the heap: To build a heap, we start from the last non-leaf node and perform a heapify operation on each node in reverse level order.

Algorithm to Build a Max Heap

Input: Array A[0...n-1]
Output: Max heap represented by array A.

1. Start from the last non-leaf node at index n//2 - 1.

2. For each node i from n//2 - 1 down to 0:
o Call heapify(i, A) to ensure that the subtree rooted at i satisfies the max heap property.

Heapify(i, A):

1. Set largest = i, left = 2i + 1, right = 2i + 2.
2. If left < n and A[left] > A[largest], set largest = left.
3. If right < n and A[right] > A[largest], set largest = right.
4. If largest != i, swap A[i] and A[largest].
5. Recursively call heapify(largest, A) to ensure the heap property is maintained.

Time Complexity Analysis

1. Heapify: The heapify operation on a node takes O(log n) time because it potentially needs to traverse the height of the tree.

2. Build Max Heap:
o The buildMaxHeap process involves calling heapify for each node starting from n/2 - 1 down to 0.
o At level h (counting from the root), there are at most 2^h nodes, and the heapify cost for a node is proportional to its height, so most nodes are cheap.
o Summing this work over all levels gives approximately O(n) in total.

Overall time complexity of building the heap is O(n).

Space Complexity

• The space complexity is O(1) because the heap is constructed in place in the array A without needing extra space.
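The pseudocode above translates directly into Python (a sketch; the array size n is passed explicitly here):

def heapify(A, n, i):
    # Sift A[i] down until the subtree rooted at i is a max heap.
    largest, left, right = i, 2 * i + 1, 2 * i + 2
    if left < n and A[left] > A[largest]:
        largest = left
    if right < n and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        heapify(A, n, largest)         # continue down the affected subtree

def build_max_heap(A):
    n = len(A)
    # Heapify every non-leaf node, from the last one up to the root.
    for i in range(n // 2 - 1, -1, -1):
        heapify(A, n, i)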

1. Difference Between BFS and DFS

Aspect                  BFS (Breadth-First Search)                   DFS (Depth-First Search)
Traversal Method        Explores level by level, visiting all        Explores as far as possible along a
                        neighbors before moving to the next level.   branch before backtracking.
Data Structure Used     Queue (FIFO)                                 Stack (LIFO) or recursion (function
                                                                     call stack)
Pathfinding             Can find the shortest path in an             Does not guarantee the shortest path.
                        unweighted graph.
Nature of Exploration   Explores all nodes at the present depth      Explores deeper into the graph before
                        level before moving on to nodes at the       backtracking and exploring other
                        next level.                                  branches.
Example                 Finding the shortest path in an              Solving puzzles such as finding a path
                        unweighted graph (e.g., a maze).             through a maze by going as deep as
                                                                     possible first.
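A minimal Python sketch of both traversals, assuming the graph is a dict mapping each node to a list of neighbors:

from collections import deque

def bfs(graph, start):
    # Level-by-level traversal using a FIFO queue.
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

def dfs(graph, start, visited=None):
    # Goes as deep as possible before backtracking (recursion = implicit stack).
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for nb in graph[start]:
        if nb not in visited:
            order.extend(dfs(graph, nb, visited))
    return order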
possible first.
2. Difference Between Prim's and Kruskal's Algorithm

Aspect                Prim's Algorithm                            Kruskal's Algorithm
Approach              Greedily builds the MST by starting from    Greedily selects the smallest edge that
                      a node and adding the smallest edge to      doesn't form a cycle and adds it to the
                      the growing MST.                            MST.
Data Structure Used   Uses a priority queue (min-heap) for        Uses a disjoint-set (union-find) data
                      edge selection.                             structure to avoid cycles.
Edge Selection        Selects edges from the nodes already        Selects edges from the entire graph,
                      included in the MST.                        ensuring no cycle is formed.
Working Principle     Focuses on expanding the MST from the       Focuses on adding edges from the whole
                      starting vertex by adding minimum weight    graph, ensuring that no cycles are
                      edges.                                      formed in the MST.
Example               Used for finding the MST in dense graphs    Used for sparse graphs or when edge
                      where edge weights are important.           weights are not necessarily associated
                      Example: MST of a connected network.        with proximity. Example: MST for road
                                                                  connections.
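A minimal Python sketch of Kruskal's algorithm, reusing the DSU class from the Union-Find section (edge-list representation is a choice of this sketch):

def kruskal(n, edges):
    # edges: list of (weight, u, v); vertices numbered 0..n-1.
    mst, dsu = [], DSU(n)
    for w, u, v in sorted(edges):      # consider edges in increasing weight
        if dsu.find(u) != dsu.find(v): # skip edges that would form a cycle
            dsu.union(u, v)
            mst.append((u, v, w))
    return mst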
