Characteristics of an Algorithm

1. Input
An algorithm must have zero or more well-defined inputs, provided externally before or during execution.

2. Output
It must produce at least one output that is the expected result after processing the inputs.

3. Finiteness
The algorithm must terminate after a finite number of steps; it cannot run infinitely.

4. Definiteness
Every step of the algorithm must be clearly and unambiguously defined to avoid confusion.

5. Effectiveness
All operations should be basic enough to be performed exactly and within a finite amount of time.

Asymptotic Notations

1. Big-O Notation (O) – Upper Bound
Represents the worst-case time or space complexity. It gives the maximum time an algorithm can take on an input.

2. Omega Notation (Ω) – Lower Bound
Represents the best-case time or space complexity. It gives the minimum time an algorithm takes on an input.

3. Theta Notation (Θ) – Tight Bound
Shows both upper and lower bounds for an algorithm. It represents the average-case or exact growth.
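For reference, the standard formal definitions behind these three notations (textbook definitions, added here for completeness):

```latex
% For functions f, g mapping input size n to non-negative values:
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } f(n) \ge c\,g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```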
Tail Recursion

1. Tail recursion happens when the function calls itself at the very end.

2. Nothing is done after the function calls itself.

3. It uses less memory because the computer doesn't need to remember past steps.

Example:
If a function prints numbers from n to 1 and the last thing it does is call itself again, it's tail recursion. After calling itself, it does nothing else.
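A minimal Python sketch of that example (illustrative; note that CPython itself does not optimize tail calls, so the memory benefit applies to languages and compilers that do):

```python
def print_down(n):
    """Prints n down to 1; the recursive call is the very last action."""
    if n == 0:
        return            # base case: nothing left to print
    print(n)              # do the work first...
    print_down(n - 1)     # ...the recursive call is the final step (tail call)
```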
How to Analyze an Algorithm and Describe Its Complexity

1. Choose Input Size (n):
First, find out what input the algorithm takes, like the number of elements in an array.

2. Count Basic Operations:
Check how many times key steps (like loops and comparisons) are repeated.

3. Write a Function f(n):
Express the total steps as a function of the input size, like f(n) = n, n², etc.

4. Use Asymptotic Notation:
Describe how the function grows using Big-O, Big-Theta, or Big-Omega.
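A small illustration of steps 1-3 (hypothetical example, not from the original notes): counting the basic operation of a linear scan gives f(n) = n, hence O(n).

```python
def count_comparisons(arr, target):
    """Linear search that also counts its basic operation (the comparison)."""
    ops = 0
    for x in arr:             # the loop runs up to n times for input size n
        ops += 1              # one comparison per iteration
        if x == target:
            return True, ops
    return False, ops         # worst case: ops == n, so f(n) = n -> O(n)
```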
Complexity of an Algorithm

1. Time Complexity:
It shows how the running time of an algorithm increases with input size.
Example: If an algorithm takes 5 steps for input size 5, and 10 steps for size 10, it's linear → O(n).

2. Space Complexity:
It shows how much extra memory an algorithm needs as the input size grows.
Categories of Algorithms

1. Brute Force Approach:

Definition: The brute force approach solves problems by trying all possible solutions until it finds the best one.

How it works: It checks every possible option, so it's very simple to implement but can be slow for large problems.

Pros:
o Easy to understand and implement.
o Works well for small or simple problems.

Cons:
o Very slow for large input sizes.
o Can be inefficient and time-consuming when compared to other methods.
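A minimal brute-force sketch (hypothetical example, not from the original): finding a pair that sums to a target by checking every pair.

```python
from itertools import combinations

def pair_with_sum(nums, target):
    """Brute force: try every possible pair until one works -> O(n^2) time."""
    for a, b in combinations(nums, 2):   # enumerates all pairs exhaustively
        if a + b == target:
            return (a, b)                # first solution found
    return None                          # no pair exists
```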
2. Greedy Approach:

Definition: The greedy approach solves problems by making the best choice at each step without worrying about the consequences of earlier decisions.

How it works: It makes quick decisions hoping that these local good choices will lead to a globally good solution.

Example:
o Problem: Coin Change Problem (given coins of 1, 5, and 10, make change for a given amount).
o Approach: Always choose the highest denomination coin that does not exceed the remaining amount.
o Time Complexity: O(n), where n is the number of coin denominations.
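A sketch of that coin-change example in Python (assuming, as above, denominations sorted from high to low; for the {1, 5, 10} system the greedy result happens to be optimal, though it can fail for arbitrary coin systems):

```python
def greedy_change(amount, coins=(10, 5, 1)):
    """Greedy coin change: always take the largest coin that still fits."""
    result = []
    for coin in coins:            # denominations, highest first
        while amount >= coin:     # keep taking this coin while it fits
            amount -= coin
            result.append(coin)
    return result

# greedy_change(28) -> [10, 10, 5, 1, 1, 1]
```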
3. Dynamic Programming (DP):

Definition: Dynamic Programming breaks a problem into smaller subproblems, solves them, and stores the results to avoid solving the same subproblem multiple times.

How it works: It saves time by remembering previously solved subproblems (using memoization or tabulation).

Cons:
o Requires extra memory to store intermediate results.
o Can be complex to implement, especially for large problems.
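A classic memoization sketch (standard Fibonacci example, added for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)            # memoization: each subproblem solved once
def fib(n):
    """Naive recursion is exponential; with memoization this is O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```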
Comparison:

Approach            | Time Complexity | Space Complexity | When to Use
Brute Force         | High (slow)     | Low (simple)     | Small problems or simple tasks
Greedy              | Fast (O(n))     | Low (O(1))       | When local choices lead to an optimal solution
Dynamic Programming | Fast (O(n))     | Medium (O(n))    | When a problem has overlapping subproblems, like Fibonacci
CHARACTERISTICS:-

1. Brute Force Approach:

Exhaustive Search: It checks all possible solutions to find the correct one. There is no shortcut, and it explores every option.

Simple Implementation: It is easy to implement as it doesn't involve complex algorithms or data structures.

Inefficient for Large Inputs: The time complexity increases rapidly as the size of the input grows, making it impractical for larger problems.

2. Greedy Approach:

Fast and Efficient: The greedy approach is typically faster than other methods, as it avoids unnecessary computations.

Not Always Optimal: It does not guarantee the best solution for all problems, because local optimal choices do not always lead to the global optimal solution.

Disjoint Set (Union-Find)

1. Operations:
o Find: Determines the set a particular element belongs to.
o Union: Merges two sets into one.

2. Optimizations:
o Path Compression: During Find, it makes the tree flatter by linking each node directly to the root, speeding up future operations.
o Union by Rank/Size: During Union, it attaches the smaller tree under the root of the larger tree to keep the structure balanced and efficient.

3. Time Complexity: Both Find and Union operations are nearly constant time, i.e., O(α(n)), where α is the inverse Ackermann function, which grows very slowly and is practically constant for all reasonable input sizes.

4. Example:
o Union(1, 2) → {1, 2}
o Union(3, 4) → {3, 4}
o Union(2, 3) → {1, 2, 3, 4}
o Find(4) → 1 (since 4 is in the set containing 1).

5. Applications:
o Kruskal's Algorithm for finding Minimum Spanning Trees (MST).
o Network Connectivity problems where you check if two elements are connected.
o Percolation problems in grids or networks.
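A compact Python implementation of both optimizations (a sketch; elements are 0-indexed here, so the example's elements 1-4 map to indices 0-3):

```python
class DisjointSet:
    """Union-Find with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))    # each element starts as its own root
        self.rank = [0] * n             # rough upper bound on tree height

    def find(self, x):
        if self.parent[x] != x:
            # path compression: point x directly at the root
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                      # already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx             # attach lower-rank root under higher
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1          # equal ranks: new root's rank grows by 1

# Mirrors the example above (elements 1..4 as indices 0..3):
# ds = DisjointSet(4); ds.union(0, 1); ds.union(2, 3); ds.union(1, 2)
# ds.find(3) -> 0   (i.e., element 4 is in the set rooted at element 1)
```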
Critically comment on why the greedy strategy does not work for the 0-1 knapsack problem in all cases.

Greedy Strategy in 0-1 Knapsack:
The greedy method picks items based on the highest value-to-weight ratio, trying to get the most value for the least weight.

Why It Fails:
The greedy method doesn't always find the best solution because it looks at each item individually and ignores how items might work better together.

Example:
Suppose we have three items (A, B, and C). The greedy method might choose items A and B, but the best choice is actually items B and C, giving a higher total value.

Reason for Failure:
Greedy makes the best choice for one item at a time but misses the overall best combination of items.

Conclusion:
The greedy strategy doesn't always work for the 0-1 Knapsack problem. To find the best solution, we need to use methods like dynamic programming.
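To make the example concrete (classic textbook numbers, added here for illustration): let the capacity be W = 50 with items A (value 60, weight 10), B (value 100, weight 20), and C (value 120, weight 30). The value-to-weight ratios are 6, 5, and 4, so greedy takes A, then B (total value 160, weight 30), and C no longer fits in the remaining 20 units. The optimal answer is B and C: weight exactly 50, total value 220.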
✅ 1. Merge Sort

🔸 Explanation:
Merge Sort is a Divide and Conquer algorithm. It divides the array into two halves, recursively sorts each half, and merges them in sorted order. It performs consistently well on all types of input (sorted, unsorted, reversed). However, it uses extra space for merging.

🔸 Algorithm (5 Steps):
1. Divide the array into two halves.
2. Recursively apply merge sort to each half.
3. Merge the two sorted halves using a temporary array.
4. Repeat the process until all elements are merged.
5. Return the sorted array.

🔸 Time Complexity:
Best Case: O(n log n)
Reason: It always divides into two equal halves, and merging takes linear time.

🔸 Space Complexity:
O(n): due to the extra space required for merging.

🔸 Stability:
Stable: maintains the order of equal elements.
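The five steps as runnable Python (a sketch that returns a new list; an in-place variant would merge back into the original array):

```python
def merge_sort(arr):
    """Divide and conquer: O(n log n) time, O(n) extra space, stable."""
    if len(arr) <= 1:
        return arr                       # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # recursively sort each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0              # merge using a temporary list
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # append whatever remains
    merged.extend(right[j:])
    return merged
```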
✅ 2. Quick Sort

🔸 Explanation:
Quick Sort is also a Divide and Conquer algorithm. It selects a pivot element, partitions the array into two (less than and greater than pivot), and recursively sorts the sub-arrays. It is in-place and faster in practice, but can degrade to O(n²) if the pivot choice is poor (e.g., already sorted arrays with no randomization).

🔸 Algorithm (5 Steps):
1. Select a pivot element.
2. Partition the array such that elements < pivot go to the left, > pivot to the right.
3. Recursively quick sort the left sub-array.
4. Recursively quick sort the right sub-array.
5. Combine the results to get the sorted array.

🔸 Time Complexity:
Best Case: O(n log n) — when the pivot divides the array evenly.
Worst Case: O(n²) — when the pivot is the smallest/largest element (unbalanced partitions).

🔸 Space Complexity:
O(log n) (average) — for the recursion stack.
O(n) (worst case).

🔸 Stability:
Not stable by default.
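A short Python sketch (a readable list-based version with a randomized pivot, which avoids the sorted-input worst case; the classic formulation partitions in place, which is what makes quick sort in-place):

```python
import random

def quick_sort(arr):
    """Randomized pivot avoids O(n^2) behavior on already-sorted input."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                    # 1. select a pivot
    less = [x for x in arr if x < pivot]          # 2. partition around it
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)  # 3-5. recurse, combine
```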
✅ 3. Comparison Table:

Feature           | Merge Sort                       | Quick Sort
Time (Best/Worst) | O(n log n) / O(n log n)          | O(n log n) / O(n²)
Space Complexity  | O(n)                             | O(log n) (avg)
In-place?         | No                               | Yes
Stable?           | Yes                              | No
Performance       | Slower (due to space)            | Faster in practice
Suitable for      | Linked lists, stability required | Arrays, general purpose
Why Recursion Tree Is Better

🔷 1. Merge Sort – Time Complexity Analysis

🔸 Approach:
Divide the array into two halves, sort recursively, and merge.

🔸 Work done at each level:
Each level merges n elements → O(n)
Number of levels = log₂n (as the array is divided by 2 at each step)

🔸 Total Time:
Solving this gives:
🔹 T(n) = O(n log n)

✅ Final Complexity:

Case         | Complexity
Best Case    | O(n log n)
Average Case | O(n log n)
Worst Case   | O(n log n)

🔸 Why consistent?
The merge operation always takes O(n)
Recursion depth is always log n, regardless of input order
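The same argument as a recurrence (standard form):

```latex
% Two half-size subproblems plus linear merge work:
T(n) = 2\,T(n/2) + O(n) \;\Rightarrow\; T(n) = O(n \log n)
% (log2 n levels of recursion, with O(n) total merge work per level)
```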
🔷 2. Quick Sort – Time Complexity Analysis

🔸 Approach:
Pick a pivot, partition the array, and recursively sort the subarrays.

🔸 Best Case:
The pivot divides the array into two equal parts
Recursion depth = log n
Work at each level = O(n)
🔹 T(n) = O(n log n)

🔸 Worst Case:
The pivot is always the smallest or largest element
One side has n-1 elements, the other has 0
🔹 T(n) = O(n²)
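Expanding that worst-case recurrence (standard derivation) shows where the quadratic bound comes from:

```latex
% Every partition splits into sizes n-1 and 0:
T(n) = T(n-1) + O(n) = O(n) + O(n-1) + \dots + O(1) = O(n^2)
```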
Analysis of merge sort and quick sort algorithms: https://chatgpt.com/share/68130767-ab58-8008-a51a-24a6fcb7fee0
Divide and Conquer vs Dynamic Programming

Point                      | Divide and Conquer                                                          | Dynamic Programming
1. Approach                | Breaks the problem into independent subproblems, solves them recursively.   | Breaks the problem into overlapping subproblems and solves each once.
2. Overlapping Subproblems | Usually does not reuse subproblem results.                                  | Reuses subproblem results using memoization or tabulation.
3. Examples                | Merge Sort, Quick Sort, Binary Search.                                      | Matrix Chain Multiplication, Fibonacci, Knapsack.
4. Efficiency              | May recompute subproblems multiple times, leading to higher time complexity.| Stores intermediate results, avoids recomputation, more efficient.
5. Storage Use             | Typically requires less memory (uses the recursion stack).                  | Requires extra space for tables (memoization/tabulation).

UNION BY RANK

Union by Rank is a technique used in the Disjoint Set (Union-Find) data structure to keep the tree structure small and efficient.

Each set is represented as a tree, and each node has a rank, which is a rough measure of the tree's height.

During a union operation, the root with the lower rank is attached to the root with the higher rank to avoid increasing the height unnecessarily.

If both roots have the same rank, one root is made the parent and its rank increases by 1.

This technique improves the efficiency of find() and union() operations, especially when combined with path compression, making them run in nearly constant time O(α(n)).

Example: We have 4 elements: {1, 2, 3, 4}. Each is in its own set initially.

Step 1: Initial State
Parent: 1 2 3 4
Rank:   0 0 0 0

Step 2: Union(1, 2)
Ranks are equal → make 1 the parent, and increase its rank.
Parent: 1 1 3 4
Rank:   1 0 0 0

Step 3: Union(3, 4)
Again, ranks are equal → make 3 the parent, increase its rank.
Parent: 1 1 3 3
Rank:   1 0 1 0

Step 4: Union(1, 3)
Ranks are equal → make 1 the parent, increase its rank.
Parent: 1 1 1 3
Rank:   2 0 1 0
✅ Question 1: Max Calculation using Divide-and-Conquer – 5 Points

1. Approach: This method splits the array into two halves recursively and finds the maximum of each half, then returns the greater of the two.

2. Efficiency: It reduces the number of total comparisons slightly and is efficient when implemented with parallel processing or in recursive environments.

Overall, both approaches have O(n) time complexity, but the normal (linear scan) approach is preferred in most practical cases unless there's a need for parallelism.
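A minimal sketch of the divide-and-conquer maximum (illustrative):

```python
def max_dc(arr, lo=0, hi=None):
    """Split the range in half, find each half's max, return the greater."""
    if hi is None:
        hi = len(arr) - 1
    if lo == hi:
        return arr[lo]                    # one element: it is the maximum
    mid = (lo + hi) // 2
    left_max = max_dc(arr, lo, mid)       # maximum of the left half
    right_max = max_dc(arr, mid + 1, hi)  # maximum of the right half
    return max(left_max, right_max)       # combine: greater of the two
```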
Job Sequencing with Deadlines

It is a greedy algorithm problem.

Given: a list of jobs, each with a profit and a deadline (the last time slot by which it must be completed). Each job takes 1 unit of time.

1. Sort the jobs in decreasing order of profit.
2. Consider the jobs one by one in that order.
3. For each job, try to place it in the latest available slot before its deadline.
4. If the slot is free, assign the job there; otherwise, skip the job.

The logic is simple, and it doesn't need complex recursion or DP.

🔹 Time Complexity
Sorting the jobs by profit → O(n log n)
Placing each job in a slot → worst case O(n) for each job
→ Total: O(n²)
With Disjoint Set Union (DSU) optimization for slots → can be improved to O(n log n)

Example: Suppose you have 4 jobs (A, B, C, D):

Job | Profit | Deadline
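A Python sketch of the greedy O(n²) version (the job data in the usage comment is hypothetical; the original example table did not survive extraction):

```python
def job_sequencing(jobs):
    """jobs: list of (name, profit, deadline) tuples; returns scheduled names."""
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)   # step 1: sort by profit
    max_deadline = max(d for _, _, d in jobs)
    slots = [None] * (max_deadline + 1)                     # slots 1..max_deadline
    for name, profit, deadline in jobs:                     # step 2: take in order
        for t in range(deadline, 0, -1):                    # step 3: latest free slot
            if slots[t] is None:
                slots[t] = name                             # step 4: assign it...
                break                                       # ...or skip if none free
    return [s for s in slots if s is not None]

# Hypothetical data: job_sequencing([("A", 100, 2), ("B", 19, 1),
#                                    ("C", 27, 2), ("D", 25, 1)]) -> ['C', 'A']
```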
2️⃣ N-Queen Problem (Backtracking)

Algorithm for the 8-Queen Problem (Backtracking), with N = 8:

1. Start from row 0.
o We need to place one queen per row.

2. For each column in the current row:
o Try placing the queen at (row, col).
o Ensure there are no other queens in the same column or diagonal.
o If it is safe, continue.

3. If safe, place the queen and move to the next row.

4. Recursively solve for the next row.

5. Backtrack if no safe position is found:
o If no position is found in a row, backtrack by removing the queen and trying the next column.

Time Complexity:

Worst-case time complexity of N-Queen using Backtracking: O(N!)

You place 1 queen per row
You try N columns in each row
But safe positions reduce due to diagonal and column checks
Still, in the worst case, it's like trying all possible permutations → N!

✅ For 8-Queen, the time complexity is O(8!) = 40,320 possibilities
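A compact backtracking sketch in Python (illustrative):

```python
def solve_n_queens(n):
    """One queen per row; prune placements sharing a column or diagonal."""
    solutions, cols = [], []             # cols[r] = column of the queen in row r

    def safe(row, col):
        for r, c in enumerate(cols):     # compare against queens already placed
            if c == col or abs(row - r) == abs(col - c):
                return False             # same column or same diagonal
        return True

    def place(row):
        if row == n:                     # every row filled: record a solution
            solutions.append(cols[:])
            return
        for col in range(n):             # try each column in this row
            if safe(row, col):
                cols.append(col)         # place the queen
                place(row + 1)           # recurse into the next row
                cols.pop()               # backtrack and try the next column

    place(0)
    return solutions

# len(solve_n_queens(8)) -> 92 distinct solutions
```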
Bellman-Ford algorithm: https://youtu.be/KudAWAMiQog?si=AjJHebuWDRAHR98r

The Bellman-Ford Algorithm in simple sentence-type steps (6 points):

1. Set the distance to the source node as 0 and to all other nodes as infinity.
2. Repeat the following relaxation process (V - 1) times, where V is the number of vertices.
3. For every edge (u, v) with weight w, if distance[u] + w < distance[v], update distance[v].
4. After (V - 1) passes, every shortest path has been found.
5. Run one more pass over all edges; if any distance still decreases, the graph contains a negative-weight cycle.
6. 🎯 The final distance array now has the shortest distances from the source to all other nodes.
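A direct Python translation of those steps (a sketch; vertices are numbered 0..n-1 and edges are (u, v, w) tuples):

```python
def bellman_ford(n, edges, src):
    """Returns shortest distances from src, or None on a negative cycle."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0                         # step 1: source 0, others infinity
    for _ in range(n - 1):                # step 2: relax (V - 1) times
        for u, v, w in edges:             # step 3: try to improve via each edge
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                 # step 5: one extra detection pass
        if dist[u] + w < dist[v]:
            return None                   # still improving -> negative cycle
    return dist                           # step 6: final distance array
```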
Matrix chain multiplication: https://youtu.be/JOJK7-fM2JQ?si=_HkNMjjQnYDUe_sI
Floyd-Warshall: https://youtu.be/oNI0rf2P9gE?si=AzW1eEn0odMAmeva

Heaps

A heap is a special type of binary tree that satisfies the heap property. It can be either a max heap or a min heap:

Max Heap Property: The value of each node is greater than or equal to the values of its children. The maximum value is at the root.

Min Heap Property: The value of each node is less than or equal to the values of its children. The minimum value is at the root.

The heap property ensures that for any given node i in the heap (0-indexed array representation):

For Max Heap: A[i] >= A[2i + 1] and A[i] >= A[2i + 2] (for all valid children).
For Min Heap: A[i] <= A[2i + 1] and A[i] <= A[2i + 2] (for all valid children).

The root of the heap is either the maximum or the minimum value in the tree, depending on whether it's a max heap or a min heap.

Algorithm to Build a Heap (Max Heap)

1. Heapify: The process of ensuring that the subtree rooted at an index satisfies the heap property.
2. Building the heap: To build a heap, we start from the last non-leaf node and perform a heapify operation on each node in reverse level order.

Algorithm to Build a Max Heap

Input: Array A[0...n-1]
Output: Max heap represented by array A.

1. Start from the last non-leaf node at index n//2 - 1.
2. For each node i from n//2 - 1 down to 0, call heapify(i, A) to ensure that the subtree rooted at i satisfies the max heap property.

Heapify(i, A):
1. Set largest = i, left = 2i + 1, right = 2i + 2.
2. If left < n and A[left] > A[largest], set largest = left.
3. If right < n and A[right] > A[largest], set largest = right.
4. If largest != i, swap A[i] and A[largest].
5. Recursively call heapify(largest, A) to ensure the heap property is maintained.
Comparison: Prim's vs Kruskal's Algorithm

Aspect              | Prim's Algorithm                                           | Kruskal's Algorithm
Data Structure Used | Priority queue for edge selection.                         | Disjoint set (Union-Find) to detect cycles.
Edge Selection      | Selects edges from the nodes already included in the MST.  | Selects edges from the entire graph, ensuring no cycle is formed.

Space Complexity