Tools of Algorithm Analysis

🔧 1. Asymptotic Analysis

 Chapter 3: Characterizing Running Times

o Section 3.1: O-notation, Θ-notation, and Ω-notation
o Section 3.2: Asymptotic notation: formal definitions
o Section 3.3: Standard notations and common functions
o Pages: 49–75

2. Solving Recurrence Relations

 Chapter 4: Divide-and-Conquer
o Section 4.3: The substitution method for solving recurrences
o Section 4.4: The recursion-tree method for solving recurrences
o Section 4.5: The master method for solving recurrences
o Section 4.6: Proof of the continuous master theorem (optional/advanced)
o Section 4.7: Akra-Bazzi recurrences (optional/advanced)
o Pages: 76–125

3. Time and Space Complexity

 Chapter 2: Getting Started


o Section 2.2: Analyzing algorithms (introduces time complexity)
o Section 2.3: Designing algorithms
o Pages: 25–48
 Chapter 3 (above) — also deepens the analysis of time complexity
 Appendix A (summations): foundational for time complexity analysis
o Pages: 1140–1152
 Appendix D (matrices): used in space/time analysis in some algorithms
o Pages: 1214–1226

4. Amortized Analysis

 Chapter 16: Amortized Analysis


o Section 16.1: Aggregate analysis
o Section 16.2: The accounting method
o Section 16.3: The potential method
o Section 16.4: Dynamic tables
o Pages: 448–476

Summary Table

Topic                  Chapter(s)   Page Range
Asymptotic Analysis    Ch. 3        49–75
Recurrence Relations   Ch. 4        76–125
Time/Space Complexity  Ch. 2–3, A   25–75, 1140–1152
Amortized Analysis     Ch. 16       448–476
🔧 Topic 1: Asymptotic Analysis
✅ Step-by-Step Explanation

Asymptotic analysis is used to describe the running time or space complexity of an algorithm as the input size grows towards infinity. It helps us ignore machine-dependent constants and focus on how the algorithm scales.

Step 1: What is Asymptotic Analysis?

 It’s a method to describe the efficiency of an algorithm in terms of input size n.


 It focuses on growth rate, not exact number of operations.

Step 2: Key Notations

Notation   Describes                  Meaning
O(f(n))    Upper bound (worst case)   The algorithm runs at most proportional to f(n) for large inputs
Ω(f(n))    Lower bound (best case)    The algorithm runs at least proportional to f(n)
Θ(f(n))    Tight bound                The algorithm always runs proportional to f(n), in both best and worst cases

Step 3: Formal Definitions

Let f(n) and g(n) be functions mapping positive integers to positive real numbers.

1. Big-O:
f(n) = O(g(n)) if ∃ constants c > 0 and n₀ such that:
f(n) ≤ c·g(n) for all n ≥ n₀
2. Big-Ω:
f(n) = Ω(g(n)) if ∃ constants c > 0 and n₀ such that:
f(n) ≥ c·g(n) for all n ≥ n₀
3. Big-Θ:
f(n) = Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n))
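A quick numeric spot-check can make these definitions concrete. Below is a minimal Python sketch; the choice of f, g and the witnesses c = 8, n₀ = 1 are illustrative assumptions (they match the worked Θ example later in this section):

def f(n):
    return 5 * n**2 + 2 * n + 1  # the function being bounded

def g(n):
    return n**2                  # the comparison function

c, n0 = 8, 1  # candidate witnesses for f(n) = O(g(n))

# Spot-check (evidence, not a proof): f(n) <= c*g(n) on a sample of sizes.
assert all(f(n) <= c * g(n) for n in range(n0, 10000))
print("f(n) <= 8*g(n) for all sampled n >= 1, consistent with f(n) = O(n^2)")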

Step 4: Common Growth Rates

Order        Name           Examples
O(1)         Constant       array access, x = y + 1
O(log n)     Logarithmic    binary search
O(n)         Linear         linear scan
O(n log n)   Linearithmic   merge sort, heap sort
O(n²)        Quadratic      bubble sort, insertion sort
O(2ⁿ)        Exponential    brute-force recursion (TSP)
O(n!)        Factorial      permutations

Step 5: Properties

 Constants are ignored: O(3n²) = O(n²)


 Lower-order terms are ignored: O(n² + 100n) = O(n²)

Practice Questions & Answers

✅ Q1: Simplify the expression

Problem:
Simplify this function and express it in Big-O notation:

f(n) = 3n³ + 5n log n + 20

Step 1: Identify all the terms:

 3n³ → cubic term
 5n log n → linearithmic term
 20 → constant term

Step 2: Understand Big-O principles:

 In asymptotic analysis, only the fastest-growing term matters as n → ∞.
 Ignore constants and lower-order terms.

Step 3: Pick the dominant term:

 n³ > n log n > 1
 So, 3n³ dominates the others

Step 4: Apply Big-O:

f(n) = 3n³ + 5n log n + 20 = O(n³)

✅ Final Answer: O(n³)

✅ Q2: Show that f(n) = 5n² + 2n + 1 = Θ(n²)

We want to show that this function is tightly bound by n². That means:

∃ c₁, c₂, n₀ such that: c₁·n² ≤ f(n) ≤ c₂·n² for all n ≥ n₀

Step 1: Show Upper Bound: f(n) = O(n²)

f(n) = 5n² + 2n + 1 ≤ 5n² + 2n² + n² = 8n² (for n ≥ 1)

✅ So choose:

 c₂ = 8
 n₀ = 1

⇒ f(n) ≤ 8n² for all n ≥ 1

Step 2: Show Lower Bound: f(n) = Ω(n²)

f(n) = 5n² + 2n + 1 ≥ 5n²

✅ So choose:

 c₁ = 5
 n₀ = 1

⇒ f(n) ≥ 5n² for all n ≥ 1

Step 3: Conclude both bounds hold:

5n² ≤ f(n) ≤ 8n² ⇒ f(n) = Θ(n²)

✅ Final Answer: Θ(n²)

✅ Q3: Arrange the functions in increasing order of growth

List:

 O(n)
 O(1)
 O(n log n)
 O(n²)
 O(log n)

Step-by-step comparison:

We order them by how fast they grow:

1. O(1) — constant, doesn’t grow
2. O(log n) — grows slowly
3. O(n) — linear
4. O(n log n) — grows faster than linear, slower than quadratic
5. O(n²) — quadratic, grows quickly

✅ Final Answer:
O(1) < O(log n) < O(n) < O(n log n) < O(n²)

✅ Q4: True or False

Statement:
If f(n) = O(n) and g(n) = O(n²), then f(n) + g(n) = O(n²)

Step 1: Understand the meaning:

You are adding two functions:

 One is O(n) → grows linearly
 One is O(n²) → grows quadratically

Total growth is determined by the dominant term.

Step 2: Example:

Let:

 f(n) = 5n
 g(n) = 3n²

Then:

f(n) + g(n) = 5n + 3n² = O(n²)
(because 3n² dominates as n → ∞)

✅ Final Answer: True

✅ Q5: Which function grows faster: n log n or n^1.1?

(This is already explained, but here's a concise recap)

Step 1: Take the ratio:

(n log n) / n^1.1 = (log n) / n^0.1 → 0 as n → ∞

Step 2: Interpretation:

 log n grows much slower than n^0.1
 So the whole ratio goes to 0

✅ Conclusion:

n log n = O(n^1.1) ⇒ n^1.1 grows faster
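To see the limit numerically, here is a small sketch (the sample points are arbitrary; note the ratio actually peaks near n ≈ e¹⁰ before beginning its very slow decline toward 0):

import math

# Evaluate log(n) / n**0.1, i.e., the ratio (n log n) / n**1.1.
for n in [10**3, 10**6, 10**12, 10**18, 10**24]:
    print(f"n = 10^{round(math.log10(n))}: ratio = {math.log(n) / n**0.1:.3f}")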

Topic 2: Solving Recurrence Relations


Why This Topic Matters:

Recurrence relations are equations that define a function in terms of its value on
smaller inputs. They commonly appear when analyzing the time complexity of recursive
algorithms.

For example, Merge Sort divides the array into halves:

T(n) = 2T(n/2) + n

Solving this tells us the time complexity of Merge Sort is Θ(n log n).

✅ Step-by-Step Explanation

Step 1: What Is a Recurrence Relation?


It’s an equation of the form:

T(n) = a · T(n/b) + f(n)

Where:

 a = number of subproblems
 n/b = size of each subproblem
 f(n) = cost of dividing and combining

Step 2: Techniques to Solve Recurrences

We'll study three main methods:

Method             Description
1. Substitution    Guess the answer, then prove it by induction
2. Recursion Tree  Visualize the work done at each level of recursion
3. Master Theorem  Plug into a general formula for divide-and-conquer recurrences

Technique 1: Substitution Method


Idea:

1. Guess the solution


2. Use mathematical induction to prove the guess is correct

Example 1:

T(n) = T(n − 1) + n

Step 1: Guess:

Let’s guess:

T(n) = O(n²)

Step 2: Prove using induction

Base case:
T(1) = 1 ≤ c · 1² → holds for any constant c ≥ 1

Inductive step:
Assume T(k) ≤ c · k²

Now show:

T(k + 1) = T(k) + (k + 1) ≤ c · k² + (k + 1)

We need:

c · k² + (k + 1) ≤ c · (k + 1)²

Since c · (k + 1)² = c · k² + 2ck + c, this reduces to k + 1 ≤ 2ck + c, which holds for any c ≥ 1.

✅ So, proven:

T(n) = O(n²)

Technique 2: Recursion Tree Method


Idea:

1. Draw the recursion tree


2. Add the cost at each level
3. Sum all levels for total cost

Example 2:

T(n) = 2T(n/2) + n

Step 1: Break it down:

 Level 0: 1 · n = 2⁰ · (n/2⁰) = n
 Level 1: 2 · (n/2) = 2¹ · (n/2¹) = n
 Level 2: 4 · (n/4) = 2² · (n/2²) = n
 …
 Level log n: 2^(log n) · (n/2^(log n)) = n

Number of levels = log₂ n

Step 2: Sum all levels:

T(n) = n + n + n + ⋯ + n = n · log n

✅ Final result: T(n) = Θ(n log n)

Step-by-Step Explanation of the Recursion Tree Depth

✅ Step 1: Understand the Recurrence Structure

You are given:

T(n) = 2T(n/2) + n

Which means:

 You divide the problem of size n into 2 subproblems, each of size n/2
 You spend n work combining their results

This is a classic divide-and-conquer recurrence.

✅ Step 2: Build the Recursion Tree

We construct a tree where:

 Each node represents a subproblem


 The value at each node is the amount of work done at that level (excluding
recursion)

Let's see what happens level by level:


Level 0 (top): size = n, cost = n

We start with 1 problem of size n.


So the cost at level 0 is: 1 · n = n

Level 1: two subproblems of size n/2

From the previous level, we get 2 subproblems:

 Each of size n/2
 Each does n/2 work

2 · (n/2) = n

Level 2: four subproblems of size n/4

From each n/2 come two subproblems of size n/4 → total of 4 subproblems:

4 · (n/4) = n

Continue this way:

At level i:

 There are 2ⁱ subproblems, each of size n/2ⁱ
 Total work at that level:

2ⁱ · (n/2ⁱ) = n

✅ So every level costs n work.

✅ Step 3: When Does Recursion Stop?

The recursion stops when the subproblem size becomes 1.

Let’s find the number of times we can divide n by 2 until we reach 1:


We want:

n / 2^x = 1 ⇒ 2^x = n ⇒ x = log₂ n

✅ Conclusion:

 Total number of levels in the tree is log₂ n + 1, but we often write it as Θ(log n)
 At each level, the cost is n
 So the total cost:

T(n) = n + n + ⋯ + n (log₂ n levels) = n log₂ n

✅ Final Answer:

T(n) = Θ(n log n), and the number of levels is log₂ n

Visual Summary:

Level   # of Subproblems   Size per Problem   Work per Problem   Total Work
0       1                  n                  n                  n
1       2                  n/2                n/2                n
2       4                  n/4                n/4                n
...     ...                ...                ...                ...
log n   n                  1                  1                  n
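To tie the table back to actual numbers, here is a rough sketch that evaluates the recurrence directly (assuming a base case T(1) = 1 and integer halving, details the math above glosses over) and compares it with n · log₂ n:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2*T(n/2) + n, with assumed base case T(1) = 1.
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in [2**10, 2**15, 2**20]:
    print(n, T(n), round(n * math.log2(n)))  # T(n) closely tracks n*log2(n)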

Recursion Tree for T(n) = 3T(n/2) + n

✅ Step 1: Understand the Structure

 Each recursive call creates 3 subproblems, each of size n/2


 The cost to combine them at the current level is n
This is not the same as Merge Sort anymore — it's "heavier" due to 3 recursive
branches.

✅ Step 2: Level-by-Level Breakdown

We build a tree where:

 Each node represents a subproblem.
 Each level contains many subproblems.
 Each subproblem at level i contributes a cost of n/2ⁱ

Level 0 (Root):

 1 node with cost = n

Level 1:

 Each node produces 3 nodes
 So we have 3¹ = 3 nodes
 Each of size n/2
 Total cost = 3 · (n/2) = (3/2)·n

Level 2:

 3² = 9 nodes, each of size n/4
 Total cost = 9 · (n/4) = (9/4)·n

General Pattern:

At level i:

 Number of nodes: 3ⁱ
 Size of each: n/2ⁱ
 Work per node: n/2ⁱ
 Total work at level:

3ⁱ · (n/2ⁱ) = n · (3/2)ⁱ

So the cost increases geometrically!

✅ Step 3: When Does It Stop?

As before, the recursion ends when subproblem size = 1:

n / 2^x = 1 ⇒ x = log₂ n

So the tree has log₂ n levels.

✅ Step 4: Total Work = Sum of All Levels

Now we sum all levels from i = 0 to i = log₂ n:

T(n) = Σ_{i=0}^{log₂ n} n · (3/2)ⁱ

Take n out:

T(n) = n · Σ_{i=0}^{log₂ n} (3/2)ⁱ

This is a geometric series with ratio r = 3/2 > 1

✅ Step 5: Use Formula for Geometric Series

Σ_{i=0}^{k} rⁱ = (r^(k+1) − 1) / (r − 1)

Let k = log₂ n, then:

T(n) = n · ((3/2)^(log₂ n + 1) − 1) / ((3/2) − 1)

Approximate the dominant part:

(3/2)^(log₂ n) = n^(log₂(3/2)) ≈ n^0.585

So the total cost is approximately:

T(n) = n · n^0.585 = Θ(n^1.585)

Final Summary:

Level        # Nodes   Size per Node   Total Work
0            1         n               n
1            3         n/2             (3/2)·n
2            9         n/4             (9/4)·n
…            …         …               …
k = log₂ n   3^k       n/2^k           n · (3/2)^k

✅ Final Result:

T(n) = Θ(n^(log₂ 3)) ≈ Θ(n^1.585)
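The same numeric check works for this heavier recurrence. In the sketch below (base case T(1) = 1 is assumed), the ratio T(n) / n^1.585 settles near a constant, consistent with Θ(n^(log₂ 3)):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 3*T(n/2) + n, with assumed base case T(1) = 1.
    if n <= 1:
        return 1
    return 3 * T(n // 2) + n

for n in [2**10, 2**15, 2**20]:
    print(n, round(T(n) / n**1.585, 3))  # ratio stabilizes near a constant (~3)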

Technique 3: Master Theorem


Form:
Given:

T(n) = a·T(n/b) + f(n)

Compare f(n) with n^(log_b a)

Master Theorem Cases:

Case   If f(n) is ...                                Then T(n) =
1      f(n) = O(n^(log_b a − ε))                     Θ(n^(log_b a))
2      f(n) = Θ(n^(log_b a))                         Θ(n^(log_b a) · log n)
3      f(n) = Ω(n^(log_b a + ε)), regularity holds   Θ(f(n))

Example 3:

T(n) = 3T(n/2) + n

This matches the form used in the Master Theorem:

T(n) = a · T(n/b) + f(n)

 a = 3, b = 2, f(n) = n
 log_b a = log₂ 3 ≈ 1.58496 ≈ 1.58
 n^(log₂ 3) ≈ n^1.58

Now compare:

 f(n) = n = n¹
 n¹ grows slower than n^1.58

More formally:

n = O(n^(1.58 − ε)) for some small ε > 0

This is valid because:

 If ε = 0.5, then 1.58 − 0.5 = 1.08 > 1, so n = O(n^1.08), and so on.

This matches Case 1 of the Master Theorem, which says:

If f(n) = O(n^(log_b a − ε)) for some ε > 0, then

T(n) = Θ(n^(log_b a))

✅ Final Result:
Since f(n) = n = O(n^(1.58 − ε)), we apply Case 1:

T(n) = Θ(n^(log₂ 3)) ≈ Θ(n^1.58)

Intuition:

 You're comparing how fast the "combine" step f(n) = n grows vs. the recursion depth cost.
 Because f(n) is slower than the critical function n^(log₂ 3), the total cost is dominated by the recursion, not the combine step.

f(n) = n = O(n^(1.58 − ε)) ⇒ Case 1

✅ Final Answer:

T(n) = Θ(n^(log₂ 3))
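For the common special case f(n) = Θ(n^d), the comparison can be mechanized. The helper below is a sketch under that assumption; the name master_case is hypothetical, and it deliberately ignores logarithmic factors in f and skips the explicit regularity check, which polynomial driving functions satisfy automatically:

import math

def master_case(a, b, d):
    # Classify T(n) = a*T(n/b) + Theta(n**d), for a >= 1, b > 1, d >= 0.
    crit = math.log(a, b)  # critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"Case 2: Theta(n^{d} * log n)"
    if d < crit:
        return f"Case 1: Theta(n^{crit:.3f})"
    return f"Case 3: Theta(n^{d})"

print(master_case(3, 2, 1))  # T(n) = 3T(n/2) + n   -> Case 1: Theta(n^1.585)
print(master_case(4, 2, 2))  # T(n) = 4T(n/2) + n^2 -> Case 2: Theta(n^2 log n)
print(master_case(2, 2, 2))  # T(n) = 2T(n/2) + n^2 -> Case 3: Theta(n^2)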

Practice Questions & Answers

✅ Q1: Solve using recursion tree

T(n) = 2T(n/2) + n

✅ Answer:

 Total at each level = n
 Levels = log n
 Total cost = n log n

T(n) = Θ(n log n)
✅ Q2: Solve using master theorem

T(n) = 4T(n/2) + n²

 a = 4, b = 2, log_b a = log₂ 4 = 2
 f(n) = n² = Θ(n²)

This matches Case 2

✅ So,

T(n) = Θ(n² log n)

✅ Q3: Solve this recurrence:

T(n) = T(n − 1) + 1, T(1) = 1

This is a simple recursion that builds linearly.

Unfold:

T(n) = T(n − 1) + 1 = T(n − 2) + 2 = ... = T(1) + (n − 1) = 1 + (n − 1) = n

✅ Final Answer:

T(n) = Θ(n)

✅ Q4: Use substitution to prove T(n) = T(n − 1) + n = O(n²)

We try to prove:

T(n) ≤ c · n²

Induction base:

T(1) = 1 ≤ c · 1² ⇒ c ≥ 1

Assume for k:
T(k) ≤ c · k²

Then for k + 1:

T(k + 1) = T(k) + (k + 1) ≤ c · k² + (k + 1)

Want:

c · k² + (k + 1) ≤ c · (k + 1)²

Choose c large enough (e.g., c = 2) and the inequality holds.

✅ Proven:

T(n) = O(n²)

Topic 3: Time and Space Complexity


Objective:

Understand how to:

 Measure how long an algorithm takes (Time Complexity)


 Measure how much memory it uses (Space Complexity)

Both are critical for evaluating the efficiency of an algorithm.

✅ Step-by-Step Explanation

Step 1: What Is Time Complexity?

Time complexity is the number of elementary operations (e.g., additions, comparisons) performed by an algorithm as a function of the input size n.
It answers:

“How does the running time grow as input size increases?”

Example 1: Linear Search

Search for a number x in array A of size n:

def linear_search(A, x):
    for i in range(len(A)):
        if A[i] == x:
            return i
    return -1  # not found

 Worst-case: we check all elements → n comparisons


 ✅ Time complexity: O(n)

Example 2: Nested Loops

for i in range(n):
    for j in range(n):
        print(i, j)

 Outer loop: runs n times
 Inner loop: runs n times per outer iteration
 Total operations: n · n = n²
 ✅ Time complexity: O(n²)

Step 2: What Is Space Complexity?

Space complexity measures the amount of memory used by the algorithm, including:

 Input storage
 Output storage
 Temporary variables
 Call stack (for recursion)

Example 3: Recursive Fibonacci


def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

 Recursive stack depth: n
 Each call uses constant space
 ✅ Space complexity: O(n)
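For contrast, here is a standard iterative rewrite (a common alternative, not from the text) that computes the same values without growing the call stack, so its auxiliary space drops to O(1):

def fib_iter(n):
    # Two rolling variables replace the recursion: O(n) time, O(1) extra space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_iter(10))  # 55, same as fib(10)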

Total vs. Auxiliary Space

Type             Description
Total Space      Includes input and output
Auxiliary Space  Memory used excluding input (i.e., scratch)

Step 3: Best, Worst, and Average Cases

Case      Description
Best      The minimum time taken on any input
Worst     The maximum time taken on any input
Average   Expected time over all random inputs

Most analyses use worst-case complexity (guaranteed upper bound).

Step 4: Time Complexity Classifications

Complexity   Description         Examples
O(1)         Constant time       Accessing array element
O(log n)     Logarithmic time    Binary search
O(n)         Linear time         Linear scan
O(n log n)   Linearithmic time   Merge sort, Heap sort
O(n²)        Quadratic time      Bubble sort, Matrix mult.
O(2ⁿ)        Exponential time    Brute-force TSP
O(n!)        Factorial time      Generating permutations
Step 5: Practical Rules

 Ignore constants: O(3n) = O(n)
 Keep dominant term: O(n² + n) = O(n²)
 Only worst-case unless stated otherwise

Practice Questions & Answers

✅ Q1: What is the time complexity of this code?

for i in range(n):
    for j in range(i, n):
        print(i, j)

Answer:

 Outer loop: n iterations
 Inner loop: n − i iterations, which averages to about n/2
 Total: Σ_{i=0}^{n−1} (n − i) = n(n + 1)/2 = O(n²)

✅ Final Answer: O(n²)

✅ Q2: Analyze time and space complexity of this recursive function:

def sum(n):
    if n == 0:
        return 0
    return n + sum(n - 1)

Time:

 One call per number from n down to 0 → n calls
 ✅ O(n)

Space:

 Call stack stores n frames before reaching the base case
 ✅ O(n)

✅ Q3: True or False: A loop that runs from 1 to 100 has time complexity O(1)

for i in range(100):
    print(i)

✅ Answer: True

Why? The number of operations does not depend on n. It’s constant.

✅ Q4: Compare time complexity of:

1. f₁(n) = 1000n
2. f₂(n) = n log n

Which grows faster?

Let’s analyze:

 When n is small, 1000n may be larger
 As n → ∞, n log n dominates

✅ So:

 f₁(n) = O(n)
 f₂(n) = O(n log n) (grows faster)

✅ Q5: Identify time and space complexity of Merge Sort

 Recurrence: T(n) = 2T(n/2) + n
 Depth = log n, each level costs n
 Total time: O(n log n)
 Uses an auxiliary array in each merge step → O(n) space
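A compact top-down merge sort sketch (one standard implementation; its recursion structure matches the recurrence above) makes both bounds visible in code:

def merge_sort(A):
    # Recursion depth: about log2(n); each depth level does O(n) merge work.
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left, right = merge_sort(A[:mid]), merge_sort(A[mid:])
    # The merge builds an O(n) auxiliary list -> O(n) extra space overall.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]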
Topic 4: Amortized Analysis
Objective:

Amortized analysis provides the average cost per operation over a sequence of
operations — even when individual operations can be expensive.

It’s different from:

 Worst-case analysis (analyzing a single worst operation),


 Average-case analysis (based on probability).

✅ Step-by-Step Explanation

Step 1: Why Amortized Analysis?

Imagine a dynamic array (like a Python list or C++ vector) that:

 Doubles its size when full.


 Most insertions take constant time, but some take longer when resizing is
needed.

We use amortized analysis to show that:

Even though some operations are costly, the average cost per operation is still small.

Step 2: Amortized vs Worst-Case

Operation                 Cost (worst-case)      Amortized Cost
Insert in dynamic array   O(n) (during resize)   O(1)
Stack operations          O(1)                   O(1)
Table expansion           O(n) (copying)         O(1) (amortized)
Step 3: Amortized Analysis Methods

Method          Description
1. Aggregate    Total cost of n operations ÷ n
2. Accounting   Assign virtual "charges" (credits) per operation
3. Potential    Define a potential function representing stored energy

Let’s explore each with an example.

Method 1: Aggregate Analysis


Example: Appending to a Dynamic Array

You start with an array of size 1. Every time it fills up, you double its size and copy all
elements.

Let’s analyze the cost of n appends.

Total Cost:

 Insertions without resizing → cost = 1


 Insertions with resizing:
o At size 1 → copy 1
o At size 2 → copy 2
o At size 4 → copy 4
o …
o Total copies = 1 + 2 + 4 + 8 + ⋯ + n/2 = n − 1

So total cost for n operations =

n (simple inserts) + (n − 1) (copies) = 2n − 1

✅ Amortized cost per insertion:

(2n − 1) / n = O(1)
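A small simulation (a sketch; it assumes the capacity starts at 1 and doubles exactly when full, as above) reproduces the 2n − 1 total:

def total_append_cost(n):
    # 1 unit per insert, plus 1 unit per element copied during each resize.
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # array is full: double it and copy everything
            cost += size
            capacity *= 2
        cost += 1              # the insert itself
        size += 1
    return cost

for n in [4, 64, 1024]:
    print(n, total_append_cost(n), total_append_cost(n) / n)  # 2n - 1; ratio -> 2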
Method 2: Accounting Method
We overcharge cheap operations and save credits for expensive ones.

Example: Dynamic Array (again)

Let’s charge:

 3 units for each insertion


o 1 unit for actual insert
o 2 units saved (credits) for future copies

Whenever resizing occurs:

 We have enough credits stored to pay for the copy

✅ Result:
Every operation is charged 3 units, so:

O(1) amortized cost
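One way to convince yourself that 3 units suffice is to track the credit balance directly. The sketch below follows the charging scheme above on the same doubling array and asserts the bank never goes negative:

def simulate_accounting(n, charge=3):
    # Each insert pays `charge` units: 1 for itself, the rest banked as credit.
    # Every resize spends 1 banked credit per copied element.
    capacity, size, bank = 1, 0, 0
    for _ in range(n):
        bank += charge - 1     # credit left after paying for the insert itself
        if size == capacity:   # resize: copying must be covered by the bank
            bank -= size
            capacity *= 2
        assert bank >= 0, "charge too small to prepay the copies"
        size += 1
    return bank

print(simulate_accounting(10**5))  # finishes with a non-negative balance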

Method 3: Potential Method


You define a potential function Φ that represents the "energy" stored in the system.

Amortized cost of an operation:

Amortized Cost = Actual Cost + ΔΦ

Where:

ΔΦ = Φafter − Φbefore

Example: Stack with MULTIPOP

PUSH(x): adds element to top


POP(): removes top
MULTIPOP(k): pops up to k elements
Worst-case for MULTIPOP = O(n)

But if we do n operations total:

 Each PUSH increases size by 1


 Each POP/MULTIPOP decreases size

Define potential:

Φ = stack size

Total amortized cost ≤ number of PUSHes = O(n)

✅ So:

Amortized cost per operation = O(1)
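The argument can be checked with a counting sketch (the 30/70 operation mix below is arbitrary): however MULTIPOP is interleaved, the total actual cost cannot exceed twice the number of operations, because each pushed element is popped at most once:

import random

def average_stack_cost(num_ops, seed=0):
    # Actual cost model: PUSH costs 1; MULTIPOP(k) costs the k items removed.
    random.seed(seed)
    stack, total_cost = [], 0
    for _ in range(num_ops):
        if stack and random.random() < 0.3:
            k = random.randint(1, len(stack))  # MULTIPOP(k)
            del stack[-k:]
            total_cost += k
        else:                                  # PUSH
            stack.append(1)
            total_cost += 1
    return total_cost / num_ops

print(average_stack_cost(100000))  # stays <= 2, i.e., O(1) per operation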

Practice Questions & Answers

✅ Q1: A dynamic array doubles its size when full. What is the amortized cost
of n appends?

Answer:

 Total copying cost = O(n)
 Appends = O(n)
 Total = O(n)

✅ Amortized cost per append:

O(1)

✅ Q2: What is the amortized cost of performing n stack PUSH and


MULTIPOP operations?

 Each element is pushed once


 Each can be popped at most once

✅ So:
O(1) per operation

✅ Q3: Use accounting method. Suppose a table grows by tripling instead of
doubling. What should we charge per insertion to keep the cost amortized O(1)?

Solution Sketch:

 Copying cost when resizing:
o At size 1 → copy 1
o At size 3 → copy 3
o At size 9 → copy 9
o Total copies over n inserts = 1 + 3 + 9 + ⋯ < (3/2)n = O(n)

Charge 4 or 5 units per insert:

 1 unit for the insert


 Others saved as credits

✅ With a proper charge plan, still:

O(1) amortized

✅ Q4: True or False: In amortized analysis, we must consider probabilistic


input distributions.

Answer:
❌ False

That’s average-case analysis.


Amortized analysis considers all possible sequences, not random inputs.

✅ Q5: Why is amortized analysis helpful?

Answer:

 Shows that occasional expensive operations don't ruin overall performance


 Helps analyze data structures like dynamic arrays, stacks, queues, splay trees

✅ Example: Append in dynamic array

Summary Table

Method       Concept                             Result Type
Aggregate    Total cost / number of ops          Average
Accounting   Prepay expensive operations         Average
Potential    Use energy function to track cost   Flexible

Summary Sheet: Tools of Algorithm Analysis

1. Asymptotic Analysis

Notation   Meaning                               Describes...
O(f(n))    Upper bound (worst-case or ceiling)   Max runtime growth
Θ(f(n))    Tight bound (both O and Ω)            Exact growth rate
Ω(f(n))    Lower bound (best-case or floor)      Minimum guaranteed growth

Key Concepts:
 Ignore constants and lower-order terms
 Focus on dominant term
 Compare algorithms using asymptotic behavior

Example:

3n² + 4n + 7 = O(n²)

2. Solving Recurrence Relations

Form: T(n) = a·T(n/b) + f(n)

Method           When to Use
Substitution     You can guess the solution and prove it
Recursion Tree   You want to visualize cost at each level
Master Theorem   The recurrence is divide-and-conquer type

Master Theorem Cases: T(n) = a·T(n/b) + f(n)

Case   Condition                                Result
1      f(n) = O(n^(log_b a − ε))                Θ(n^(log_b a))
2      f(n) = Θ(n^(log_b a))                    Θ(n^(log_b a) · log n)
3      f(n) = Ω(n^(log_b a + ε)) & regularity   Θ(f(n))

3. Time and Space Complexity

Complexity Class   Description    Example Algorithms
O(1)               Constant       Array access
O(log n)           Logarithmic    Binary search
O(n)               Linear         Linear search
O(n log n)         Linearithmic   Merge sort, heap sort
O(n²)              Quadratic      Bubble sort
O(2ⁿ)              Exponential    Subset generation

Space Complexity:

 Measures extra memory used (not including input)
 Recursive algorithms often use stack space proportional to the recursion depth, i.e., O(depth)
4. Amortized Analysis

Method       Description
Aggregate    Total cost over all operations ÷ count
Accounting   Overcharge cheap ops to cover costly ones
Potential    Use a function to track "energy" saved

Use Cases:

 Dynamic arrays
 Stacks with MULTIPOP
 Incremental algorithms

Example:
Appending n elements to a dynamic array with doubling capacity →
✅ Total time O(n), so amortized time per insert = O(1)

Practice Problems (All Topics Combined)

✅ Q1: Simplify:

T(n) = T(n/2) + log n

Solution:
Using recursion tree or Master Theorem:

 a = 1, b = 2, f(n) = log n
 Compare with n^(log₂ 1) = n⁰ = 1
 log n = Ω(1), but it is not polynomially larger than n⁰ (not Ω(n^ε) for any ε > 0), so the basic Master Theorem cases don't apply

✅ So use recursion tree:

 Each level: log n, log(n/2), log(n/4), ..., log 1
 Sum: log n + log(n/2) + ... + 1 = O(log² n)

Answer: T(n) = O(log² n)
✅ Q2: Determine time complexity of this code:

for i in range(1, n):
    j = i
    while j > 1:
        j = j // 2

Solution:

 Inner loop runs about log₂ i times
 Total work: Σ_{i=1}^{n} log i = log(1 · 2 · ... · n) = log(n!) = O(n log n)

Answer: O(n log n)

✅ Q3: Solve using Master Theorem:

T(n) = 3T(n/4) + n log n

 a = 3, b = 4
 f(n) = n log n, compare with n^(log₄ 3) ≈ n^0.792

Since:

n log n = Ω(n^(0.792 + ε)) for ε = 0.1

And the regularity condition holds, apply Case 3:

Answer: T(n) = Θ(n log n)

✅ Q4: A dynamic array doubles its size when full. What’s the amortized cost
of insert?

Solution:

 Cost of n insertions = O(n)
 Amortized = O(1)
✅ Q5: Time and space complexity of this function:

def countDown(n):
    if n == 0:
        return
    print(n)
    countDown(n - 1)

Solution:

 Time: 1 call per number → O(n)
 Space: recursive stack → O(n)

✅ Q6: Find Big-O:

3n² + 5n log n + 7 = O(?)

Solution:
Dominant term = n²

Answer: O(n²)

✅ Q7: Use accounting method: Stack allows PUSH, POP, and MULTIPOP(k).
Amortized cost?

Solution:

 Each item can be popped only once
 Amortized cost = O(1) for all ops

✅ Q8: Determine time complexity:

def rec(n):
    if n <= 1:
        return
    rec(n - 1)
    rec(n - 1)

Solution:

 Tree recursion with 2 branches each time:

T(n) = 2T(n − 1) ⇒ T(n) = O(2ⁿ)

Answer: O(2ⁿ)

Here is a carefully crafted set of True/False and Multiple Choice Questions (MCQs)
covering all four topics in Tools of Algorithm Analysis:

✅ Section 1: True or False Questions


Each question is followed by its correct answer and a brief justification.

T/F 1:

The time complexity of accessing an element in an array is O(n).

❌ False
✅ Accessing an array element by index is O(1).

T/F 2:

If f(n) = O(n²), then f(n) = O(n³) is also valid.

✅ True
Any function that is O(n²) is also O(n³), because n³ grows faster.

T/F 3:

The Master Theorem can be used to solve T(n) = T(n − 1) + n.

❌ False
✅ Master Theorem only applies to divide-and-conquer forms like

T(n) = a·T(n/b) + f(n)

T/F 4:

In amortized analysis, the cost of the worst single operation determines the overall
complexity.

❌ False
✅ Amortized analysis averages over a sequence, not based on a single worst-case.

T/F 5:

The recurrence T(n) = 2T(n/2) + n has a time complexity of Θ(n log n).

✅ True
It matches Case 2 of the Master Theorem.

T/F 6:

If an algorithm has O(n log n) time complexity, then it must be faster than an O(n²)
algorithm on every input.

❌ False
✅ For small n, the O(n²) algorithm may be faster. Asymptotic notation is about large-n behavior.

T/F 7:

In a recursion tree, the number of levels is always log₂ n for all divide-and-conquer
algorithms.

❌ False
✅ Only when the subproblem size is divided by 2. If it's n/3, then it's log₃ n, etc.
T/F 8:

A single expensive operation can make the average amortized cost high.

❌ False
✅ If the expensive operation is rare and distributed across cheap ones, the amortized
cost can remain low.

✅ Section 2: Multiple Choice Questions (MCQs)


Each question has 4 options, with the correct one marked and explained.

MCQ 1:

Which of the following is not asymptotic notation?

A. Θ(n)
B. Ω(n)
C. O(n)
D. Δ(n)

✅ Answer: D
Δ(n) is not an asymptotic notation.

MCQ 2:

Given T(n) = 4T(n/2) + n, what is the time complexity?

A. Θ(n log n)
B. Θ(n²)
C. Θ(n)
D. Θ(log n)

✅ Answer: B
a = 4, b = 2, so n^(log_b a) = n². Since f(n) = n = O(n^(2−ε)), it's Case 1: T(n) = Θ(n²).

MCQ 3:

What is the time complexity of binary search?

A. O(n)
B. O(log n)
C. O(n log n)
D. O(1)

✅ Answer: B
Binary search halves the array every step → log₂ n comparisons.

MCQ 4:

Which operation has amortized cost O(1)?

A. Appending to a dynamic array


B. Matrix multiplication
C. Sorting with merge sort
D. Binary search

✅ Answer: A
Dynamic arrays resize occasionally, but appending is O(1) amortized.

MCQ 5:

Which of the following must be true about a function f(n) = O(n²)?

A. It grows faster than any O(n log n) function
B. It is the slowest-growing polynomial
C. It grows no faster than some constant times n²
D. It always dominates all linear functions

✅ Answer: C
O(n²) means: there exists a constant c such that f(n) ≤ c · n² for large n.

MCQ 6:

Which is an example where amortized analysis is most useful?

A. Linear search
B. Counting sort
C. Stack with MULTIPOP
D. Quicksort

✅ Answer: C
MULTIPOP can be expensive occasionally but cheap on average.

MCQ 7:

Which recurrence matches Case 2 of the Master Theorem?

A. T(n) = 2T(n/2) + log n
B. T(n) = 2T(n/2) + n
C. T(n) = 2T(n/2) + n^0.5
D. T(n) = 2T(n/2) + n²

✅ Answer: B
Case 2 requires f(n) = Θ(n^(log_b a)). Here, a = 2, b = 2, so n^(log₂ 2) = n and f(n) = n = Θ(n).

MCQ 8:

What is the space complexity of merge sort?

A. O(1)
B. O(log n)
C. O(n)
D. O(n log n)

✅ Answer: C
Merge sort requires a temporary array for merging → uses O(n) auxiliary space.

Here's a second set of advanced-level True/False and Multiple Choice Questions (MCQs) for
graduate-level learners. These questions go beyond memorization and require deeper
understanding, analysis, and abstraction.

✅ Advanced Graduate-Level: True or False

T/F 1:

If an algorithm has amortized complexity O(1), then its worst-case time complexity for
every individual operation must also be O(1).

❌ False
✅ Amortized complexity allows some operations to be expensive, as long as the
average over all operations is O(1).

T/F 2:

If f(n) = o(g(n)), then f(n) = O(g(n)) must also be true.

✅ True
✅ Little-o: f(n) = o(g(n)) implies that lim(n→∞) f(n)/g(n) = 0, which means f(n) ∈ O(g(n)).
T/F 3:

The recurrence ( ) = (√ ) + 1 has time complexity (log⁡ ).

❌ False
✅ Let’s solve: The number of steps until nnn becomes 1 is log⁡log⁡
( ) = (√ ) + 1 ⇒ ( ) = Θ(log⁡log⁡ )
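A tiny sketch (stopping at 2, per the convention above) counts the repeated square roots and compares against log₂ log₂ n:

import math

def sqrt_steps(n):
    # How many square roots until n drops to 2 (or below)?
    steps = 0
    while n > 2:
        n = math.sqrt(n)
        steps += 1
    return steps

for k in [4, 16, 64, 256]:
    print(f"n = 2^{k}: steps = {sqrt_steps(2.0**k)}, log2(log2(n)) = {math.log2(k)}")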

T/F 4:

Potential method in amortized analysis allows the cost of an operation to be negative.

✅ True
✅ If the potential function decreases, then ΔΦ < 0, resulting in negative amortized
cost.

T/F 5:

The Master Theorem cannot be applied to recurrences where the subproblem size is not
a fixed fraction of n, e.g., T(n) = T(n − 1) + f(n).

✅ True
✅ The Master Theorem only works for divide-and-conquer forms T(n) = a·T(n/b) + f(n), not linear recursions.

✅ Advanced Graduate-Level: MCQs

MCQ 1:

Which of the following recurrences cannot be solved directly by the Master Theorem?

A. T(n) = 4T(n/2) + n
B. T(n) = 2T(n/2) + log n
C. T(n) = T(n − 1) + n
D. T(n) = 8T(n/2) + n³

✅ Answer: C
T(n) = T(n − 1) + n is not of the form aT(n/b) + f(n) required by the Master Theorem.
MCQ 2:

Let T(n) = 3T(n/2) + n. What is the time complexity?

A. O(n log n)
B. O(n^(log₂ 3))
C. O(n²)
D. O(n)

✅ Answer: B
log₂ 3 ≈ 1.585, so T(n) = Θ(n^1.585)

MCQ 3:

Which of the following is not a valid technique for solving recurrence relations?

A. Master Theorem
B. Recursive Tree Expansion
C. Recursion Elimination
D. Potential Function

✅ Answer: D
Potential Function is used in amortized analysis, not for solving recurrences.

MCQ 4:

Let T(n) = T(n/2) + T(n/4) + n. What is the time complexity?

A. O(n)
B. O(n log n)
C. O(n log log n)
D. O(n^(log₂ 3))

✅ Answer: A
The per-level costs form a decreasing geometric series, n, (3/4)n, (9/16)n, ..., which sums to at most 4n; the recursion tree (or the Akra-Bazzi method) gives Θ(n).

MCQ 5:
Which best describes the amortized time of incrementing a binary counter from 0 to n?

A. O(n log n)
B. O(n)
C. O(n log log n)
D. O(log n)

✅ Answer: B
Each bit flips once for every power of two → total work for n increments is O(n)
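This is easy to verify empirically; the sketch below counts every bit flip while incrementing a counter (Python integers stand in for the bit array):

def total_bit_flips(n):
    # Count all bits that change while incrementing a counter from 0 to n.
    flips, counter = 0, 0
    for _ in range(n):
        flips += bin(counter ^ (counter + 1)).count("1")  # differing bits flip
        counter += 1
    return flips

for n in [10, 1000, 100000]:
    print(n, total_bit_flips(n), total_bit_flips(n) / n)  # ratio stays below 2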

MCQ 6:

Let T(n) = 2T(n/2) + n log n. Choose the closest tight bound.

A. O(n log n)
B. O(n (log n)²)
C. O(n²)
D. O(n √(log n))

✅ Answer: B
Use recursion tree:

 Level i costs 2ⁱ · (n/2ⁱ) log(n/2ⁱ) = n (log n − i): n log n, n log(n/2), n log(n/4), …
 Summing over the log n levels gives ≈ n (log n)²

MCQ 7:

Which of the following is most appropriate for analyzing an amortized complexity using
energy stored in the data structure?

A. Recursive decomposition
B. Aggregated analysis
C. Potential method
D. Divide and conquer

✅ Answer: C
Potential method assigns energy via a potential function.
MCQ 8:

The recurrence T(n) = T(n − 1) + 1 has time complexity:

A. O(log n)
B. O(n)
C. O(n log n)
D. O(2ⁿ)

✅ Answer: B
Linear recurrence: one call per level → total work = O(n)
