Module 1
1 Analyse the complexity of the following function (3)
void function(int n)
{
    int count = 0;
    for (int i = n/2; i <= n; i++)
        for (int j = 1; j <= n; j = 2 * j)
            for (int k = 1; k <= n; k = k * 2)
                count++;
}
Answer: The time complexity of the above function is O(n log^2 n).
void function(int n)
{
    int count = 0;
    for (int i = n/2; i <= n; i++)              /* executes O(n/2) times */
        for (int j = 1; j <= n; j = 2 * j)      /* executes O(log n) times */
            for (int k = 1; k <= n; k = k * 2)  /* executes O(log n) times */
                count++;                        /* executes O(n/2 * log n * log n) times in total */
}
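As a quick sanity check (a hypothetical driver, not part of the question), the innermost statement can be counted and compared against (n/2 + 1) * (floor(log2 n) + 1)^2:

#include <stdio.h>
#include <math.h>

/* Count how many times the innermost statement runs. */
long long count_ops(int n)
{
    long long count = 0;
    for (int i = n / 2; i <= n; i++)
        for (int j = 1; j <= n; j = 2 * j)
            for (int k = 1; k <= n; k = k * 2)
                count++;
    return count;
}

int main(void)
{
    for (int n = 16; n <= 4096; n *= 4) {
        long long logs = (long long)floor(log2((double)n)) + 1; /* j and k loops each run this many times */
        long long outer = n - n / 2 + 1;                        /* i loop */
        printf("n=%5d  actual=%10lld  (n/2+1)*(log2(n)+1)^2=%10lld\n",
               n, count_ops(n), outer * logs * logs);
    }
    return 0;
}

For powers of two the two columns agree exactly, matching the O(n log^2 n) bound.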
2 Solve using Iteration method: T(n) = 2T(n/2) + n, T(1) = 1 (3)
Answer:
T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n
     = 2^2 T(n/4) + n + n = 2^2 T(n/4) + 2n
     = 2^2 [2T(n/8) + n/4] + 2n
     = 2^3 T(n/8) + n + 2n = 2^3 T(n/8) + 3n
     ...
After k substitutions:
     = 2^k T(n/2^k) + kn   --- eq. (1)
The sub-problem size reaches 1 when n/2^k = 1,
i.e. 2^k = n, so k = log2 n.
Then eq. (1) becomes:
T(n) = nT(1) + n log n
     = n*1 + n log n
     = O(n log n).
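The closed form can be checked numerically; the sketch below (a hypothetical driver, assuming n is a power of two) evaluates the recurrence directly and compares it with n*log2(n) + n:

#include <stdio.h>

/* Direct evaluation of T(n) = 2T(n/2) + n, T(1) = 1 (n a power of two). */
long long T(long long n)
{
    if (n == 1) return 1;
    return 2 * T(n / 2) + n;
}

int main(void)
{
    for (long long n = 1, k = 0; n <= (1 << 20); n <<= 1, k++)
        printf("n=2^%-2lld  T(n)=%10lld  n*log2(n)+n=%10lld\n",
               k, T(n), n * k + n);
    return 0;
}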
3 Analyse the complexity of the following functions (4)
i) void function(int n)
{
    if (n == 1) return;
    for (int i = 1; i <= n; i++)
    {
        for (int j = 1; j <= n; j++)
        {
            printf("*");
            break;
        }
    }
}
Answer: The time complexity of the above function is O(n). The inner loop is bounded by n, but
because of the break statement its body executes only once per iteration of the outer loop.
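A hypothetical instrumented variant (the counter stands in for the printf call) makes the effect of the break visible:

#include <stdio.h>

/* Same loops as above, but counting how often the inner body runs. */
long long body_runs(int n)
{
    long long runs = 0;
    if (n == 1) return runs;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            runs++;        /* stands in for printf("*") */
            break;         /* inner loop exits after one pass */
        }
    }
    return runs;           /* always exactly n for n > 1 */
}

int main(void)
{
    for (int n = 10; n <= 100000; n *= 10)
        printf("n=%6d  body executed %lld times\n", n, body_runs(n));
    return 0;
}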
ii) void function(int n)
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Answer: The time complexity of the above function is O(√n).
The successive values of s satisfy the relation s_i = s_(i-1) + i. The value of i increases by
one in each iteration, so the value held in s after the i-th iteration is the sum of the first i
positive integers. If k is the total number of iterations taken by the program, the while loop
terminates when 1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
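The √n bound can be observed directly; the following sketch (a hypothetical driver) counts the loop's iterations and compares them with √(2n):

#include <stdio.h>
#include <math.h>

/* Count iterations of: i = 1, s = 1; while (s <= n) { i++; s += i; } */
long long iterations(long long n)
{
    long long i = 1, s = 1, iters = 0;
    while (s <= n) {
        i++;
        s += i;
        iters++;
    }
    return iters;
}

int main(void)
{
    for (long long n = 100; n <= 100000000; n *= 100)
        printf("n=%10lld  iterations=%6lld  sqrt(2n)=%8.1f\n",
               n, iterations(n), sqrt(2.0 * (double)n));
    return 0;
}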
4 Solve using Recursion Tree method (5)
T(n) = 3T(n/4) + n^2
The fully expanded tree has height log4 n (it has log4 n + 1 levels).
Sub-problem size at depth i = n/4^i.
The sub-problem size is 1 when n/4^i = 1, i.e. i = log4 n.
So, the number of levels = 1 + log4 n.
Cost of each level = (number of nodes) x (cost of each node). There are 3^i nodes at depth i, each costing (n/4^i)^2, so level i costs 3^i * (n/4^i)^2 = (3/16)^i * n^2.
Summing over all levels: T(n) = n^2 * [1 + 3/16 + (3/16)^2 + ...] <= n^2 * 1/(1 - 3/16) = (16/13) n^2 = O(n^2).
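A numeric check of this geometric sum (a hypothetical driver; T(1) = 1 and n a power of 4 are assumptions made here) shows the ratio T(n)/n^2 approaching 16/13 ≈ 1.2308:

#include <stdio.h>

/* Evaluate T(n) = 3T(n/4) + n*n with T(1) = 1, for n a power of 4. */
double T(double n)
{
    if (n <= 1) return 1;
    return 3 * T(n / 4) + n * n;
}

int main(void)
{
    /* The ratio T(n)/n^2 should approach 16/13 = 1.2307... */
    for (double n = 4; n <= (1 << 24); n *= 4)
        printf("n=%10.0f  T(n)/n^2=%.6f\n", n, T(n) / (n * n));
    return 0;
}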
5 Define the terms Best case, Worst case and Average case time complexities.
There are, in general, 3 cases to consider for the complexity function f(n):
1. Best case: the minimum running time over all possible inputs of a given size.
2. Worst case: the maximum running time over all possible inputs of a given size.
3. Average case: the expected running time over all possible inputs of a given size; its value lies between the minimum and the maximum.
Best Case – 1 Mark, Worst Case – 1 Mark, Average Case – 1 Mark
6 What is the smallest value of n such that an algorithm whose running time is 100n^2
runs faster than an algorithm whose running time is 2^n on the same machine?
A = 100n^2, B = 2^n
Let's start checking from n = 1 and go up through values of n that are powers of 2.
n=1 ⇒ 100×1^2 = 100 > 2^1 ; n=2 ⇒ 100×2^2 = 400 > 2^2 ; n=4 ⇒ 100×4^2 = 1600 > 2^4
n=8 ⇒ 100×8^2 = 6400 > 2^8 ; n=16 ⇒ 100×16^2 = 25600 < 2^16
Somewhere between 8 and 16, A starts to run faster than B. Let's continue as before, but now
repeatedly trying the middle value of the range (binary search).
n = (8+16)/2 = 12 ⇒ 100×12^2 = 14400 > 2^12
n = (12+16)/2 = 14 ⇒ 100×14^2 = 19600 > 2^14
n = (14+16)/2 = 15 ⇒ 100×15^2 = 22500 < 2^15
So, at n = 15, A starts to run faster than B.
Minimum value of n = 15 – 1 Mark. Steps – 2 Marks
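The same search can also be done by brute force; a minimal sketch (hypothetical, using 64-bit arithmetic):

#include <stdio.h>

int main(void)
{
    /* Find the smallest n with 100*n^2 < 2^n by direct search. */
    for (long long n = 1; n <= 40; n++) {
        if (100 * n * n < (1LL << n)) {
            printf("smallest n = %lld\n", n);   /* prints 15 */
            break;
        }
    }
    return 0;
}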
7 a) Determine the time complexities of the following two functions fun1() and fun2():
int fun1(int n)
{
if (n <= 1) return n;
return 2*fun1(n-1);
}
int fun2(int n)
{
if (n <= 1) return n;
return fun2(n-1) + fun2(n-1);
}
Answer:
fun1 – O(n) – 1 Mark
fun2 – O(2^n) – 1 Mark
Time complexity of fun1() can be written as
T(n) = T(n-1) + C, which is O(n).
Time complexity of fun2() can be written as
T(n) = 2T(n-1) + C, which is O(2^n).
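A hypothetical instrumented version (global call counters added for illustration) shows the linear versus exponential growth in the number of calls:

#include <stdio.h>

static long long calls1, calls2;

int fun1(int n)
{
    calls1++;
    if (n <= 1) return n;
    return 2 * fun1(n - 1);           /* one recursive call: T(n) = T(n-1) + C */
}

int fun2(int n)
{
    calls2++;
    if (n <= 1) return n;
    return fun2(n - 1) + fun2(n - 1); /* two recursive calls: T(n) = 2T(n-1) + C */
}

int main(void)
{
    for (int n = 5; n <= 25; n += 5) {
        calls1 = calls2 = 0;
        fun1(n); fun2(n);
        printf("n=%2d  fun1 calls=%3lld  fun2 calls=%10lld\n", n, calls1, calls2);
    }
    return 0;
}

Both functions return the same value (2^(n-1) for n >= 1), but fun1 makes n calls while fun2 makes 2^n - 1.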
b) Find the solution to the recurrence equation using iteration method:
T(2^k) = 3T(2^(k-1)) + 1, T(1) = 1
T(2^k) = 3T(2^(k-1)) + 1
       = 3^2 T(2^(k-2)) + 1 + 3
       = 3^3 T(2^(k-3)) + 1 + 3 + 9
       ... (after k steps of recursion, i.e. the full recursion depth)
       = 3^k T(2^(k-k)) + (1 + 3 + 9 + 27 + ... + 3^(k-1))
       = 3^k + (3^k - 1)/2
       = (2*3^k + 3^k - 1)/2
       = (3*3^k - 1)/2
       = (3^(k+1) - 1)/2 = O(3^k).
Solution – 1 Mark, Steps – 2 Marks.
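The closed form can be checked by iterating the recurrence directly (a hypothetical driver; t holds T(2^k)):

#include <stdio.h>

int main(void)
{
    /* Iterate T(2^k) = 3*T(2^(k-1)) + 1 with T(2^0) = T(1) = 1
       and compare against the closed form (3^(k+1) - 1) / 2. */
    long long t = 1, pow3 = 3;                 /* T(2^0) and 3^1 */
    for (int k = 1; k <= 20; k++) {
        t = 3 * t + 1;
        pow3 *= 3;                             /* now 3^(k+1) */
        printf("k=%2d  T(2^k)=%12lld  (3^(k+1)-1)/2=%12lld\n",
               k, t, (pow3 - 1) / 2);
    }
    return 0;
}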
c) Solve the recurrence using recursion tree method:
T(1) = 1, T(n) = 3T(n/4) + cn^2
Level i of the recursion tree contributes (3/16)^i * cn^2, so summing over the levels:
cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + ... <= cn^2 * 1/(1 - 3/16) = O(n^2)
Solution – 1 Mark. Recursion tree with minimum 3 levels – 3 Marks
8 a) Determine the best case and worst-case time complexity of the following function:
void fun(int n, int arr[])
{
int i = 0, j = 0;
for(; i < n; ++i)
while(j < n && arr[i] < arr[j])
j++;
}
Answer:
Both are O(n).
Best case expression – 1.5 Marks, Worst case expression – 1.5 Marks
At first glance the time complexity seems to be O(n^2) due to the two nested loops. However, the
variable j is not re-initialized for each value of i, so across all iterations of the outer loop
the inner while loop body runs at most n times in total. Hence both the best case and the worst case are O(n).
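A hypothetical instrumented copy of fun() (operation counters and the descending/ascending test arrays are added here for illustration) confirms the bound:

#include <stdio.h>

/* Count loop-body executions of fun() to observe the O(n) bound. */
long long fun_ops(int n, const int arr[])
{
    long long ops = 0;
    int i = 0, j = 0;
    for (; i < n; ++i) {
        ops++;                                  /* outer loop body */
        while (j < n && arr[i] < arr[j]) {
            j++;
            ops++;                              /* inner loop body */
        }
    }
    return ops;                                 /* never exceeds 2n */
}

int main(void)
{
    enum { N = 1000 };
    int desc[N], asc[N];
    for (int x = 0; x < N; x++) {
        desc[x] = N - x;                        /* descending input */
        asc[x] = x;                             /* ascending input: while test fails at once */
    }
    printf("descending: %lld ops, ascending: %lld ops (n=%d)\n",
           fun_ops(N, desc), fun_ops(N, asc), N);
    return 0;
}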
9 Is 2^(n+1) = O(2^n)? Is 2^(2n) = O(2^n)? Justify your answer. (3)
Answer:
Yes. Justification: give the constants c and n0 required by the definition; for example, c = 2, n0 = 1
is one solution. 2^(n+1) = O(2^n) requires 2^(n+1) <= c*2^n; with c = 2 we get 2^(n+1) = 2*2^n, which
holds (with equality) for all n >= n0, where n0 >= 1. (1.5 Marks)
No. Justification: show that no constant can work for every n. Suppose 2^(2n) = O(2^n). Then there is a
constant c > 0 such that 2^(2n) = 2^n * 2^n <= c*2^n, i.e. 2^n <= c, for all sufficiently large n.
Since 2^n is unbounded, no such c can exist. (1.5 Marks)
10 State Master's Theorem. Find the solution to the following recurrence equation (3)
using Master's theorem.
Master's Theorem (extended form; 1 mark): for T(n) = aT(n/b) + θ(n^k log^p n), with constants a >= 1 and b > 1:
Case 1: if a > b^k, then T(n) = θ(n^(log_b a)).
Case 2: if a = b^k and p > -1, then T(n) = θ(n^(log_b a) * log^(p+1) n).
Case 3: if a < b^k and p >= 0, then T(n) = θ(n^k log^p n).
a) T(n) = 2T(n/2) + n log n (1)
Solution:
We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).
Then we have a = 2, b = 2, k = 1, p = 1.
Now, a = 2 and b^k = 2^1 = 2.
Clearly, a = b^k, so we follow Case 2.
Since p = 1, we have T(n) = θ(n^(log_b a) * log^(p+1) n)
T(n) = θ(n^(log_2 2) * log^(1+1) n)
Thus, T(n) = θ(n log^2 n).
11 a) T(n) = 2^n T(n/2) + n^n (1)
The Master method does not apply, since a = 2^n is not a constant.
12 Analyse the complexity of the following program (3)
main()
{
    for (int i = 1; i <= n; i = i * 2)
        sum = sum + i + func(i);
}

void func(int m)
{
    for (int j = 1; j <= m; j++)
        /* statement with O(1) complexity */ ;
}
Answer:
O(n).
The for loop in main() executes log2 n times.
Hence the loop in func() executes 2^0, 2^1, 2^2, ..., 2^k times, where k = log2 n.
Total time = 2^0 + 2^1 + 2^2 + ... + 2^k = 2^(k+1) - 1 = 2n - 1 = O(n).
(Correct order (either big O or big Theta) + any valid explanation.)
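The geometric-sum argument can be checked numerically (a hypothetical driver, n taken as a power of two; each call func(i) is charged cost i):

#include <stdio.h>

/* Total work: func(i) costs i, and it is called for i = 1, 2, 4, ..., <= n. */
long long total_ops(long long n)
{
    long long ops = 0;
    for (long long i = 1; i <= n; i *= 2)
        ops += i;                 /* cost of the loop inside func(i) */
    return ops;
}

int main(void)
{
    for (long long n = 16; n <= (1LL << 24); n <<= 4)
        printf("n=%10lld  total=%10lld  2n=%10lld\n", n, total_ops(n), 2 * n);
    return 0;
}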
13 a) Using iteration, solve the following recurrence equation: T(n) = 2 if n = 1, else T(n) = 2T(n/2) + 2n + 3
Θ(n log n)
Correct derivation using iteration method (5 Marks)
Answer:
T(n) = 2T(n/2) + 2n + 3
     = 2[2T(n/4) + n + 3] + 2n + 3 = 2^2 T(n/4) + 4n + 9
     = 2^2 [2T(n/8) + n/2 + 3] + 4n + 9 = 2^3 T(n/8) + 6n + 21
     ...
After k substitutions:
     = 2^k T(n/2^k) + 2kn + 3(2^k - 1)   --- eq. (1)
The sub-problem size reaches 1 when n/2^k = 1, i.e. 2^k = n, so k = log2 n.
Then eq. (1) becomes:
T(n) = nT(1) + 2n log2 n + 3(n - 1)
     = 2n + 2n log2 n + 3n - 3
     = 2n log2 n + 5n - 3
     = Θ(n log n).
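The closed form 2n*log2(n) + 5n - 3 obtained above can be verified against a direct evaluation of the recurrence (a hypothetical driver, n a power of two):

#include <stdio.h>

/* T(n) = 2T(n/2) + 2n + 3, T(1) = 2, for n a power of two. */
long long T(long long n)
{
    if (n == 1) return 2;
    return 2 * T(n / 2) + 2 * n + 3;
}

int main(void)
{
    /* Compare with the closed form 2n*log2(n) + 5n - 3. */
    for (long long n = 1, k = 0; n <= (1LL << 20); n <<= 1, k++)
        printf("n=2^%-2lld  T(n)=%12lld  2n*k+5n-3=%12lld\n",
               k, T(n), 2 * n * k + 5 * n - 3);
    return 0;
}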
b) Using Recursion Tree method, solve. Assume constant time for small values of n.
T(n) = T(n/10) + T(9n/10) + n
Answer:
Θ(n log n)
Correct derivation using recursion tree method (4 Marks)
14 Using Recursion Tree method, solve T(n) = T(n/10) + T(9n/10) + n (3)
log_(10/9)(n) = (1/ln(10/9)) * ln(n) ≈ (1/0.10536) * ln(n) ≈ 9.49 * ln(n)
The cost at the top level is one partition of the whole input; at the next level down there are 2
partitions, then 4, then 8, and so forth. In spite of the increasing number of partitions as we go
down the levels, the cost per level stays about the same, since all the partitions at a given level
together work on at most n elements, so each level costs at most cn. At the depth given by log_10(n),
the leftmost downward path reaches a leaf node and terminates. Other paths give out as the depth
increases, and this is shown in the recursion tree with "<= cn" as the cost per level, since parts of
the input are finished and no longer worked on. The deepest path is at the far right, with depth
log_(10/9)(n). Since there are Θ(log n) levels and each costs at most cn, the algorithm is Θ(n log n).
9 Explain Asymptotic notations in algorithm analysis
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms
that doesn't depend on machine-specific constants and doesn't require the algorithms to be
implemented and their running times compared. Asymptotic notations are mathematical tools to
represent the time complexity of algorithms for asymptotic analysis.
The following 3 asymptotic notations are most often used to represent the time complexity of
algorithms.
1) Θ Notation: The theta notation bounds a function from above and below, so it defines
exact asymptotic behaviour.
A simple way to get the Theta notation of an expression is to drop the low-order terms and
ignore the leading constants. For example, consider the following expression:
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping the lower-order terms is always fine because there will always be an n0 after which
the n^3 term has higher values than the lower-order terms, irrespective of the constants involved.
For a given function g(n), Θ(g(n)) denotes the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such
that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0}
The above definition means that if f(n) is theta of g(n), then the value of f(n) is always between
c1*g(n) and c2*g(n) for large values of n (n ≥ n0). The definition of theta also requires
that f(n) must be non-negative for values of n greater than n0.
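As an illustration of the definition (the constants c1 = 3, c2 = 4 and n0 = 21 are chosen here for this example, not taken from the text), such constants can be exhibited numerically:

#include <stdio.h>

/* Check c1*n^3 <= 3n^3 + 6n^2 + 6000 <= c2*n^3 for c1 = 3, c2 = 4, n >= 21.
   (With c2 = 4 we need n^3 >= 6n^2 + 6000, which first holds at n = 21.) */
int main(void)
{
    double c1 = 3.0, c2 = 4.0;
    for (int n = 21; n <= 100000; n *= 10) {
        double f = 3.0 * n * n * n + 6.0 * n * n + 6000.0;
        double lo = c1 * n * (double)n * n, hi = c2 * n * (double)n * n;
        printf("n=%7d  %s\n", n, (lo <= f && f <= hi) ? "bounds hold" : "bounds FAIL");
    }
    return 0;
}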
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it
bounds a function only from above. For example, consider the case of Insertion Sort: it
takes linear time in the best case and quadratic time in the worst case. We can safely say that the
time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent time complexity of Insertion sort, we have to use two
statements for best and worst cases.
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time complexity of an
algorithm, and many times we can find such an upper bound simply by looking at the algorithm.
O(g(n)) = {f(n): there exist positive constants c and n0 such
that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0}
3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a
function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an
algorithm. Since the best case performance of an algorithm is generally not very useful,
the Omega notation is the least used of the three.
For a given function g(n), Ω(g(n)) denotes the following set of functions:
Ω(g(n)) = {f(n): there exist positive constants c and n0 such
that 0 <= c*g(n) <= f(n) for all n >= n0}
11 Solve using Master's theorem:
i) T(n) = 2T(n/4) + √n
ii) T(n) = 7T(n/2) + n^2
Answer:
i) Θ(√n lg n)
Comparing with the Master theorem, a = 2, b = 4 and f(n) = n^(1/2).
n^(log_b a) = n^(log_4 2) = n^(1/2), i.e. f(n) is polynomially the same as n^(log_b a); therefore it is the second case, and
T(n) = Θ(n^(1/2) lg n) (Case 2).
ii) Θ(n^(lg 7))
Comparing with the Master theorem, a = 7, b = 2 and f(n) = n^2.
n^(log_b a) = n^(log_2 7) ≈ n^2.81, i.e. f(n) is polynomially smaller than n^(log_b a); therefore it is the first case, and the answer is
T(n) = Θ(n^(log_2 7)) ≈ Θ(n^2.81) (Case 1).