Chap1 Introduction
Contents
1.1. Introductory Example
1.2. Algorithm and Complexity
1.3. Asymptotic notation
1.4. Running time calculation

Example: The maximum subarray problem
• Given an array of n numbers: a1, a2, …, an
• A contiguous subarray ai, ai+1, …, aj with 1 ≤ i ≤ j ≤ n is a subarray of the given array, and the sum ∑_{k=i}^{j} ak is called the value of this subarray.
• The task is to find the maximum value over all possible subarrays, in other words, to find the maximum of ∑_{k=i}^{j} ak. The subarray with the maximum value is called the maximum subarray.
Example: Given the array -2, 11, -4, 13, -5, 2, the maximum subarray is 11, -4, 13 with value 11 + (-4) + 13 = 20.
1. Introductory example: the max subarray problem
1.1.1. Brute force algorithm to solve the max subarray problem
• The first simple algorithm one could think of is: browse all possible subarrays
  ai, ai+1, …, aj with 1 ≤ i ≤ j ≤ n,
then calculate the value of each subarray in order to find the maximum value.
• The number of all possible subarrays: C(n, 1) + C(n, 2) = n²/2 + n/2

Example with a[] = {-2, 11, -4, 13, -5, 2}:

Index i:  0   1   2   3   4   5
a[i]:    -2  11  -4  13  -5   2

i = 0: (-2), (-2, 11), (-2, 11, -4), (-2, 11, -4, 13), (-2, 11, -4, 13, -5), (-2, 11, -4, 13, -5, 2)
i = 1: (11), (11, -4), (11, -4, 13), (11, -4, 13, -5), (11, -4, 13, -5, 2)
i = 2: (-4), (-4, 13), (-4, 13, -5), (-4, 13, -5, 2)
i = 3: (13), (13, -5), (13, -5, 2)
i = 4: (-5), (-5, 2)
i = 5: (2)

int maxSum = a[0];
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        int sum = 0;
        for (int k = i; k <= j; k++)
            sum += a[k];
        if (sum > maxSum)
            maxSum = sum;
    }
}

NGUYỄN KHÁNH PHƯƠNG
SOICT – HUST
1. Introductory example: the max subarray problem
1.1.1. Brute force
1.1.2. Brute force with a better implementation
1.1.3. Dynamic programming

Brute force algorithm: analyzing time complexity
• We count the number of additions the algorithm needs to perform, i.e. how many times the statement
    sum += a[k];
  is executed.
• The number of additions:

  ∑_{i=0}^{n-1} ∑_{j=i}^{n-1} (j − i + 1)
      = ∑_{i=0}^{n-1} (1 + 2 + … + (n − i))
      = ∑_{i=0}^{n-1} (n − i)(n − i + 1)/2
      = (1/2) ∑_{k=1}^{n} k(k + 1)
      = (1/2) [ ∑_{k=1}^{n} k² + ∑_{k=1}^{n} k ]
      = (1/2) [ n(n + 1)(2n + 1)/6 + n(n + 1)/2 ]
      = n³/6 + n²/2 + n/3
1.1.2. A better implementation
Brute force algorithm: browse all possible subarrays
• Key observation: for a fixed starting index i, the value of the subarray (ai, …, aj) can be obtained from the value of (ai, …, aj-1) with a single addition:
  sum(ai, …, aj) = sum(ai, …, aj-1) + aj
• Example with a[] = {-2, 11, -4, 13, -5, 2} and i = 0:
  (-2), (-2, 11), (-2, 11, -4), (-2, 11, -4, 13), (-2, 11, -4, 13, -5), (-2, 11, -4, 13, -5, 2)
  -2 + 11 = 9, then 9 + (-4) = 5, then 5 + 13 = 18, then 18 + (-5) = 13, then 13 + 2 = 15
• Thus each starting index i requires only about n − i additions, so the total number of additions is about n²/2 instead of n³/6.
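The running-sum idea above can be sketched in C++ (a minimal sketch with my own function name; the slides show only the O(n³) version of this code):

```cpp
#include <vector>
#include <algorithm>

// Improved brute force: for each start index i, extend the subarray one
// element at a time, updating the running sum with a single addition.
// Total additions: about n^2/2, i.e. O(n^2) instead of O(n^3).
int maxSubarrayBrute2(const std::vector<int>& a) {
    int maxSum = a[0];
    int n = (int)a.size();
    for (int i = 0; i < n; i++) {
        int sum = 0;
        for (int j = i; j < n; j++) {
            sum += a[j];                     // reuse the previous sum
            maxSum = std::max(maxSum, sum);
        }
    }
    return maxSum;
}
```

On the slide's example {-2, 11, -4, 13, -5, 2} this returns 20, the value of the subarray (11, -4, 13).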
Max subarray problem: compare the time complexity between algorithms
For the same problem (max subarray), we have proposed 2 algorithms that require different numbers of addition operations, and therefore they require different computation times.
The following table shows the computation time of these 2 algorithms under the assumption that the computer can do 10⁸ addition operations per second:

Complexity | n = 10 | Time (sec) | n = 100 | Time (sec) | n = 10⁴  | Time      | n = 10⁶  | Time
n³         | 10³    | 10⁻⁵       | 10⁶     | 10⁻² sec   | 10¹²     | 2.7 hours | 10¹⁸     | ≈ 317 years
n²         | 100    | 10⁻⁶       | 10⁴     | 10⁻⁴ sec   | 10⁸      | 1 sec     | 10¹²     | 2.7 hours

• With small n, the calculation time is negligible.
• The problem becomes serious when n ≥ 10⁶. At that point, only the third algorithm (dynamic programming, presented next) is applicable in real time.
• Can we do better? Yes! It is possible to design an algorithm that requires only n additions!
1. Introductory example: the max subarray problem
1.1.1. Brute force
1.1.2. Brute force with a better implementation
1.1.3. Dynamic programming

1.1.3. Dynamic programming to solve the max subarray problem
The primary steps of dynamic programming:
1. Divide: partition the given problem into subproblems.
   (A subproblem has the same structure as the given problem but a smaller size.)
2. Note the solutions: store the solutions of the subproblems in a table.
3. Construct the final solution: from the solutions of the smaller problems, construct the solutions of the larger problems, until we get the solution of the given problem (the subproblem with the largest size).
1.1.3. Dynamic programming to solve the max subarray problem
1. Divide:
• Define si as the value of the max subarray of the array a0, a1, ..., ai, for i = 0, 1, 2, ..., n-1.
• Clearly, sn-1 is the solution.
3. Construct the final solution:
• s0 = a0;  s1 = max{a0, a1, a0 + a1}
• Assume we already know the values s0, s1, s2, …, si-1, i ≥ 1. Now we need to calculate si, the value of the max subarray of the array a0, a1, ..., ai-1, ai.
• Observe that the max subarray of a0, a1, ..., ai-1, ai either includes the element ai or does not; therefore it must be one of these 2 subarrays:
  – the max subarray of a0, a1, ..., ai-1, whose value is si-1;
  – the max subarray of a0, a1, ..., ai ending at ai, whose value we denote ei.
Thus, si = max{si-1, ei}, i = 1, 2, …, n-1,
where ei is the value of the max subarray of a0, a1, ..., ai ending at ai.
To calculate ei, we can use the recurrence:
  – e0 = a0;
  – ei = max{ai, ei-1 + ai}, i = 1, 2, ..., n-1.

MaxSub(a)
{
    smax = a[0];   // smax: the value of the max subarray
    ei = a[0];     // ei: the value of the max subarray ending at a[i]
    imax = 0;      // imax: the index of the last element of the max subarray
    for i = 1 to n-1 {
        u = ei + a[i];
        v = a[i];
        if (u > v) ei = u;
        else       ei = v;
        if (ei > smax) {
            smax = ei;
            imax = i;
        }
    }
}

Analyzing time complexity:
the number of addition operations performed by the algorithm
= the number of times the statement u = ei + a[i]; is executed
= n − 1 (one per loop iteration), i.e. about n additions.
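The MaxSub pseudocode above translates directly to C++ (a minimal sketch; the function name and the returned value are my additions, not from the slides):

```cpp
#include <vector>
#include <algorithm>

// Dynamic programming (Kadane's algorithm): ei is the value of the max
// subarray ending at the current index; smax = max(smax, ei) accumulates
// the answer. Only one addition (ei + a[i]) per iteration, about n total.
int maxSubarrayDP(const std::vector<int>& a) {
    int smax = a[0];   // value of the max subarray seen so far
    int ei = a[0];     // value of the max subarray ending at index i
    for (size_t i = 1; i < a.size(); i++) {
        ei = std::max(a[i], ei + a[i]);   // ei = max{ai, e(i-1) + ai}
        smax = std::max(smax, ei);        // si = max{s(i-1), ei}
    }
    return smax;
}
```

On the slide's example {-2, 11, -4, 13, -5, 2} this returns 20 using only n − 1 additions.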
Algorithm
• The word algorithm comes from the name of the Persian mathematician Abu Ja'far Mohammed ibn Musa al-Khwarizmi.
• In computer science, this word refers to a method consisting of a sequence of unambiguous instructions, usable by a computer, for the solution of a problem.
• Informal view of an algorithm in a computer:
  Input → Algorithm → Output
• All algorithms must satisfy the following criteria:
(1) Input. The algorithm receives data from a certain set.
(2) Output. For each set of input data, the algorithm gives the solution to the problem.
(3) Precision. Each instruction is clear and unambiguous.
(4) Finiteness. If we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite (possibly very large) number of steps.
(5) Uniqueness. The intermediate results of each step of the algorithm are uniquely determined and depend only on the input and the results of the previous steps.
(6) Generality. The algorithm can be applied to solve any problem of a given form.
• Example: The problem of finding the largest integer among a number of positive integers
  – Input: an array of n positive integers a1, a2, …, an
  – Output: the largest one
  – Example: Input: 12 8 13 9 11 → Output: 13
  – Question: Design an algorithm to solve this problem.
Kinds of analyses
Best-case:
• T(n) = minimum time of the algorithm on any input of size n.
• A slow algorithm can "cheat" here by working fast on some particular input.
Average-case:
• Very useful, but often difficult to determine.
Worst-case:
• T(n) = maximum time of the algorithm on any input of size n.
• Easier to analyze.

To evaluate the running time there are 2 ways:
• Experimental evaluation of running time
• Theoretical analysis of running time

Experimental Evaluation of Running Time
• Write a program implementing the algorithm.
• Run the program with inputs of varying size and composition.
• Use a method like clock() to get an accurate measure of the actual running time.
• Plot the results.
[Figure: measured running time (ms) plotted against input size]
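A minimal C++ sketch of such an experiment (the measured workload and the function name are illustrative assumptions, not from the slides; the slides mention clock(), while this sketch uses std::chrono):

```cpp
#include <chrono>
#include <vector>

// Time one run of a simple O(n^2) workload for input size n,
// returning the elapsed wall-clock time in milliseconds.
double timeRunMs(int n) {
    std::vector<long long> a(n, 1);
    auto start = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += a[i] * a[j];              // the workload being measured
    auto end = std::chrono::steady_clock::now();
    if (sum < 0) return -1.0;                // keep 'sum' observable
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Calling timeRunMs for n = 1000, 2000, 4000, … and plotting the (n, time) pairs reproduces the kind of plot sketched on the slide.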
Limitations of Experiments
• Experimental evaluation of running time is very useful, but:
  – It is necessary to implement the algorithm, which may be difficult.
  – Results may not be indicative of the running time on other inputs not included in the experiment.
  – In order to compare two algorithms, the same hardware and software environments must be used.
We need: Theoretical Analysis of Running Time.

Theoretical Analysis of Running Time
• Uses a pseudo-code description of the algorithm instead of an implementation.
• Characterizes running time as a function of the input size, n.
• Takes into account all possible inputs.
• Allows us to evaluate the speed of an algorithm independently of the hardware/software environment (changing the hardware/software environment affects the running time by a constant factor, but does not alter the growth rate of the running time).
Contents
1.1. Introductory Example
1.2. Algorithm and Complexity
1.3. Asymptotic notation
1.4. Running time calculation

1.3. Asymptotic notation
Θ, Ω, O, o, ω
» What these symbols do is:
• give us a notation for talking about how fast a function goes to infinity, which is just what we want to know when we study the running times of algorithms;
• they are defined for functions over the natural numbers;
• they are used to compare the order of growth of 2 functions.
Example: f(n) = Θ(n²) describes how f(n) grows in comparison to n².
» Instead of working out a complicated formula for the exact running time, we can just say that the running time is, for example, Θ(n²) [read as "theta of n²"]: that is, the running time is proportional to n², plus lower-order terms. For most purposes, that's just what we want to know.
• For a given function g(n), we denote by Θ(g(n)) the set of functions
  Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 s.t. 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
• Intuitively: the set of all functions that have the same rate of growth as g(n).
• A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1·g(n) and c2·g(n) for sufficiently large n.
• f(n) = Θ(g(n)) means that there exist constants c1 and c2 s.t. c1·g(n) ≤ f(n) ≤ c2·g(n) for large enough n.
• When we say that one function is theta of another, we mean that neither function goes to infinity faster than the other.
• Note: for polynomial functions, to compare the growth rate it is enough to look at the term of the highest degree.

Example 1: Show that 10n² − 3n = Θ(n²)
• For which values of the constants n0, c1, c2 does the inequality in the definition of the theta notation hold:
  c1·n² ≤ f(n) = 10n² − 3n ≤ c2·n²  ∀n ≥ n0 ?
• Suggestion: make c1 a little smaller than the leading (highest-degree) coefficient, and c2 a little bigger.
Select c1 = 1, c2 = 11, n0 = 1; then we have
  n² ≤ 10n² − 3n ≤ 11n² for all n ≥ 1,
so 10n² − 3n = Θ(n²).
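The sandwich inequality from Example 1 can also be checked numerically over a large range of n (a small sanity-check program of my own, not from the slides):

```cpp
// Check c1*n^2 <= 10n^2 - 3n <= c2*n^2 with c1 = 1, c2 = 11, n0 = 1,
// for every n from 1 up to nMax.
bool thetaBoundsHold(long long nMax) {
    for (long long n = 1; n <= nMax; n++) {
        long long f = 10 * n * n - 3 * n;
        if (!(n * n <= f && f <= 11 * n * n))
            return false;   // inequality violated for this n
    }
    return true;
}
```

Such a check cannot prove the asymptotic statement (that requires the algebra above), but it is a quick way to catch a wrong choice of constants.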
Recall: f(n) = Θ(g(n)) ⟺ ∃ c1, c2, n0 > 0 : ∀n ≥ n0, c1·g(n) ≤ f(n) ≤ c2·g(n)

Example 2: Show that f(n) = n² − 3n = Θ(n²)
We must find n0, c1 and c2 such that
  c1·n² ≤ f(n) = n² − 3n ≤ c2·n²  ∀n ≥ n0

Example 3: Show that f(n) = 23n³ − 10n²·log2(n) + 7n + 6 = Θ(n³)
We must find n0, c1 and c2 such that
  c1·n³ ≤ f(n) = 23n³ − 10n²·log2(n) + 7n + 6 ≤ c2·n³  ∀n ≥ n0
Big-Oh Examples
O(g(n)) = { f(n) : there exist positive constants c and n0, such that ∀n ≥ n0 we have 0 ≤ f(n) ≤ c·g(n) }
• Example 3: Show that 3n³ + 20n² + 5 is O(n³)
  We need constants c and n0 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0.
  ……
• Example 5: The function n² is not O(n): n² ≤ c·n would require n ≤ c for all n ≥ n0, which is impossible for a constant c.
  [Figure: log-log plot of n², 100n, 10n and n, showing that n² eventually exceeds every c·n]

Note
• The values of the positive constants n0 and c are not unique when proving the asymptotic formulas.
Big-Oh and Growth Rate
• The big-Oh notation gives an upper bound on the growth rate of a function.
• The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
• We can use the big-Oh notation to rank functions according to their growth rate:

                  | f(n) is O(g(n)) | g(n) is O(f(n))
g(n) grows more   | Yes             | No
f(n) grows more   | No              | Yes
Same growth       | Yes             | Yes

Inappropriate Expressions
• Expressions such as "f(n) ≥ O(g(n))" and "f(n) > O(g(n))" are inappropriate: big-Oh already denotes an upper bound, so only "f(n) is (=) O(g(n))" makes sense.
O Notation Examples
• All these expressions are O(n):
  – n, 3n, 61n + 5, 22n − 5, …
• All these expressions are O(n²):
  – n², 9n², 18n² + 4n − 53, …
• All these expressions are O(n log n):
  – n(log n), 5n(log 99n), 18 + (4n − 2)(log (5n + 3)), …

Properties
• If f(n) is O(g(n)) then a·f(n) is O(g(n)) for any constant a.
• If f(n) is O(g1(n)) and h(n) is O(g2(n)) then
  • f(n) + h(n) is O(g1(n) + g2(n))
  • f(n)·h(n) is O(g1(n)·g2(n))
• If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) is O(h(n)).
• If p(n) is a polynomial in n then log p(n) is O(log n).
• If p(n) is a polynomial of degree d, then p(n) is O(n^d).
• n^x = O(a^n), for any fixed x > 0 and a > 1.
  – An algorithm of order n to a certain power is better than an algorithm of order a (> 1) to the power of n.
• log(n^x) is O(log n), for x > 0.
• log^x(n) is O(n^y) for x > 0 and y > 0.
  – An algorithm of order log n (to a certain power) is better than an algorithm of order n raised to a power y.
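As a sketch of why the degree-d polynomial property above holds (a standard argument, not spelled out on the slides):

```latex
Let $p(n) = a_d n^d + a_{d-1} n^{d-1} + \dots + a_1 n + a_0$ with $a_d > 0$.
For all $n \ge 1$ we have $n^k \le n^d$ for every $k \le d$, hence
\[
  p(n) \;\le\; \sum_{k=0}^{d} |a_k|\, n^k \;\le\; \Bigl(\sum_{k=0}^{d} |a_k|\Bigr) n^d .
\]
Taking $c = \sum_{k=0}^{d} |a_k|$ and $n_0 = 1$ gives $p(n) \le c\, n^d$
for all $n \ge n_0$, so $p(n) = O(n^d)$.
```

The same style of argument, run in the other direction with the leading term, shows p(n) = Ω(n^d) and hence p(n) = Θ(n^d).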
Asymptotic notation in equations
Another way we use asymptotic notation is to simplify calculations:
• Use asymptotic notation in equations to replace expressions containing lower-order terms.
Example:
  4n³ + 3n² + 2n + 1 = 4n³ + 3n² + Θ(n)
                     = 4n³ + Θ(n²) = Θ(n³)
How to interpret this?
In equations, Θ(f(n)) always stands for an anonymous function g(n) ∈ Θ(f(n)).
  – In this example, Θ(n²) stands for 3n² + 2n + 1.

[Figure: graphic examples of Θ, O, and Ω]

Theorem: For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Example 1: Show that f(n) = 5n² = Θ(n²)
Because:
• 5n² = O(n²): f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n0; let c = 5 and n0 = 1.
• 5n² = Ω(n²): f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0; let c = 5 and n0 = 1.
Therefore: f(n) = Θ(n²).

Example 2: Show that f(n) = 3n² − 2n + 5 = Θ(n²)
Because:
• 3n² − 2n + 5 = O(n²): pick c = ? and n0 = ?
• 3n² − 2n + 5 = Ω(n²): pick c = ? and n0 = ?
Therefore: f(n) = Θ(n²).

f(n) = Θ(g(n)) ⟺ f(n) = O(g(n)) and f(n) = Ω(g(n))
Exercise 1: Show that 100n + 5 ≠ Ω(n²)
Exercise 2: Show that n ≠ Θ(n²)
Exercise 3: Show that
a) 6n³ ≠ Θ(n²)
   Ans: by contradiction. Assume 6n³ = Θ(n²).
b) n ≠ Θ(log2 n)
   Ans: by contradiction. Assume n = Θ(log2 n).
Recall: Ω(f(n)) = { g(n) : there exist positive constants c and n0 s.t. 0 ≤ c·f(n) ≤ g(n) for all n ≥ n0 }

The way to talk about the running time
• When people say "the running time of this algorithm is O(f(n))", it means that the worst-case running time is O(f(n)) (that is, no worse than c·f(n) for large n, since big-Oh notation gives an upper bound).
  • It means the worst-case running time can be bounded by some function g(n) ∈ O(f(n)).
• When people say "the running time of this algorithm is Ω(f(n))", it means that the best-case running time is Ω(f(n)) (that is, no better than c·f(n) for large n, since big-Omega notation gives a lower bound).
  • It means the best-case running time can be bounded by some function g(n) ∈ Ω(f(n)).
o – Little-oh notation
• For a given function g(n), we denote by o(g(n)) the set of functions
  o(g(n)) = { f(n) : for any positive constant c there exists n0 > 0 s.t. 0 ≤ f(n) < c·g(n) for all n ≥ n0 }
• f(n) becomes insignificant relative to g(n) as n approaches infinity:
  lim_{n→∞} f(n)/g(n) = 0
• g(n) is an upper bound for f(n) that is not asymptotically tight.

ω – Little-omega notation
• For a given function g(n), we denote by ω(g(n)) the set of functions
  ω(g(n)) = { f(n) : for any positive constant c there exists n0 > 0 s.t. 0 ≤ c·g(n) < f(n) for all n ≥ n0 }
• f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
  lim_{n→∞} f(n)/g(n) = ∞
• g(n) is a lower bound for f(n) that is not asymptotically tight.
Basic functions growth rates
Which are more alike?

n | log n | n | n log n | n² | n³  | 2ⁿ
4 |   2   | 4 |    8    | 16 |  64 | 16
8 |   3   | 8 |   24    | 64 | 512 | 256
The analogy between comparing functions and comparing numbers
One thing you may have noticed by now is that these relations are kind of like the "<, >" relations for numbers:

  f(n) = Θ(g(n))  ≈  a = b
  f(n) = O(g(n))  ≈  a ≤ b
  f(n) = Ω(g(n))  ≈  a ≥ b
  f(n) = o(g(n))  ≈  a < b
  f(n) = ω(g(n))  ≈  a > b

"Relatives" of notations
• "Relatives" of the Big-Oh:
  – Ω(g(n)): Big Omega – asymptotic lower bound
  – Θ(g(n)): Big Theta – asymptotic tight bound
• Big-Omega – think of it as the inverse of big-Oh:
  – f(n) is Ω(g(n)) if g(n) is O(f(n))
• Big-Theta – combines both Big-Oh and Big-Omega:
  – f(n) is Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n))
• Note the difference:
  – 3n + 3 is O(n) and is Θ(n)
  – 3n + 3 is O(n²) but is not Θ(n²)
• Little-oh: f(n) is o(g(n)) if f(n) is O(g(n)) and f(n) is not Θ(g(n))
  – 2n + 3 is o(n²)
  – Is 2n + 3 o(n)?
Exercise
• Order the following functions by their asymptotic growth rates:
  1. n·log2(n)
  2. log2(n³)
  3. n²
  4. n²/5
  5. 2^(log2 n)
  6. log2(log2 n)
  7. sqrt(log2 n)

Properties
• Transitivity:
  f(n) = Θ(g(n)) & g(n) = Θ(h(n)) ⟹ f(n) = Θ(h(n))
  f(n) = O(g(n)) & g(n) = O(h(n)) ⟹ f(n) = O(h(n))
  f(n) = Ω(g(n)) & g(n) = Ω(h(n)) ⟹ f(n) = Ω(h(n))
• Reflexivity:
  f(n) = Θ(f(n)),  f(n) = O(f(n)),  f(n) = Ω(f(n))
• Symmetry:
  f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
• Transpose symmetry:
  f(n) = O(g(n)) if and only if g(n) = Ω(f(n))

Limits
• lim_{n→∞} f(n)/g(n) = 0       ⟹ f(n) ∈ o(g(n))
• lim_{n→∞} f(n)/g(n) < ∞       ⟹ f(n) ∈ O(g(n))
• 0 < lim_{n→∞} f(n)/g(n) < ∞   ⟹ f(n) ∈ Θ(g(n))

Exercise
Show that
1) 3n² − 100n + 6 = O(n²)
2) 3n² − 100n + 6 = O(n³)
3) 3n² − 100n + 6 ≠ O(n)
Final notes
• Even though in this course we focus on asymptotic growth using big-Oh notation, practitioners do care about constant factors occasionally.
• Suppose we have 2 algorithms:
  • Algorithm A has running time 30000n
  • Algorithm B has running time 3n²
• Asymptotically A is better than B, yet B is faster for all n < 10000, since 30000n < 3n² only when n > 10000.
[Figure: running times of A (30000n) and B (3n²) plotted against n, crossing at n = 10000]

Contents
1.1. Introductory Example
1.2. Algorithm and Complexity
1.3. Asymptotic notation
1.4. Running time calculation
Running Time Calculations: General rules
1. Consecutive statements: the sum of the running times of each segment.
• The running time of "P; Q", where P is executed first, then Q, is
  Time(P; Q) = Time(P) + Time(Q),
or, using asymptotic Theta:
  Time(P; Q) = Θ(max(Time(P), Time(Q))).
2. FOR loop: the number of iterations times the time of the statements inside.
  for i = 1 to m do P(i);
Assume the running time of P(i) is t(i); then the running time of the for loop is ∑_{i=1}^{m} t(i).
3. Nested loops: the product of the numbers of iterations times the time of the statements inside.
  for i = 1 to n do
      for j = 1 to m do P(j);
Assume the running time of P(j) is t(j); then the running time of this nested loop is ∑_{i=1}^{n} ∑_{j=1}^{m} t(j) = n·∑_{j=1}^{m} t(j).

Some Examples
Case 1:
  for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
          k++;
→ O(n²)
Case 2:
  for (i = 0; i < n; i++)
      k++;
  for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
          k++;
→ O(n) work followed by O(n²) work is also O(n²)
Case 3:
  for (int i = 0; i < n-1; i++)
      for (int j = 0; j < i; j++)
          k += 1;
→ O(n²) (the inner loop body runs 0 + 1 + … + (n − 2) = (n − 1)(n − 2)/2 times)
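The Case 3 count can be verified directly (a small check program of my own, not from the slides):

```cpp
// Count how many times the innermost statement of Case 3 executes,
// to compare against the closed form (n-1)(n-2)/2.
long long case3Count(int n) {
    long long count = 0;
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < i; j++)
            count++;                 // stands in for "k += 1;"
    return count;
}
```

For n = 10 this gives 36 = 9·8/2, matching the formula, which confirms the quadratic bound.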
Example: Calculating Fibonacci Sequences
• Fibonacci sequence:
  – f0 = 0;
  – f1 = 1;
  – fn = fn-1 + fn-2

function Fibrec(n)
    if n < 2 then return n;
    else return Fibrec(n-1) + Fibrec(n-2);

function Fibiter(n)
    i = 1;
    j = 0;
    for k = 1 to n do
        j = i + j;     // characteristic statement
        i = j - i;
    return j;

• The number of times the characteristic statement j = i + j is executed is n, so the running time of Fibiter is O(n).
• Fibrec, in contrast, grows exponentially:

  n      | 10   | 20    | 30    | 50      | 100
  Fibrec | 8 ms | 1 sec | 2 min | 21 days | 10⁹ years

Exercise 1: Maximum Subarray Problem
Given an array of integers A1, A2, …, AN, find the maximum value of ∑_{k=i}^{j} Ak.
For convenience, the maximum subsequence sum is zero if all the integers are negative.
Analyze the running times by counting a characteristic statement:
• For the first brute-force algorithm, select the statement sum += a[k]; as the characteristic statement; it is executed O(n³) times, so the running time of the algorithm is O(n³).
• For the improved algorithm, the characteristic statement sum += a[j]; (together with the update if (sum > maxSum) maxSum = sum;) is executed O(n²) times, so the running time is O(n²).
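The iterative pseudocode above can be written as runnable C++ (a minimal sketch with my own function name; the i = 1, j = 0 initialization makes the function return fn):

```cpp
// Iterative Fibonacci: one addition per loop iteration, O(n) time,
// in contrast with the exponential-time recursive Fibrec.
// Loop invariant: after k iterations, j = f_k and i = f_{k-1}.
long long fibIter(int n) {
    long long i = 1, j = 0;
    for (int k = 1; k <= n; k++) {
        j = i + j;      // characteristic statement: f_k = f_{k-1} + f_{k-2}
        i = j - i;      // recover the previous term without a temporary
    }
    return j;           // f_n
}
```

For example, fibIter(10) evaluates the recurrence in 10 additions, whereas Fibrec(10) makes well over a hundred recursive calls.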
Exercise 2: Selection sort
• Sort a sequence of numbers in ascending order.
• Algorithm:
  – Find the smallest and move it to the first place
  – Find the next smallest and move it to the second place
  – Find the next smallest and move it to the 3rd place
  – …

void selectionSort(int a[], int n){
    int i, j, index_min;
    for (i = 0; i < n-1; i++) {
        index_min = i;
        // Find the smallest element among a[i], a[i+1], ..., a[n-1]
        for (j = i+1; j < n; j++)
            if (a[j] < a[index_min]) index_min = j;
        // Move the element a[index_min] to the i-th place:
        swap(a[i], a[index_min]);
    }
}

void swap(int &a, int &b)
{
    int temp = a;
    a = b;
    b = temp;
}
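A quick way to exercise the routine above (a self-contained restatement using std::swap instead of the hand-written swap, so the sketch compiles on its own):

```cpp
#include <algorithm>  // std::swap

// Selection sort as on the slide, restated so the example is self-contained.
void selectionSortDemo(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int index_min = i;
        for (int j = i + 1; j < n; j++)       // find the min of a[i..n-1]
            if (a[j] < a[index_min]) index_min = j;
        std::swap(a[i], a[index_min]);        // place it at position i
    }
}
```

Note the running time: the inner loop performs (n − 1) + (n − 2) + … + 1 = n(n − 1)/2 comparisons regardless of the input, so selection sort is Θ(n²) in the best, average, and worst cases.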
Exercise 3
• Give asymptotic big-Oh notation for the running time T(n) of the following statement segment:
  for (int i = 1; i <= n; i++)
      for (int j = 1; j <= i*i*i; j++)
          for (int k = 1; k <= n; k++)
              x = x + 1;
• Ans: T(n) = ∑_{i=1}^{n} i³·n = n·(n(n+1)/2)² = O(n⁵).

Exercise 4
• Give asymptotic big-Oh notation for the running time T(n) of the following statement segments:
a) int x = 0;
   for (int i = 1; i <= n; i *= 2)
       x = x + 1;
• Ans: i doubles on each iteration, so the loop runs ⌊log2 n⌋ + 1 times: T(n) = O(log n).
b) int x = 0;
   for (int i = n; i > 0; i /= 2)
       x = x + 1;
• Ans: i halves on each iteration, so again T(n) = O(log n).
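The logarithmic iteration count claimed in Exercise 4(a) can be checked empirically (a small helper of my own, not from the slides):

```cpp
// Count the iterations of "for (i = 1; i <= n; i *= 2)".
// The loop runs floor(log2(n)) + 1 times for n >= 1.
int doublingLoopCount(int n) {
    int count = 0;
    for (int i = 1; i <= n; i *= 2)
        count++;
    return count;
}
```

For n = 16 the loop visits i = 1, 2, 4, 8, 16, i.e. 5 = log2(16) + 1 iterations, which is why the whole segment is O(log n).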
Exercise 5
Give asymptotic big-Oh notation for the running time T(n) of the following statement segment:

int n;
if (n < 1000)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                cout << "Hello\n";
else
    for (int j = 0; j < n; j++)
        for (int k = 0; k < n; k++)
            cout << "world!\n";

Ans:
• When n < 1000, the triple loop runs at most 1000³ times, so T(n) is bounded by a constant; otherwise the double loop gives T(n) = O(n²). Overall, T(n) = O(n²).