
Lecture 2:

Asymptotic Notation
Steven Skiena

Department of Computer Science


State University of New York
Stony Brook, NY 11794–4400

http://www.cs.stonybrook.edu/~skiena
Topic: Problem of the Day
Problem of the Day
The knapsack problem is as follows: given a set of integers
S = {s1, s2, ..., sn} and a target number T, find a subset of S
which adds up exactly to T. For example, within
S = {1, 2, 5, 9, 10} there is a subset which adds up to T = 22
but not T = 23.
Find counterexamples to each of the following algorithms for
the knapsack problem. That is, give an S and T such that the
subset selected by the algorithm does not fill the knapsack
completely, even though such a solution exists.
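As a sanity check (a brute-force sketch of our own, not part of the
lecture), the claims about T = 22 and T = 23 can be verified by trying
every subset:

    from itertools import combinations

    def has_subset_sum(S, T):
        """Exhaustively test all 2^n subsets of S for one summing exactly to T."""
        return any(sum(c) == T
                   for r in range(len(S) + 1)
                   for c in combinations(S, r))

    S = [1, 2, 5, 9, 10]
    print(has_subset_sum(S, 22))  # True  (e.g., 1 + 2 + 9 + 10 = 22)
    print(has_subset_sum(S, 23))  # False (no subset reaches 23)
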
Solution

• Put the elements of S in the knapsack in left-to-right order if they
  fit, i.e. the first-fit algorithm?
• Put the elements of S in the knapsack from smallest to largest, i.e.
  the best-fit algorithm?
• Put the elements of S in the knapsack from largest to smallest?

(Counterexamples to all three are sketched below.)
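All three heuristics are instances of one greedy loop, so a single
sketch (the function name and the specific S, T values are our own
illustration, not from the lecture) exhibits counterexamples:

    def greedy_fill(items, target):
        """Insert each item in the given order if it still fits; return the chosen subset."""
        total, chosen = 0, []
        for x in items:
            if total + x <= target:
                total += x
                chosen.append(x)
        return chosen

    # First-fit (left-to-right) and smallest-to-largest both fail on
    # S = {1, 2}, T = 2: they grab 1 first, and then 2 no longer fits,
    # even though the subset {2} fills the knapsack exactly.
    print(greedy_fill([1, 2], 2))                           # [1] -- not full
    print(greedy_fill(sorted([1, 2]), 2))                   # [1] -- not full

    # Largest-to-smallest fails on S = {2, 3, 4}, T = 5: it grabs 4,
    # then neither 3 nor 2 fits, even though 2 + 3 = 5 exactly.
    print(greedy_fill(sorted([2, 3, 4], reverse=True), 5))  # [4] -- not full
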
Questions?
Topic: Algorithmic Time Complexity
The RAM Model of Computation
Algorithms are an important and durable part of computer
science because they can be studied in a machine/language
independent way.
This is because we use the RAM model of computation for
all our analysis.
• Each “simple” operation (+, -, =, if, call) takes 1 step.
• Loops and subroutine calls are not simple operations.
They depend upon the size of the data and the contents
of a subroutine. “Sort” is not a single step operation.
• Each memory access takes exactly 1 step.
We measure the run time of an algorithm by counting the
number of steps.
This model is useful and accurate in the same sense as the
flat-earth model (which is useful)!
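To make the step-counting concrete, here is a toy sketch of our own,
under one plausible assignment of unit costs (the RAM model leaves some
bookkeeping choices open):

    def sum_array(A):
        """Sum an array, tallying one step per simple operation."""
        steps = 1              # total = 0 is one assignment
        total = 0
        for x in A:
            steps += 1         # loop bookkeeping for this iteration
            total += x
            steps += 1         # one addition/assignment
        return total, steps    # about 2n + 1 steps for n elements

    print(sum_array([3, 1, 4, 1, 5]))   # (14, 11)
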
Worst-Case Complexity
The worst case complexity of an algorithm is the function
defined by the maximum number of steps taken on any
instance of size n.
[Figure: number of steps vs. problem size n, with the worst-case curve
above the average-case and best-case curves.]
Best-Case and Average-Case Complexity
The best case complexity of an algorithm is the function
defined by the minimum number of steps taken on any
instance of size n.
The average-case complexity of the algorithm is the function
defined by the average number of steps taken over all instances
of size n.
Each of these complexities defines a numerical function: time
vs. size!
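A standard illustration (our example, not from the slides) is
sequential search on an array of n elements, where the three
complexities separate cleanly:

    def linear_search(A, key):
        """Return (index, steps), counting one step per comparison."""
        steps = 0
        for i, x in enumerate(A):
            steps += 1
            if x == key:
                return i, steps
        return -1, steps

    A = [7, 3, 9, 1, 5]
    print(linear_search(A, 7))   # best case: key is first, 1 step
    print(linear_search(A, 5))   # worst case: key is last, n = 5 steps
    # Averaged over all positions of the key, a successful search
    # takes (n + 1) / 2 steps -- the average case.
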
Our Position on Complexity Analysis
What would the reasoning be on buying a lottery ticket on the
basis of best, worst, and average-case complexity?
Generally speaking, we will use the worst-case complexity as
our preferred measure of algorithm efficiency.
Worst-case analysis is generally easy to do, and “usually” reflects
the average case. Assume I am asking for worst-case analysis unless
otherwise specified!
Randomized algorithms are of growing importance, and
require an average-case type analysis to show off their merits.
Questions?
Topic: The Big Oh Notation
Exact Analysis is Hard!
Best, worst, and average case are difficult to deal with
because the precise function details are very complicated:
[Figure: a complicated step-count function f(n), bounded above and
below by smooth curves for n ≥ n0.]
It is easier to talk about upper and lower bounds of the function.
Asymptotic notation (O, Θ, Ω) is the best we can practically do in
dealing with complexity functions.
Names of Bounding Functions

• g(n) = O(f(n)) means C · f(n) is an upper bound on g(n).
• g(n) = Ω(f(n)) means C · f(n) is a lower bound on g(n).
• g(n) = Θ(f(n)) means C1 · f(n) is an upper bound on g(n) and
  C2 · f(n) is a lower bound on g(n).

C, C1, and C2 are all constants independent of n.
O, Ω, and Θ

The definitions imply a constant n0 beyond which they are satisfied.
We do not care about small values of n.
Formal Definitions

• f(n) = O(g(n)) if there are positive constants n0 and c such that
  to the right of n0, the value of f(n) always lies on or below c · g(n).
• f(n) = Ω(g(n)) if there are positive constants n0 and c such that
  to the right of n0, the value of f(n) always lies on or above c · g(n).
• f(n) = Θ(g(n)) if there exist positive constants n0, c1, and c2 such
  that to the right of n0, the value of f(n) always lies between
  c1 · g(n) and c2 · g(n) inclusive.
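One can build intuition for these definitions with a finite, empirical
check (a sketch of our own; a real proof must of course cover every
n ≥ n0, not just a tested range):

    def check_big_oh(f, g, c, n0, upto=10**5):
        """Test f(n) <= c * g(n) for all n0 <= n < upto (evidence, not proof)."""
        return all(f(n) <= c * g(n) for n in range(n0, upto))

    f = lambda n: 3*n*n - 100*n + 6
    g = lambda n: n*n
    print(check_big_oh(f, g, c=3, n0=1))   # True: witnesses f(n) = O(n^2)
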
Questions?
Topic: Working with the Big Oh
Big Oh Examples

3n² − 100n + 6 = O(n²) because 3n² > 3n² − 100n + 6 for all n ≥ 1
3n² − 100n + 6 = O(n³) because 0.01n³ > 3n² − 100n + 6 once n is large enough
3n² − 100n + 6 ≠ O(n) because c · n < 3n² whenever n > c

Think of the equality as meaning “is in the set of functions”.


Big Omega Examples

3n² − 100n + 6 = Ω(n²) because 2.99n² < 3n² − 100n + 6 once n is large enough
3n² − 100n + 6 ≠ Ω(n³) because 3n² − 100n + 6 < n³ once n is large enough
3n² − 100n + 6 = Ω(n) because 10¹⁰ · n < 3n² − 100n + 6 once n is large enough
Big Theta Examples

3n² − 100n + 6 = Θ(n²) because both O and Ω hold
3n² − 100n + 6 ≠ Θ(n³) because only O holds
3n² − 100n + 6 ≠ Θ(n) because only Ω holds
Big Oh Addition/Subtraction

Suppose f(n) = O(n²) and g(n) = O(n²).

• What do we know about g′(n) = f(n) + g(n)? Adding the bounding
  constants shows g′(n) = O(n²).
• What do we know about g″(n) = f(n) − |g(n)|? Since the bounding
  constants don’t necessarily cancel, all we can say is g″(n) = O(n²).

We know nothing about the lower bounds on g′ and g″ because we know
nothing about lower bounds on f and g.
Questions?
