Computational Complexity
Algorithm
An algorithm comprises a series of computational instructions
that convert the input into the desired output.
Why analyze algorithms?
Analyzing an algorithm means estimating the resources it will
demand. While resources like memory, communication bandwidth, or
logic gates may be crucial in some cases, the primary focus is typically
on measuring computational time.
By analyzing an algorithm, we get an idea of how it will scale as the
input size changes. Another reason for analyzing an algorithm is to
compare several algorithms that solve the same problem; through this,
we can determine which algorithm best suits the problem.
Factors considered in analysis:
Two factors considered while analyzing algorithms are time and space.
• Time: Using the actual run time on a specific computer isn't a reliable
basis for comparison, as it heavily relies on various factors like the
computer's speed, available RAM, operating system, and overall
quality. Instead, the complexity of an algorithm is determined by the
number of basic instructions executed in relation to the size of the
input data.
• Space: This factor is usually less significant than time, since if
additional space is needed, it can often be obtained in the form of
auxiliary storage.
What is computational complexity?
The amount of resources required to run an algorithm.
What is Big O notation?
Big O notation describes the complexity of your code using algebraic terms.
Big O notation is one of the most fundamental tools for computer scientists to
analyze the cost of an algorithm. Big-O is a standard mathematical notation that
shows how efficient an algorithm is in the worst-case scenario relative to its input
size. To measure the efficiency of an algorithm, we need to consider two things:
Time Complexity: How much time does it take to run completely?
Space Complexity: How much extra space does it require in the process?
Big-O notation captures the upper bound, showing how much time or space an
algorithm would require in the worst-case scenario as the input size grows. It is
usually written as:
f(n) = O(g(n)), where n is the input size, f(n) is the algorithm's cost, and g(n) is the growth rate that bounds it.
Dominant Term
The dominant term is the term that grows the fastest.
For example, n² grows faster than n, so if we have something like
g(n) = n² + 5n + 6, it will be O(n²).
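As a quick sanity check of that claim: for every n ≥ 1 we have
n² + 5n + 6 ≤ n² + 5n² + 6n² = 12n², so g(n) is bounded above by a constant
multiple of n², which is exactly what g(n) = O(n²) means.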
Best, Average, and Worst Case Complexity
• Worst Case Analysis (Usually Done)
In worst-case analysis, we calculate an upper bound on the running time of an
algorithm. We must know the case that causes the maximum number of
operations to be executed. For linear search, the worst case happens when
the element to be searched for (x in the sketch after this list) is not present
in the array.
• Average Case Analysis (Sometimes Done)
In average-case analysis, we take all possible inputs and calculate the
computing time for each of them, then sum all the calculated values and
divide the sum by the total number of inputs.
• Best Case Analysis (Bogus)
In best-case analysis, we calculate a lower bound on the running time of an
algorithm. We must know the case that causes the minimum number of
operations to be executed. In the linear search problem, the best case
occurs when x is present at the first location.
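A minimal linear search sketch in Python (the function and variable names are illustrative) makes the three cases concrete:

def linear_search(arr, x):
    """Return the index of x in arr, or -1 if x is not present."""
    for i in range(len(arr)):   # at most len(arr) comparisons
        if arr[i] == x:
            return i            # best case: x at index 0, 1 comparison, O(1)
    return -1                   # worst case: x absent, n comparisons, O(n)

If x is equally likely to be at any position (or absent), the expected number of
comparisons is roughly n/2, which is still O(n) in the average case.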
Examples of Big O notation
Constant Time O(1)
• Constant time algorithms are described as O(1) in big-O notation. These
algorithms always take roughly the same amount of time to execute,
regardless of the size of the input.
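A small illustrative sketch in Python (the function name is made up for this example): indexing into a list takes the same time no matter how long the list is.

def get_first(items):
    # A single index lookup, independent of len(items) -> O(1)
    return items[0]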
Linear Time O(N)
• In this type of algorithm, the run time scales linearly with the
input size (N).
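A small sketch (names are illustrative): summing a list touches every element exactly once, so the work grows linearly with N.

def sum_all(items):
    total = 0
    for value in items:   # loop body runs len(items) times -> O(N)
        total += value
    return total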
Quadratic Time O(N²)
• Quadratic time is common for algorithms that involve nested iterations
over the data set. Deeper levels of nesting result in O(N³), O(N⁴), and so on.
This class also includes simple sorting algorithms such as bubble sort,
selection sort, and insertion sort.
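As a sketch of the nested-iteration pattern, here is a simple bubble sort (one of the sorting algorithms mentioned above); the two nested loops over the data give O(N²) comparisons in the worst case.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):               # outer loop: n passes
        for j in range(n - 1 - i):   # inner loop: up to n-1 comparisons per pass
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]   # swap out-of-order pair
    return arr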
Steps for Analysing Algorithms
The general step-by-step procedure for Big-O runtime analysis is as
follows:
• Figure out what the input size is and what n represents.
• Express the maximum number of operations the algorithm performs
in terms of n.
• Ignore everything except the highest-order term.
• Ignore all the constant factors.
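A short worked example (the function below is hypothetical, chosen only to illustrate the steps): count the operations in terms of n, keep the highest-order term, and drop constants.

def example(items):
    # Step 1: the input size n is the number of items in the list.
    n = len(items)
    total = 0
    for x in items:           # runs n times
        total += x
    pairs = 0
    for x in items:           # nested loops run n * n times
        for y in items:
            pairs += 1
    # Step 2: total operations are roughly n² + n plus a few constants.
    # Step 3: keep only the highest-order term: n².
    # Step 4: drop constant factors: the function is O(n²).
    return total, pairs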
Classification of algorithms from best to worst performance
(running-time complexity):
A logarithmic algorithm – O(log n)
Runtime grows logarithmically in proportion to n.
A linear algorithm – O(n)
Runtime grows directly in proportion to n.
A superlinear algorithm – O(n log n)
Runtime grows in proportion to n log n, slightly faster than linear.
A polynomial algorithm – O(nᶜ)
Runtime grows faster than all of the previous classes, as a polynomial in n (for a constant c > 1).
An exponential algorithm – O(2ⁿ)
Runtime grows even faster than any polynomial algorithm as n increases.
A factorial algorithm – O(n!)
Runtime grows the fastest and quickly becomes unusable even for small
values of n.
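For the logarithmic class, a standard binary search sketch (on a sorted list; names are illustrative): each comparison halves the remaining range, so the number of iterations grows like log₂ n.

def binary_search(sorted_arr, x):
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:                    # each pass halves the range -> O(log n) passes
        mid = (lo + hi) // 2
        if sorted_arr[mid] == x:
            return mid
        elif sorted_arr[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                          # x not present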
Big O Notation
Big O is a relative representation of the complexity of an
algorithm; more specifically, it describes the order of growth of the
execution time as a function of the length of the input.