Time and Space Complexity Analysis of Algorithms
What is an Algorithm?
In computer science, whenever we want to solve a computational problem, we define a set of steps that must be followed to solve it. These steps are collectively known as an algorithm.
For example, suppose you have two integers "a" and "b" and you want to find their sum. How will you solve this? One possible solution for the above problem can be:
Take two integers as input
Create a variable "sum" to store the sum of the two integers
Put the sum of those two variables in the "sum" variable
Return the "sum" variable
What do you mean by a good Algorithm?
There can be many algorithms for a particular problem. So, how do you classify one algorithm as good and another as bad? Let's understand the properties of a good algorithm:
Correctness: An algorithm is said to be correct if, for every set of inputs, it halts with the correct output. If it produces the wrong output for even one input, the algorithm is incorrect.
Finiteness: People often ignore this, but it is one of the important factors in evaluating an algorithm. The algorithm must always terminate after a finite number of steps. For example, a recursive algorithm that never terminates ends in a stack overflow, and a loop that never terminates runs forever.
Efficiency: An efficient algorithm is always preferred. By efficiency, we mean that:
The algorithm should use the resources available to the system efficiently.
The computational time (the time taken to generate an output corresponding to a particular input) should be as small as possible.
The memory used by the algorithm should also be as small as possible. Generally, there is a trade-off between computational time and memory, so we need to decide whether time is more important than space, or vice versa, and then write the algorithm accordingly.
So, we have seen the three factors that can be used to evaluate an algorithm. Of these three, the most important is efficiency, so let's dive deeper into the efficiency of algorithms.
Algorithm Efficiency
The efficiency of an algorithm is mainly defined by two factors: time and space.
A good algorithm is one that takes less time and uses less space, but this is not possible all the time. There is a trade-off between time and space: if you want to reduce the time, then the space might increase. Similarly, if you want to reduce the space, then the time may increase. So, you have to compromise on either space or time. Let's learn more about the space and time complexity of algorithms.
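To make the trade-off concrete, here is a small Python sketch of our own (not part of the original text). The first version of Fibonacci recomputes subproblems, so it uses little extra memory but exponential time; the second caches every intermediate result, spending O(n) extra space to bring the time down to linear.

    from functools import lru_cache

    # Memory-light but slow: recomputes the same subproblems over and over,
    # so the number of calls grows exponentially with n.
    def fib_slow(n):
        if n < 2:
            return n
        return fib_slow(n - 1) + fib_slow(n - 2)

    # Fast but memory-hungry: the cache stores one entry per subproblem,
    # trading O(n) extra space for linear time.
    @lru_cache(maxsize=None)
    def fib_fast(n):
        if n < 2:
            return n
        return fib_fast(n - 1) + fib_fast(n - 2)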
Space Complexity and Time Complexity
Space Complexity
The space complexity of an algorithm denotes the total memory used or needed by the algorithm for its working, as a function of the input size.
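For example (a Python sketch of our own), both functions below compute the sum 1 + 2 + ... + n, but the first needs memory proportional to n while the second needs only a constant amount:

    # O(n) extra space: materializes all n numbers in a list before summing.
    def sum_with_list(n):
        numbers = list(range(1, n + 1))
        return sum(numbers)

    # O(1) extra space: keeps only a running total, whatever the value of n.
    def sum_running_total(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total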
Time Complexity
The time complexity is the number of operations an algorithm performs to complete its task with respect to the input size (assuming that each operation takes the same amount of time). The algorithm that performs the task in the smallest number of operations is considered the most efficient one.
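As an illustration (a hypothetical Python sketch, not from the original text), the two functions below solve the same problem, checking whether a list contains a duplicate, but with very different operation counts: the first compares every pair of elements (roughly n²/2 comparisons), while the second performs about n set lookups. By the definition above, the second is considered the more efficient algorithm.

    # Roughly n*(n-1)/2 comparisons: checks every pair of elements.
    def has_duplicate_pairs(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    # Roughly n operations: remembers what it has already seen.
    def has_duplicate_set(items):
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False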
Input size: The input size is defined as the total number of elements present in the input. For a given problem, we characterize the input size n appropriately. For example:
Sorting problem: the total number of items to be sorted
Graph problem: the total number of vertices and edges
Numerical problem: the total number of bits needed to represent a number
The time taken by an algorithm also depends on the computing speed of the system you are using, but we ignore such external factors and are concerned only with the number of times a particular statement is executed with respect to the input size. Say that executing one statement takes 1 second; then executing n statements takes n seconds.
One thing to note here is that we compare the time taken by different algorithms on the same input: if we change the input, an efficient algorithm may appear to take more time than a less efficient one simply because the input sizes are different.
Asymptotic notation
We have seen that we can't judge an algorithm by calculating the time taken during its execution on a particular system. We need some standard notation to analyze the algorithm. We use asymptotic notation to analyze any algorithm and, based on that, find the most efficient one.
In asymptotic notation, we do not consider the system configuration; rather, we consider the order of growth of the running time with respect to the input size. We try to find how the time or the space taken by the algorithm will increase or decrease as the input size increases or decreases.
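A quick way to see order of growth in action (our own Python sketch): watch how an operation count reacts when the input size doubles. A linear count doubles, while a quadratic count quadruples, no matter which machine the code runs on.

    for n in (100, 200, 400):
        linear = n          # e.g. one pass over the input
        quadratic = n * n   # e.g. comparing every pair of elements
        print(f"n={n}: linear={linear}, quadratic={quadratic}")

    # n=100: linear=100, quadratic=10000
    # n=200: linear=200, quadratic=40000    (linear x2, quadratic x4)
    # n=400: linear=400, quadratic=160000   (linear x2, quadratic x4)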
There are three asymptotic notations that are used
to represent the time complexity of an algorithm.
They are:
Θ Notation (theta)
Big O Notation
Ω Notation
Before learning about these three asymptotic notations, we should learn about the best, average, and worst case of an algorithm.
Best case, Average case, and Worst case
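The cases below refer to a linear-search example. Since the original listing is not reproduced here, the following Python sketch reconstructs what is being assumed: scan the array from left to right and return 1 if the key is present, 0 otherwise.

    def is_present(arr, key):
        for value in arr:
            if value == key:   # the "if-condition" counted in the cases below
                return 1       # found: stop immediately
        return 0               # scanned all n elements without a match

    print(is_present([1, 2, 3, 4, 5], 1))  # best case: 1 comparison
    print(is_present([1, 2, 3, 4, 5], 6))  # worst case: 5 comparisons, output 0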
Best case: This is the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the example above, the array is [1, 2, 3, 4, 5] and we are checking whether "1" is present in it. Here, after only one comparison, you find that the element is present, so this is the best case of the algorithm.
Average case: We calculate the running time for all possible inputs, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of inputs.
Worst case: This is the upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. In our example, the worst case occurs if the given array is [1, 2, 3, 4, 5] and we check whether the element "6" is present in it. Here, the if-condition of the loop executes 5 times and the algorithm then outputs "0".
Θ Notation (theta)
The Θ notation is used to find the tight bound of an algorithm, i.e. it defines both an upper and a lower bound, and your algorithm's running time lies between these two levels. So, if a function is g(n), then the theta representation is written Θ(g(n)) and the relation is defined as:
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
The above expression can be read as: theta of g(n) is defined as the set of all functions f(n) for which there exist positive constants c1, c2, and n0 such that c1*g(n) is less than or equal to f(n), and f(n) is less than or equal to c2*g(n), for all n greater than or equal to n0.
For example:
if f(n) = 2n² + 3n + 1 and g(n) = n²
then for c1 = 2, c2 = 6, and n0 = 1, we can say that f(n) = Θ(n²)
Ω Notation
The Ω notation denotes the lower bound of an algorithm, i.e. the time taken by the algorithm can't be lower than this. In other words, this is the fastest time in which the algorithm can return a result: the time taken when it is provided with its best-case input. So, if a function is g(n), then the omega representation is written Ω(g(n)) and the relation is defined as:
Ω(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
The above expression can be read as: omega of g(n) is defined as the set of all functions f(n) for which there exist positive constants c and n0 such that c*g(n) is less than or equal to f(n), for all n greater than or equal to n0.
For example:
if f(n) = 2n² + 3n + 1 and g(n) = n²
then for c = 2 and n0 = 1, we can say that f(n) = Ω(n²)
Big O Notation
The Big O notation defines the upper bound of an algorithm, i.e. your algorithm can't take more time than this. In other words, Big O notation denotes the maximum time taken by an algorithm, its worst-case time complexity. This makes Big O the most commonly used notation for the time complexity of an algorithm. So, if a function is g(n), then the Big O representation of g(n) is written O(g(n)) and the relation is defined as:
O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
The above expression can be read as: Big O of g(n) is defined as the set of all functions f(n) for which there exist positive constants c and n0 such that f(n) is greater than or equal to 0 and less than or equal to c*g(n), for all n greater than or equal to n0.
For example:
if f(n) = 2n² + 3n + 1 and g(n) = n²
then for c = 6 and n0 = 1, we can say that f(n) = O(n²)
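The constants used in the three examples above can be checked numerically (a quick Python sanity check of our own): with c = 2 as the lower constant and c = 6 as the upper constant, 2n² ≤ f(n) ≤ 6n² holds for every n ≥ 1, which is exactly why f(n) is Ω(n²), O(n²), and therefore Θ(n²).

    def f(n):
        return 2 * n * n + 3 * n + 1

    # Verify 2*n^2 <= f(n) <= 6*n^2 for a range of n >= n0 = 1.
    assert all(2 * n * n <= f(n) <= 6 * n * n for n in range(1, 10_000))
    print("2n^2 <= f(n) <= 6n^2 holds for all tested n >= 1")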