Data Structures: Unit I
UNIT I: Introduction to Data Structures: Introduction to the Theory of Data Structures, Data
Representation, Abstract Data Types, Data Types, Primitive Data Types, Data Structure and Structured
Type, Atomic Type, Difference between Abstract Data Types, Data Types, and Data Structures,
Refinement Stages. Principles of Programming and Analysis of Algorithms: Software Engineering,
Program Design, Algorithms, Different Approaches to Designing an Algorithm, Complexity, Big ‘O’
Notation, Algorithm Analysis, Structured Approach to Programming, Recursion, Tips and Techniques for
Writing Programs in ‘C’.
UNIT II: Arrays: Introduction to Linear and Non-Linear Data Structures, One-Dimensional Arrays, Array
Operations, Two-Dimensional Arrays, Multidimensional Arrays, Pointers and Arrays, an Overview of
Pointers. Linked Lists: Introduction to Lists and Linked Lists, Dynamic Memory Allocation, Basic Linked
List Operations, Doubly Linked List, Circular Linked List, Atomic Linked List, Linked List in Arrays, Linked
List versus Arrays.
UNIT III: Stacks: Introduction to Stacks, Stack as an Abstract Data Type, Representation of Stacks through
Arrays, Representation of Stacks through Linked Lists, Applications of Stacks, Stacks and Recursion.
Queues: Introduction, Queue as an Abstract data Type, Representation of Queues, Circular Queues,
Double Ended Queues (Deques), Priority Queues, Application of Queues.
UNIT IV: Binary Trees: Introduction to Non-Linear Data Structures, Introduction to Binary Trees, Types of
Trees, Basic Definition of Binary Trees, Properties of Binary Trees, Representation of Binary Trees,
Operations on a Binary Search Tree, Binary Tree Traversal, Counting Number of Binary Trees, Applications
of Binary Tree.
UNIT V: Searching and sorting: Sorting – An Introduction, Bubble Sort, Insertion Sort, Merge Sort,
Searching – An Introduction, Linear or Sequential Search, Binary Search, Indexed Sequential Search
Graphs: Introduction to Graphs, Terms Associated with Graphs, Sequential Representation of Graphs,
Linked Representation of Graphs, Traversal of Graphs, Spanning Trees, Shortest Path, Application of
Graphs.
UNIT I
Introduction to Data Structures
A data structure is defined as a particular way of storing and organizing data in our devices so that the
data can be used efficiently and effectively. The main idea behind using data structures is to minimize the
time and space complexities. An efficient data structure takes minimum memory space and requires
minimum time to process the data.
A data structure is not only used for organizing the data. It is also used for processing, retrieving, and
storing data. There are different basic and advanced types of data structures that are used in almost
every program or software system that has been developed.
Importance:
1. Efficient data processing: Data structures provide a way to organize and store data in a way that
allows for efficient retrieval, manipulation, and storage of data. For example, using a hash table
to store data can provide constant-time access to data.
2. Memory management: Proper use of data structures can help to reduce memory usage and
optimize the use of resources. For example, using dynamic arrays can allow for more efficient
use of memory than using static arrays.
3. Code reusability: Data structures can be used as building blocks in various algorithms and
programs, making it easier to reuse code.
4. Abstraction: Data structures provide a level of abstraction that allows programmers to focus on
the logical structure of the data and the operations that can be performed on it, rather than on
the details of how the data is stored and manipulated.
5. Algorithm design: Many algorithms rely on specific data structures to operate efficiently.
Understanding data structures is crucial for designing and implementing efficient algorithms.
Data Representation
A computer does not understand human language. Any data, viz., letters, symbols, pictures, audio, videos,
etc., fed to a computer must first be converted to machine language. Computers represent data in the
following three forms –
Number System
We are introduced to the concept of numbers from a very early age. To a computer, everything is a number,
i.e., alphabets, pictures, sounds, etc., are numbers. Number systems are categorized into four types –
• Binary number system uses only two digits, 0 and 1.
• Octal number system uses 8 digits (0 to 7).
• Decimal number system uses 10 digits (0 to 9).
• Hexadecimal number system uses 16 digits (0 to 9 and A to F).
Number System    Base    Digits
Binary            2      0 1
Octal             8      0 1 2 3 4 5 6 7
Decimal          10      0 1 2 3 4 5 6 7 8 9
Hexadecimal      16      0 1 2 3 4 5 6 7 8 9 A B C D E F
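As a small illustration of the same value written in different bases (the value 29 is an arbitrary choice, and C's printf has no built-in binary format, so binary is shown only in a comment), a short C program can print one number in decimal, octal and hexadecimal:

#include <stdio.h>

int main(void)
{
    int n = 29;                          /* 29 in decimal is 11101 in binary */
    printf("Decimal     : %d\n", n);     /* prints 29 */
    printf("Octal       : %o\n", n);     /* prints 35 */
    printf("Hexadecimal : %X\n", n);     /* prints 1D */
    return 0;
}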
Bits – A bit is the smallest possible unit of data that a computer can recognize or use. A computer usually
uses bits in groups.
Bytes – A group of eight bits is called a byte. Half a byte (four bits) is called a nibble.
1 Byte = 8 Bits
Text Code
Text code is a format commonly used to represent alphabets, punctuation marks and other symbols. The
four most popular text code systems are –
EBCDIC
ASCII
Extended ASCII
Unicode
EBCDIC
Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit code that defines 256 symbols.
ASCII
American Standard Code for Information Interchange (ASCII) is a 7-bit code that specifies character values
from 0 to 127.
Extended ASCII
Extended American Standard Code for Information Interchange is an 8-bit code that specifies character
values from 128 to 255.
Unicode
Unicode is a worldwide character standard that uses 8 to 32 bits, depending on the encoding, to represent letters, numbers and symbols.
Data Types
Data structures can be classified into two main types: primitive data structures and non-primitive data
structures.
Primitive data structures: These are the most basic data structures and are usually built into
programming languages. Examples include:
Integer
Float
Character
Boolean
Double
Void
Non-primitive data structures: These are complex data structures that are built using primitive
data types. Non-primitive data structures can be further categorized into the following types:
Arrays: A collection of elements of the same data type, stored in contiguous memory locations.
Linked lists: A collection of elements that are connected by links or pointers.
Stacks: A collection of elements that follow the Last-In-First-Out (LIFO) principle.
Queues: A collection of elements that follow the First-In-First-Out (FIFO) principle.
Trees: A hierarchical data structure consisting of nodes connected by edges.
Graphs: A non-linear data structure consisting of nodes and edges.
The choice of data structure depends on the problem to be solved and the operations to be performed
on the data. Different data structures have different strengths and weaknesses and are suitable for
different scenarios. Understanding the different types of data structures and their characteristics is
important for efficient algorithm design and implementation.
• Abstract Data type (ADT) is a type (or class) for objects whose behavior is defined by a set of
values and a set of operations. The definition of ADT only mentions what operations are to be
performed but not how these operations will be implemented. It does not specify how data will
be organized in memory and what algorithms will be used for implementing the operations. It is
called “abstract” because it gives an implementation-independent view.
• The process of providing only the essentials and hiding the details is known as abstraction.
• The user of a data type does not need to know how that data type is implemented. For example, we
have been using primitive data types like int, float and char with knowledge only of the operations
that can be performed on them, without any idea of how they are implemented.
• So a user only needs to know what a data type can do, but not how it will be implemented. Think
of ADT as a black box which hides the inner structure and design of the data type. Now we’ll
define three ADTs namely List ADT, Stack ADT, Queue ADT.
1. List ADT
• The data is generally stored in key sequence in a list which has a head structure consisting of
count, pointers and address of compare function needed to compare the data in the list.
• The data node contains the pointer to a data structure and a self-referential pointer which points
to the next node in the list.
• The List ADT functions are given below:
• Get() – Return an element from the list at any given position.
• Insert() – Insert an element at any position of the list.
• Remove() – Remove the first occurrence of any element from a non-empty list.
• removeAt() – Remove the element at a specified location from a non-empty list.
• Replace() – Replace an element at any position by another element.
• Size() – Return the number of elements in the list.
• isEmpty() – Return true if the list is empty, otherwise return false.
• isFull() – Return true if the list is full, otherwise return false.
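A minimal sketch in C of how such a List ADT might be declared, assuming illustrative names (List, Node, insertFront and so on) that are not fixed by the definition above; only a couple of the listed operations are shown:

#include <stdlib.h>
#include <stdbool.h>

typedef struct Node {
    void        *data;    /* pointer to the caller's data structure    */
    struct Node *next;    /* self-referential pointer to the next node */
} Node;

typedef struct {
    int    count;                        /* number of elements in the list     */
    Node  *head;                         /* first node of the list             */
    int  (*compare)(void *a, void *b);   /* compare function for the list data */
} List;

/* Insert an element at the front of the list; returns false if memory is exhausted. */
bool insertFront(List *list, void *data)
{
    Node *node = malloc(sizeof(Node));
    if (node == NULL)
        return false;
    node->data = data;
    node->next = list->head;
    list->head = node;
    list->count++;
    return true;
}

int  listSize(List *list)    { return list->count; }
bool listIsEmpty(List *list) { return list->count == 0; }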
2. Stack ADT
• In the Stack ADT implementation, instead of the data being stored in each node, a pointer to the data
is stored.
• The program allocates memory for the data, and its address is passed to the stack ADT.
• The head node and the data nodes are encapsulated in the ADT. The calling function can only
see the pointer to the stack.
• The stack head structure also contains a pointer to the top and a count of the number of entries
currently in the stack.
• Push() – Insert an element at one end of the stack called top.
• Pop() – Remove and return the element at the top of the stack, if it is not empty.
• Peek() – Return the element at the top of the stack without removing it, if the stack is not empty.
• Size() – Return the number of elements in the stack.
• isEmpty() – Return true if the stack is empty, otherwise return false.
• isFull() – Return true if the stack is full, otherwise return false.
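A minimal sketch in C of a linked Stack ADT along the lines described above; the names (Stack, StackNode, push, pop, ...) are illustrative assumptions and error handling is kept deliberately simple:

#include <stdlib.h>
#include <stdbool.h>

typedef struct StackNode {
    void             *data;   /* pointer to the data allocated by the caller */
    struct StackNode *next;   /* link to the node below                      */
} StackNode;

typedef struct {
    int        count;   /* number of entries currently in the stack */
    StackNode *top;     /* pointer to the top node                  */
} Stack;

/* Push: insert an element at the top of the stack. */
bool push(Stack *s, void *data)
{
    StackNode *node = malloc(sizeof(StackNode));
    if (node == NULL)
        return false;            /* treat "out of memory" as a full stack */
    node->data = data;
    node->next = s->top;
    s->top = node;
    s->count++;
    return true;
}

/* Pop: remove and return the top element, or NULL if the stack is empty. */
void *pop(Stack *s)
{
    if (s->top == NULL)
        return NULL;
    StackNode *node = s->top;
    void *data = node->data;
    s->top = node->next;
    s->count--;
    free(node);
    return data;
}

/* Peek: return the top element without removing it. */
void *peek(Stack *s)         { return s->top ? s->top->data : NULL; }
int   stackSize(Stack *s)    { return s->count; }
bool  stackIsEmpty(Stack *s) { return s->count == 0; }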
3. Queue ADT
• The queue abstract data type (ADT) follows the basic design of the stack abstract data type.
• Each node contains a void pointer to the data and a link pointer to the next element in the queue. It
is the program's responsibility to allocate memory for storing the data.
• Enqueue() – Insert an element at the end of the queue.
• Dequeue() – Remove and return the first element of the queue, if the queue is not empty.
• Peek() – Return the first element of the queue without removing it, if the queue is not empty.
• Size() – Return the number of elements in the queue.
• isEmpty() – Return true if the queue is empty, otherwise return false.
• isFull() – Return true if the queue is full, otherwise return false.
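A minimal sketch in C of a linked Queue ADT mirroring the description above; as before, the names (Queue, QNode, enqueue, dequeue) are illustrative assumptions:

#include <stdlib.h>
#include <stdbool.h>

typedef struct QNode {
    void         *data;   /* void pointer to the caller's data */
    struct QNode *next;   /* link pointer to the next element  */
} QNode;

typedef struct {
    int    count;         /* number of elements in the queue */
    QNode *front;         /* first element (removal end)     */
    QNode *rear;          /* last element (insertion end)    */
} Queue;

/* Enqueue: insert an element at the end (rear) of the queue. */
bool enqueue(Queue *q, void *data)
{
    QNode *node = malloc(sizeof(QNode));
    if (node == NULL)
        return false;
    node->data = data;
    node->next = NULL;
    if (q->rear != NULL)
        q->rear->next = node;
    else
        q->front = node;      /* queue was empty */
    q->rear = node;
    q->count++;
    return true;
}

/* Dequeue: remove and return the first element, or NULL if the queue is empty. */
void *dequeue(Queue *q)
{
    if (q->front == NULL)
        return NULL;
    QNode *node = q->front;
    void *data = node->data;
    q->front = node->next;
    if (q->front == NULL)
        q->rear = NULL;       /* queue became empty */
    q->count--;
    free(node);
    return data;
}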
Abstract data types (ADTs) have several advantages and disadvantages that should be considered when
deciding to use them in software development. Here are some of the main advantages and
disadvantages of using ADTs:
Advantages:
• Encapsulation: ADTs provide a way to encapsulate data and operations into a single unit, making
it easier to manage and modify the data structure.
• Abstraction: ADTs allow users to work with data structures without having to know the
implementation details, which can simplify programming and reduce errors.
• Data Structure Independence: ADTs can be implemented using different data structures, which
can make it easier to adapt to changing needs and requirements.
• Information Hiding: ADTs can protect the integrity of data by controlling access and preventing
unauthorized modifications.
• Modularity: ADTs can be combined with other ADTs to form more complex data structures,
which can increase flexibility and modularity in programming.
Disadvantages:
• Overhead: Implementing ADTs can add overhead in terms of memory and processing, which can
affect performance.
• Complexity: ADTs can be complex to implement, especially for large and complex data
structures.
• Learning Curve: Using ADTs requires knowledge of their implementation and usage, which can
take time and effort to learn.
• Limited Flexibility: Some ADTs may be limited in their functionality or may not be suitable for all
types of data structures.
• Cost: Implementing ADTs may require additional resources and investment, which can increase
the cost of development.
A Data structure is a collection of different types of data on which a specific set of operations can be
performed. It is a collection of different data types. It is a way of organizing the data in memory. The
various operations that can be performed on a data structure are insertion, deletion, and traversal. For
example, suppose we want to store the data of many students, where each student has a name, a student
id, and a mobile number. Storing such data requires complex data management, which calls for a data
structure comprising multiple primitive data types.
Difference between Data Type and Data Structure:
Data type – The implementation of a data type is known as an abstract implementation.
Data structure – The implementation of a data structure is known as a concrete implementation.
Data type – It can hold a value but not data; therefore, we can say that it is data-less.
Data structure – It can hold multiple types of data within a single object.
Data type – A value can be assigned directly to a variable.
Data structure – Some operations are used to assign the data to the data structure object.
Data type – There is no concern about time complexity.
Data structure – When we deal with a data structure object, time complexity plays an important role.
Data type – Examples are int, float, char.
Data structure – Examples are stack, queue, tree, graph.
Difference between Abstract Data Type, Data Type and Data Structure
• A data type is a type together with a collection of operations to manipulate the type. For
example, an integer variable is a member of the integer data type. Addition is an example of an
operation on the integer data type.
• A distinction should be made between the logical concept of a data type and its physical
implementation in a computer program. For example, there are two traditional implementations
for the list data type: the linked list and the array-based list. The list data type can therefore be
implemented using a linked list or an array. But we don’t need to know how the list is
implemented when we wish to use a list to help in a more complex design. For example, a list
might be used to help implement a graph data structure.
• As another example, the term “array” could refer either to a data type or an implementation.
“Array” is commonly used in computer programming to mean a contiguous block of memory
locations, where each memory location stores one fixed-length data item. By this meaning, an
array is a physical data structure. However, array can also mean a logical data type composed of
a (typically homogeneous) collection of data items, with each data item identified by an index
number. It is possible to implement arrays in many different ways besides as a block of
contiguous memory locations. The sparse matrix refers to a large, two-dimensional array that
stores only a relatively few non-zero values. This is often implemented with a linked structure, or
possibly using a hash table. But it could be implemented with an interface that uses traditional
row and column indices, thus appearing to the user in the same way that it would if it had been
implemented as a block of contiguous memory locations.
• An abstract data type (ADT) is the specification of a data type within some language,
independent of an implementation. The interface for the ADT is defined in terms of a type and a
set of operations on that type. The behavior of each operation is determined by its inputs and
outputs. An ADT does not specify how the data type is implemented. These implementation
details are hidden from the user of the ADT and protected from outside access, a concept
referred to as encapsulation.
• A data structure is the implementation for an ADT. In an object-oriented language, an ADT and
its implementation together make up a class. Each operation associated with the ADT is
implemented by a member function or method. The variables that define the space required by
a data item are referred to as data members. An object is an instance of a class, that is,
something that is created and takes up storage during the execution of a computer program.
Atomic Types
The boolean type bool can have one of two values, t or f. The standard logical operations (e.g. not,
and, or, xor, nor, nand) are predefined. The operations and, or, xor, nor and nand all use infix notation.
The integer type int is the set of (positive and negative) integers that can be represented in the
fixed precision of a machine-sized word. The exact precision is machine dependent, but will
always be at least 32-bits. The standard functions on integers (+, -, *, /, ==, >, <, negate, …) are
predefined, and use infix notation (see Appendix A for the precedence rules).
The character type char is the set of ASCII characters. The characters have a fixed order and all
the comparison operations (e.g. ==, <, >=, ...) can be used. Characters are written by
placing a ` in front of the character.
The global variables space, newline and tab are bound to the appropriate characters.
The type float is used to specify floating-point numbers. The exact representation of these
numbers is machine specific, but NESL tries to use 64-bit IEEE when possible. Floats support
most of the same functions as integers, and also have several additional functions (e.g. round,
truncate, sqrt, log, ...). Floats must be written with a decimal point in them so that they can
be distinguished from integers.
There is no implicit coercion between scalar types. To add 2 and 3.0, for example, it is necessary
to coerce one of them.
Software Engineering
The need for software engineering arises because of the following reasons:
Large software - It is easier to build a wall than a house or building; likewise, as the size of the
software becomes large, engineering has to step in to give it a scientific process.
Scalability- If the software process were not based on scientific and engineering concepts, it
would be easier to re-create new software than to scale an existing one.
Cost - The hardware industry has shown its skills, and huge manufacturing has lowered the
price of computer and electronic hardware. But the cost of software remains high if a proper
process is not adopted.
Dynamic Nature - The always growing and adapting nature of software hugely depends upon the
environment in which the user works. If the nature of the software keeps changing, new
enhancements need to be made to the existing one. This is where software engineering plays
an important role.
Quality Management - A better process of software development provides a better quality
software product.
Characteristics:
Operational
This tells us how well the software works in operations. It can be measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect briefs about how well the software has the capability to maintain itself in the
ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
Program Design
Program design consists of the steps a programmer should take before starting to code the
program in a specific language. These steps, when properly documented, will make the
completed program easier for other programmers to maintain in the future. There are three
broad areas of activity:
Understanding the Program
Using Design Tools to Create a Model
Develop Test Data
Understanding the Program
If you are working on a project as one of many programmers, the system analyst may have
created a variety of documentation items that will help you understand what the program is to
do. These could include screen layouts, narrative descriptions, documentation showing the
processing steps, etc. If you are not on a project and you are creating a simple program you
might be given only a simple description of the purpose of the program. Understanding the
purpose of a program usually involves understanding its:
Inputs
Processing
Outputs
This IPO approach works very well for beginning programmers. Sometimes, it might help to
visualize the program running on the computer. You can imagine what the monitor will look like,
what the user must enter on the keyboard and what processing or manipulations will be done.
What is an algorithm?
An algorithm is a set of steps to complete a task. For example, making a cup of tea can be described
as an algorithm: boil water, add tea leaves, add milk and sugar, and serve.
Algorithm:
'a set of steps to accomplish or complete a task that is described precisely enough that a
computer can run it.'
Input: After designing an algorithm, the algorithm is given the necessary and
desired inputs.
Processing unit: The input is passed to the processing unit, producing the
desired output.
Described precisely: it is very difficult for a machine to know how much water or milk is to be added,
etc., in the above tea-making algorithm.
Such algorithms run on computers or computational devices, for example, GPS in our smartphones and
Google Hangouts.
GPS uses a shortest path algorithm. Online shopping uses cryptography, which uses the RSA algorithm.
Characteristics of an algorithm:-
• Must take an input.
• Must give an output.
• Definiteness – each step must be clear and unambiguous.
• Finiteness – it must terminate after a finite number of steps.
• Effectiveness – every step must be basic and feasible.
Performance
The real world is challenging to break down into smaller steps. If a problem
can easily be divided into smaller steps, it indicates that the problem is
feasible.
As you know, all programming languages share basic code constructs such as
loops (do, for, while) and flow control (if-else), and so on. An algorithm can
be written using these common constructs.
Example
Problem: Create an algorithm that multiplies two numbers and displays the
output.
Step 1 − Start
Step 2 − Declare three integers x, y and z
Step 3 − Define the values of x and y
Step 4 − Multiply the values of x and y
Step 5 − Store the result of Step 4 in z
Step 6 − Print z
Step 7 − Stop
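The same algorithm written as a small C program (the prompt text and variable names simply follow the steps above):

#include <stdio.h>

int main(void)
{
    int x, y, z;                        /* Step 2: declare three integers     */
    printf("Enter two numbers: ");      /* Step 3: read the values of x and y */
    scanf("%d %d", &x, &y);
    z = x * y;                          /* Steps 4-5: multiply and store in z */
    printf("z = %d\n", z);              /* Step 6: print z                    */
    return 0;                           /* Step 7: stop                       */
}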
Factors of an Algorithm
The following are the factors to consider when designing an algorithm:
Modularity: This feature was perfectly designed for the algorithm if you
are given a problem and break it down into small-small modules or small-
small steps, which is a basic definition of an algorithm.
Correctness: An algorithm is correct when the given inputs produce the desired
output, indicating that the algorithm was designed correctly and its analysis
has been completed correctly.
Maintainability: It means that the algorithm should be designed in a
straightforward, structured way so that when you redefine the algorithm,
no significant changes are made to the algorithm.
Functionality: It takes into account various logical steps to solve a real-
world problem.
Robustness: Robustness refers to an algorithm's ability to define the
problem clearly.
User-friendly: If the algorithm is not easy to understand, the designer will
struggle to explain it to the programmer.
Simplicity: If an algorithm is simple, it is simple to understand.
Extensibility: Your algorithm should be extensible if another algorithm
designer or programmer wants to use it.
Types of Algorithms
Two commonly used types of algorithms are:
Search Algorithm
Sort Algorithm
Search Algorithm
Every day, you look for something in your daily life. Similarly, in the case of a
computer, a large amount of data is stored in the computer, and whenever a
user requests data, the computer searches for that data in the memory and
returns it to the user. There are primarily two methods for searching data in
an array:
Linear Search
Linear search is a simple algorithm that begins searching for an element or a
value at the beginning of an array and continues until the required element
is found or the end of the array is reached. It compares the element to be
searched with all the elements in the array; if a match is found, the element's
index is returned; otherwise, -1 is returned. This algorithm can be applied to an unsorted list.
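A minimal C sketch of linear search as just described (the function name and parameters are illustrative):

/* Returns the index of key in a[0..n-1], or -1 if it is not present. */
int linearSearch(int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)          /* compare key with each element in turn    */
            return i;
    return -1;                    /* reached the end without finding a match  */
}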
Binary Search
Binary search is an efficient algorithm that finds an element very quickly,
but it can only be used on a sorted list. To implement binary search, the
elements must be stored in sorted order.
If the elements are stored randomly, binary search cannot be applied.
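A minimal C sketch of binary search on a sorted array (again, the names are illustrative):

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binarySearch(int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* middle of the current range */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;                  /* key lies in the right half */
        else
            high = mid - 1;                 /* key lies in the left half  */
    }
    return -1;
}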
Sort Algorithm
Sorting algorithms rearrange elements in an array or a given data
structure in ascending or descending order. The comparison operator
decides the new order of the elements.
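As an illustration of a comparison-based sort, here is a minimal C sketch of bubble sort (one of the sorts covered in Unit V); the comparison a[j] > a[j + 1] is what decides the new order:

/* Sorts a[0..n-1] in ascending order by repeatedly swapping adjacent
   out-of-order elements, so larger values "bubble" towards the end. */
void bubbleSort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}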
Approaches of an Algorithm
After considering both the theoretical and practical importance of designing
an algorithm, the following approaches are commonly used:
Greedy Algorithm
This is an algorithm paradigm that makes the best choice possible on each
iteration in the hopes of choosing the best solution. It is simple to set up and
has a shorter execution time. However, there are very few cases where it is
the best solution.
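A minimal C sketch of the greedy idea, using making change as an assumed example (the denomination set and amount are arbitrary; greedy happens to work well for this set but is not optimal for every set of denominations):

#include <stdio.h>

int main(void)
{
    int denominations[] = {100, 50, 20, 10, 5, 2, 1};
    int n = sizeof(denominations) / sizeof(denominations[0]);
    int amount = 93;                            /* amount for which change is needed */

    for (int i = 0; i < n; i++) {
        int count = amount / denominations[i];  /* greedily take the largest coin as often as possible */
        if (count > 0)
            printf("%d x %d\n", count, denominations[i]);
        amount %= denominations[i];             /* remainder still to be paid */
    }
    return 0;
}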
Dynamic Programming
It improves the efficiency of the algorithm by storing intermediate results. It
goes through five steps to find the best solution to the problem:
1. It breaks down the complex problem into simpler subproblems.
2. After breaking down the problem into subproblems, it finds the best
solution from these subproblems.
3. It stores the results of the subproblems (memoization).
4. It reuses the stored results to prevent them from being recomputed for the same
subproblems.
5. Finally, it combines the stored results to compute the result of the complex problem.
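A minimal C sketch of dynamic programming, using Fibonacci numbers with memoization as an assumed example: each subproblem fib(i) is solved once, stored, and then reused.

#include <stdio.h>

#define N 50
long long memo[N];                       /* 0 means "not computed yet" here */

long long fib(int n)
{
    if (n <= 1)
        return n;                        /* base subproblems            */
    if (memo[n] != 0)
        return memo[n];                  /* reuse a stored result       */
    memo[n] = fib(n - 1) + fib(n - 2);   /* combine smaller subproblems */
    return memo[n];
}

int main(void)
{
    printf("%lld\n", fib(40));           /* prints 102334155 */
    return 0;
}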
Complexity of an Algorithm
The performance of an algorithm is usually measured in terms of time complexity and space complexity.
Time Complexity
The time complexity of an algorithm is the amount of time it needs to complete its execution, expressed
as a function of the input size n. For example, suppose you have to calculate the multiplication of n numbers:
mul = 1;
for i = 1 to n
    mul = mul * i;
// when the loop ends, mul holds the multiplication of the n numbers
return mul;
The time complexity of the loop statement in the preceding code is at least
n, and as the value of n grows, so does the time complexity. The return
statement, i.e., return mul, has constant complexity because its cost does not
depend on the value of n; it produces the result in a single step.
The worst-case time complexity is generally considered because it is the
maximum time required for any given input size.
Space Complexity
The amount of space an algorithm requires to solve a problem and produce
an output is called its space complexity. Space complexity, like time
complexity, is expressed in big O notation.
Space is required by an algorithm for the following reasons:
1. To store the program instructions.
2. To store constant values.
3. To store variable values.
4. To keep track of function calls, jumping statements, and so on.
Big O Notation
Big O notation provides an upper bound on a function, which ensures that the function never grows faster
than that upper bound.
It is the formal way to express the upper boundary of an algorithm's running time. It measures the worst
case of time complexity, i.e., the longest amount of time the algorithm may take to complete its operation.
It is defined as follows:
If f(n) and g(n) are two functions defined for positive integers, then f(n) = O(g(n)) (read as "f(n) is big oh
of g(n)" or "f(n) is of the order of g(n)") if there exist constants c and n0 such that:
f(n) <= c.g(n) for all n >= n0
This implies that f(n) does not grow faster than g(n), or that g(n) is an upper bound on the function f(n). Here
we are calculating the growth rate of the function, which eventually gives the worst-case time complexity
of the function, i.e., how badly an algorithm can perform.
For example, let f(n) = 2n+3 and g(n) = n, so we need constants c and n0 such that 2n+3 <= c.n. Take c = 5.
If n = 1:
2*1+3 <= 5*1
5 <= 5
If n = 2:
2*2+3 <= 5*2
7 <= 10
For any value of n starting from 1, the condition 2n+3 <= 5n is satisfied. Therefore, for the constant c = 5
and the constant n0 = 1, it always holds that 2n+3 <= c.n. As the condition is satisfied, f(n) is big oh of g(n);
in other words, f(n) grows at most linearly. It follows that c.g(n) is an upper bound on f(n).
Analysis of algorithm
In theoretical analysis of algorithms, it is common to estimate their complexity in the asymptotic
sense, i.e., to estimate the complexity function for arbitrarily large input. The term “analysis of
algorithms” was coined by Donald Knuth.
Usually, the efficiency or running time of an algorithm is stated as a function relating the input
length to the number of steps, known as time complexity, or volume of memory, known as space
complexity.
By considering an algorithm for a specific problem, we can begin to develop pattern recognition
so that similar types of problems can be solved by the help of this algorithm.
Algorithms are often quite different from one another, even though their objective is the same.
For example, we know that a set of numbers can be sorted using different algorithms.
The number of comparisons performed by one algorithm may differ from that of another for the same input;
hence, the time complexities of those algorithms may differ. At the same time, we need to calculate the
memory space required by each algorithm.
Analysis of algorithm is the process of analyzing the problem-solving capability of the algorithm
in terms of the time and size required (the size of memory for storage while implementation). However,
the main concern of analysis of algorithms is the required time or performance. Generally, we perform
the following types of analysis –
Worst-case – the maximum number of steps taken on any instance of size a.
Best-case – the minimum number of steps taken on any instance of size a.
Average case – an average number of steps taken on any instance of size a.
Amortized – a sequence of operations applied to the input of size a, averaged over time.
To solve a problem, we need to consider space complexity as well as time complexity, as the program may
run on a system where memory is limited but adequate time is available, or vice versa. In this context,
compare bubble sort and merge sort. Bubble sort does not require additional memory, but merge sort
requires additional space. Though the time complexity of bubble sort is higher than that of merge sort,
we may need to apply bubble sort if the program has to run in an environment where memory is very
limited.
Rate of Growth
Rate of growth is defined as the rate at which the running time of the algorithm is increased when the
input size is increased.
The growth rate can be categorized into two types: linear and exponential. If the running time of the
algorithm increases linearly with an increase in input size, it is a linear growth rate. If the running time of
the algorithm increases exponentially with an increase in input size, it is an exponential growth rate.
Once an algorithm is designed to solve a problem, it becomes very important that the algorithm always
returns the desired output for every input given. So, there is a need to prove the correctness of an
algorithm designed.
Structured Approaches to designing an algorithm
In structured programming design, programs are broken into different functions. These functions
are also known as modules, subprograms, subroutines or procedures.
Each function is designed to do a specific task with its own data and logic. Information can be
passed from one function to another through parameters. A function can have local data that
cannot be accessed outside the function's scope. The result of this process is that all the other
functions are combined in another function, known as the main function. Many high-level
languages support structured programming.
Structured programming minimizes the chance of one function affecting another. It supports
writing clearer programs. It made global variables largely disappear, replaced by local variables; due
to this change, the memory space occupied by global variables can be saved. Its organization helps in
understanding the programming logic easily, so one can easily follow the logic behind the programs. It
also helps newcomers in a company to understand the programs created by their senior colleagues, and
it makes debugging easier.
Functional abstraction was introduced with structured programming. Abstraction simply means
the ability to look at something without knowing about its inner details. In structured programming,
it is important to know that a given function satisfies its requirement and performs a specific task;
how that task is performed is not important.
However, structured programming also has some disadvantages:
A high-level language has to be translated into machine language by a translator, and thus a
price in computer time is paid.
The object code generated by a translator might be inefficient compared to an equivalent
assembly language program.
Data types are processed in many functions in a structured program. When changes occur in those
data types, the corresponding change must be made at every location that acts on those data
types within the program. This is a very time-consuming task if the program is very large.
Recursion
Recursion is the process which comes into existence when a function calls a copy of
itself to work on a smaller problem. Any function which calls itself is called a recursive
function, and such function calls are called recursive calls. Recursion involves several
recursive calls; however, it is important to impose a termination condition on the
recursion. Recursive code is shorter than iterative code, but it can be more difficult to understand.
Recursion cannot be applied to all problems, but it is more useful for tasks that can
be defined in terms of similar subtasks. For example, recursion may be applied to sorting,
searching, and traversal problems.
Generally, iterative solutions are more efficient than recursion since a function call
always has some overhead. Any problem that can be solved recursively can also be solved iteratively.
However, some problems are best suited to recursion, for example, the Tower of
Hanoi, the Fibonacci series, and finding a factorial.
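A minimal recursive sketch in C of the Tower of Hanoi mentioned above: moving n disks from one peg to another is defined in terms of the same task on n-1 disks (the function and peg names are illustrative).

#include <stdio.h>

void hanoi(int n, char from, char to, char via)
{
    if (n == 1) {                                   /* termination condition */
        printf("Move disk 1 from %c to %c\n", from, to);
        return;
    }
    hanoi(n - 1, from, via, to);                    /* move n-1 disks out of the way */
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, via, to, from);                    /* move them onto the target peg */
}

int main(void)
{
    hanoi(3, 'A', 'C', 'B');                        /* 7 moves for 3 disks */
    return 0;
}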
Need of Recursion
Recursion is an amazing technique with the help of which we can reduce the
length of our code and make it easier to read and write. It has certain advantages over
the iteration technique, which will be discussed later. For a task that can be defined in
terms of a similar subtask, recursion is one of the best solutions, for example, the
factorial of a number.
Properties of Recursion:
• It performs the same operation multiple times on progressively smaller inputs.
• In every step, we try to make the problem smaller.
• A base condition is needed to stop the recursion; otherwise, an infinite loop will occur.
Algorithm: Steps
The algorithmic steps for implementing recursion in a function are as follows:
Step 1 - Define a base case: Identify the simplest case for which the solution is known or trivial. This is
the stopping condition for the recursion, as it prevents the function from infinitely calling itself.
Step 2 - Define a recursive case: Define the problem in terms of smaller subproblems. Break the problem
down into smaller versions of itself, and call the function recursively to solve each subproblem.
Step 3 - Ensure the recursion terminates: Make sure that the recursive function eventually reaches the base
case, and does not enter an infinite loop.
Step 4 - Combine the solutions: Combine the solutions of the subproblems to solve the original problem.
Example: a C program to find the factorial of a number using recursion.

#include <stdio.h>

int fact(int n);                           /* recursive function declaration */

int main()
{
    int n, f;
    printf("Enter a number: ");
    scanf("%d", &n);
    f = fact(n);
    printf("factorial = %d", f);
    return 0;
}

int fact(int n)
{
    if (n == 0)
    {
        return 1;                          /* base case: 0! = 1 */
    }
    else if (n == 1)
    {
        return 1;                          /* base case: 1! = 1 */
    }
    else
    {
        return n * fact(n - 1);            /* recursive case: n! = n * (n-1)! */
    }
}