Data Structure

The document discusses algorithm complexity, focusing on time and space complexities, and provides examples of various complexities using Big O notation. It also explains data structures such as queues and linked lists, detailing their operations, types, advantages, and limitations. Additionally, it describes the merge sort algorithm, its steps, time and space complexities, and applications.


Name - Nupoor Sethi

Roll No. - 2414510515

Course code & name - DCA 1207 Data Structure
Session - Feb - March 2025
Program - Bachelor of Computer Applications (BCA)
Semester - II

Set–1

Answer 1. The complexity of an algorithm refers to the computational resources that an algorithm needs to execute and solve a problem, with time and space being the two main ones. It serves as a method of measuring the performance and efficiency of algorithms, especially when dealing with huge datasets or real-time systems.

Time Complexity of an algorithm quantifies the time taken by the algorithm to finish as a function of the input size, usually denoted by 'n'. Because it captures how running time grows with the input, time complexity is the standard way to reason about an algorithm's scalability. It is usually described using Big 'O' notation, which characterizes the worst-case scenario for the time an algorithm might take. Some common time complexities are:

O(1) - Constant time: The time taken is independent of the input size. Example: Accessing an array element by its index.

O(n) - Linear time: Time grows linearly with the input size. Example: Linear search in an unsorted array.

O(log n) - Logarithmic time: Time grows logarithmically with the input size. Example: Binary search in a sorted array.

O(n^2) - Quadratic time: Typical of algorithms with nested loops, such as bubble sort.

Example: Consider linear search and binary search. Linear search checks for the presence of an item by examining one element of the array at a time until the desired value is found, taking O(n) time in the worst case. In contrast, binary search requires the array to be sorted; at each step the algorithm halves the remaining range and proceeds with the half that can contain the target, resulting in O(log n) time complexity.
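The two searches above can be sketched in Python (a minimal illustration; the function names are my own, not from the course material):

```python
def linear_search(arr, target):
    # Check each element in turn: O(n) comparisons in the worst case.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(arr, target):
    # Requires a sorted array; halves the search range each step: O(log n).
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

For an array of a million elements, linear search may need up to a million comparisons, while binary search needs at most about twenty.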

Space Complexity is the amount of memory an algorithm requires with respect to the input size. This includes the space needed for variables, data storage, temporary space for function calls, and so on. Space complexity is also expressed in Big O notation. For example:
O(1): An in-place algorithm, such as swapping two variables, uses constant space.
O(n): A program that creates an auxiliary array of size 'n' uses a linear amount of space.

Example: Merge Sort requires O(n) space to hold the auxiliary arrays used for merging, whereas Quick Sort requires only O(log n) space for its recursion stack and needs no extra storage for arrays.

Understanding algorithm complexity is vital for selecting the fastest and most resource-efficient algorithm for applications where speed is critical (financial trading platforms, AI algorithms, real-time monitoring systems).

Answer 2. Here is the step-by-step algorithm for searching a number in an array and
replacing it with another value:

Algorithm:

1. Start

2. Input size of array ‘n’

3. Declare an array of size ‘n’

4. Input all elements of the array

5. Input the number `x` to be searched

6. Input the number `y` to replace `x` with

7. Traverse the array from index 0 to n-1:
   a. If array[i] == x:
      i. Set array[i] = y

8. Print the modified array

9. End
Example: Suppose the original array is: [3, 7, 9, 7, 2]

Element to be searched: 7

Replacement value: 99

Output array: [3, 99, 9, 99, 2]
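The algorithm above can be sketched in Python (the function name replace_all is illustrative, not from the source):

```python
def replace_all(arr, x, y):
    # Traverse the array once; replace every occurrence of x with y in place.
    for i in range(len(arr)):
        if arr[i] == x:
            arr[i] = y
    return arr

print(replace_all([3, 7, 9, 7, 2], 7, 99))  # [3, 99, 9, 99, 2]
```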

Time complexity: O(n), as the algorithm has to traverse the whole array.
Space complexity: O(1), as the operation is performed in place.

The algorithm is easy and fast to implement for small-to-medium datasets, thereby laying the
foundation for more complex data manipulation operations.

Answer 3. A Queue is a linear data structure that works on the principle of FIFO (First In
First Out). This implies that the element inserted first into the queue is also the first to be
removed. This situation is like a real-life queue, such as a line of people waiting to buy movie
tickets.

Queue Basic Operations:

1. Enqueue: Insert an element at the rear of the queue.

2. Dequeue: Remove an element from the front of the queue.

3. Peek/Front: View the element at the front without removing it.

4. IsEmpty: Check whether the queue is empty.

5. IsFull: Check whether the queue is full (for fixed-size queues).
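These five operations can be sketched as a small Python class (a minimal illustration built on collections.deque; the class and method names are my own):

```python
from collections import deque

class Queue:
    """A simple FIFO queue; capacity is optional, for the IsFull check."""
    def __init__(self, capacity=None):
        self._items = deque()
        self._capacity = capacity

    def enqueue(self, item):
        # Insert at the rear.
        if self.is_full():
            raise OverflowError("queue is full")
        self._items.append(item)

    def dequeue(self):
        # Remove from the front (FIFO order).
        if self.is_empty():
            raise IndexError("queue is empty")
        return self._items.popleft()

    def peek(self):
        # View the front element without removing it.
        return self._items[0]

    def is_empty(self):
        return len(self._items) == 0

    def is_full(self):
        return self._capacity is not None and len(self._items) == self._capacity
```

A deque gives O(1) insertion at the rear and O(1) removal from the front, matching the cost one expects from a queue.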

Types of Queues:

Simple Queue: Insertion from rear and deletion from front.

Circular Queue: The last position in the queue is connected back to the first to create a
circular arrangement.

Priority Queue: Elements are processed based on priority rather than insertion order.

Double Ended Queue (Deque): Insertion and deletion can be performed at both ends.
Queue Applications:

Operating System: CPU scheduling (round robin), disk scheduling, and job scheduling.

Printers: Print jobs are maintained in print queues.

Web Servers: Handling multiple client requests in arrival order.

Data Buffers: Queues are used to buffer streaming data.

Breadth-First Search (BFS): In graphs and trees, queues are essential for level-order traversal.

In summary, queues are foundational structures used in both software systems and hardware
applications for managing ordered data flows.
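As an illustration of the BFS application mentioned above, here is a minimal level-order traversal sketch in Python (the graph layout and function name are assumed for the example):

```python
from collections import deque

def bfs_order(graph, start):
    # Level-order traversal: the queue guarantees nodes are visited
    # in increasing distance from the start node.
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()                # dequeue the front node
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)       # enqueue at the rear
    return order
```

Because the queue is FIFO, every node at distance k is dequeued before any node at distance k+1, which is exactly the level-order property BFS relies on.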

Set–2

Answer 4. A linked list is a dynamic data structure where elements, called nodes, are
connected by pointers. Unlike arrays, the elements are not stored in adjacent memory
locations. Each node in a linked list has two parts:

Data: The actual data stored.

Pointer: A reference to the next node, or both the previous and next nodes in some cases.

Types of Linked Lists:

1. Singly Linked List: Each node has one pointer that points to the next node. The last node
points to NULL.

2. Doubly Linked List: Each node has two pointers: one that points to the next node and
another that points to the previous node. This allows for two-way traversal.

3. Circular Linked List: The last node connects back to the first node, forming a circle. It can
be singly or doubly linked.

Advantages Over Arrays:


Dynamic Size: Linked lists can grow or shrink at runtime, which helps prevent memory
waste.

Efficient Insertions/Deletions: Adding or removing a node does not require shifting elements as in arrays. Once the insertion point is known, relinking the pointers takes O(1) time, which makes insertions and deletions at the beginning of the list, or at a node already held by reference, particularly fast.

Memory Use: Arrays can waste memory if the allocated size is not fully used. Linked lists
only allocate memory when necessary.

Ease of Implementation for Abstract Data Types: It is simple to implement stacks, queues,
and other data structures using linked lists.

Limitations:

Random Access is Not Possible: Unlike arrays, linked lists do not allow direct access by
index. You must go through the nodes one by one.

Extra Memory: Each node needs additional space for the pointer field.

Despite some drawbacks, linked lists are great when the number of data elements is unknown
ahead of time or when frequent insertions and deletions are required.
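A minimal singly linked list sketch in Python illustrates the O(1) insertion at the head and the node-by-node traversal discussed above (the class names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None          # the last node's pointer stays None (NULL)

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, data):
        # O(1): only pointer updates, no shifting of elements.
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):
        # O(n): no random access, so we walk the nodes one by one.
        out, current = [], self.head
        while current:
            out.append(current.data)
            current = current.next
        return out
```

Memory for each node is allocated only when insert_front is called, which is the dynamic-size advantage over a fixed-size array.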

Answer 5. A doubly circular queue is a variation of the traditional queue. In this setup, the
last node connects back to the first node. Each node has two pointers: one points to the next
node and the other points to the previous node. This design combines the benefits of circular
and doubly linked lists. It allows for operations from both ends with wrap-around capability.

Features:

- Allows bidirectional traversal


- Efficient insertion and deletion at both front and rear ends
- Useful in real-time applications requiring cyclic traversal, such as playlist looping and
real-time simulation

Algorithm to Display the Contents of a Doubly Circular Queue:

1. Start

2. Check if the queue is empty (front is NULL). If true, print "Queue is empty" and exit.

3. Initialize a temporary pointer, temp, to front.

4. Do:
   Print temp->data.
   Move temp to temp->next.

5. While temp is not equal to front.

6. End.

This algorithm ensures that every node prints exactly once by cycling through the circular
structure. Since each node links back to the start, we use a do-while loop to manage the
circular nature.
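The display algorithm above can be sketched in Python; since Python has no do-while loop, a while-True loop with a break plays the same role (the node and function names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

def display(front):
    # Process the current node first, then stop once we wrap back to front,
    # mirroring the do-while structure of the algorithm.
    if front is None:
        print("Queue is empty")
        return []
    items = []
    temp = front
    while True:
        items.append(temp.data)   # "print" step: collect temp->data
        temp = temp.next          # move temp to temp->next
        if temp is front:         # while condition: temp != front
            break
    print(items)
    return items
```

Because the body runs before the wrap-around check, a queue with a single node still prints its one element exactly once.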

Applications:

- Media players
- Browser history navigation
- Job scheduling
- Real-time simulations

Answer 6. Merge Sort is an efficient sorting algorithm that uses the divide-and-conquer method. It breaks the input array into smaller subarrays, sorts them, and then merges the sorted subarrays to create the final sorted array.

Steps in Merge Sort:

1. If the array has one or no element, return; it is already sorted.


2. Split the array into two halves.
3. Sort both halves using merge sort.
4. Merge the two sorted halves into one sorted array.

Merge Function:

Compare the front elements of the left and right subarrays.

Choose the smaller of the two and place it in the result array.

Repeat until one subarray is exhausted, then append the remaining elements of the other.

Divide-and-Conquer Explanation:

Divide: Split the array into two halves until each subarray has a single element.
Conquer: Sort each subarray recursively.
Combine: Merge the sorted subarrays into one sorted array.
Example: Array = [38, 27, 43, 3, 9, 82, 10]

Divide: [38, 27, 43] and [3, 9, 82, 10]


Sort: [27, 38, 43] and [3, 9, 10, 82]
Merge: [3, 9, 10, 27, 38, 43, 82]
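The divide, conquer, and combine steps above can be sketched in Python (a minimal illustration, not the only way to implement merge sort):

```python
def merge_sort(arr):
    # Divide: stop when the subarray has one or zero elements.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # conquer each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)         # combine the sorted halves

def merge(left, right):
    # Repeatedly take the smaller front element; <= keeps the sort stable.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])           # one of these two is already empty
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Using `<=` rather than `<` in the merge step is what preserves the relative order of equal elements, the stability property noted below.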

Time Complexity:

Best, average, and worst-case time complexity is O(n log n).


Space Complexity is O(n) because of auxiliary arrays used during merging.

Advantages:

Stable sort: keeps the relative order of equal elements.


Consistent time complexity.

Merge Sort is commonly used in applications that need stable sorting. It works well with
linked lists and in external sorting scenarios where memory is limited.
