PROGRAM-1
AIM:
To understand and analyse the methods for expressing and comparing the
complexity of algorithms by focusing on worst-case and average-case scenarios.
OBJECTIVES :
● To understand the importance of algorithmic complexity in evaluating computational performance.
● To distinguish between worst-case and average-case complexities.
● To apply these concepts to real-world algorithms and evaluate their efficiency.
INTRODUCTION :
Definition:
An algorithm is a finite, step-by-step sequence of instructions designed to solve a specific problem or perform a task. It takes an input, processes it according to a set of rules, and produces an output. Algorithms are used in computer science, mathematics, and everyday problem-solving. Algorithm analysis is a fundamental aspect of computer science, providing insights into how algorithms behave as input sizes grow. Complexity analysis helps estimate resource utilization, particularly time and space. Worst-case complexity assesses the maximum resources needed in the most challenging scenarios, while average-case complexity estimates the expected resource consumption for typical inputs.
METHODOLOGY:
1. Studying Complexity Analysis -
● Gain an understanding of algorithm complexity and its role in performance evaluation.
● Familiarize with standard complexity notations:
▪ Big-O (O): Upper bound, representing worst-case performance.
▪ Theta (Θ): Tight bound, where the upper and lower bounds coincide.
▪ Omega (Ω): Lower bound, representing best-case performance.
2. Analysing Worst-Case and Average-Case Complexities -
● Evaluate worst-case complexity to identify the algorithm's maximum time requirement.
● Study average-case complexity by considering typical input patterns and distributions.
3. Result Comparison -
● Record and compare execution times for different algorithms (a timing sketch is given below).
● Evaluate the impact of input size on worst-case and average-case performance.
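As one possible way to record execution times in step 3, the following minimal C sketch times linear search on its worst-case input (key absent) and on a typical input (key at a random position). The array sizes, data, and the clock()-based measurement are illustrative assumptions, not part of the original experiment.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Linear search: returns the index of key in arr[0..n-1], or -1 if absent. */
int linear_search(const int *arr, int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void) {
    int sizes[] = {10000, 100000, 1000000};   /* illustrative input sizes */
    srand(0);
    for (int s = 0; s < 3; s++) {
        int n = sizes[s];
        int *arr = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++)
            arr[i] = i;

        clock_t t0 = clock();
        int r1 = linear_search(arr, n, -1);          /* worst case: key not present */
        clock_t t1 = clock();
        int r2 = linear_search(arr, n, rand() % n);  /* typical case: key at a random position */
        clock_t t2 = clock();

        printf("n = %7d  worst = %.6f s  typical = %.6f s  (results %d, %d)\n", n,
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, r1, r2);
        free(arr);
    }
    return 0;
}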
OBSERVATIONS :
● Algorithms with higher worst-case complexity may still perform efficiently for smaller input sizes.
● Average-case analysis provides a better understanding of expected algorithm performance in typical scenarios.
● Recursive algorithms may have logarithmic, polynomial, or exponential complexity depending on how their recurrence divides the problem.
● Sorting algorithms like Merge Sort are consistent in both worst and average cases, while Quick Sort may degrade to O(n²) in specific situations (e.g., poor pivot choices).
CONCLUSION :
● Understanding algorithm complexity aids in selecting efficient algorithms for applications.
● Both worst-case and average-case analyses provide insights for optimizing performance.
PROGRAM-2
AIM :
To understand the concept of algorithm complexity and asymptotic notation and their significance in the analysis of algorithms.
OBJECTIVES :
● To understand the role of asymptotic notations (Big-O, Theta, Omega) in evaluating algorithmic efficiency.
● To explore how time and space complexities change as the size of the input data grows.
● To apply asymptotic analysis to common algorithms and assess their computational performance.
INTRODUCTION :
As algorithms are fundamental to computing, understanding their complexity is crucial for performance evaluation, especially as input sizes increase. Asymptotic notations, such as Big-O, Theta, and Omega, help express the efficiency of algorithms in a simplified manner, abstracting away constant factors and focusing on the relationship between input size and resource usage (time or space). This analysis aids developers in making decisions about algorithm selection, ensuring scalable solutions for a variety of use cases.
METHODOLOGY :
1. Understanding Asymptotic Notation -
Study the basic asymptotic notations:
● Big-O (O): Represents the upper bound of an algorithm's performance (worst case).
● Theta (Θ): Represents both the upper and lower bounds (tight bound).
● Omega (Ω): Represents the lower bound (best case).
2. Practical Implementation -
● Implement various algorithms and collect data on their execution times and space usage.
● Run experiments with different input sizes to observe the effect on algorithm efficiency (see the sketch after this list).
3. Comparing Results -
● Compare the performance of different algorithms using the asymptotic notations.
● Analyse how well these algorithms perform under worst-case and average-case conditions.
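A minimal C sketch for the experiments in step 2: it times one O(n) routine (summing an array) against one O(n²) routine (counting inversions by pairwise comparison) over increasing input sizes, so the difference in growth rates becomes visible. The chosen routines, sizes, and clock()-based timing are assumptions used only to illustrate the method.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* O(n): sum all elements once. */
long sum_linear(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* O(n^2): count pairs (i, j) with i < j and a[i] > a[j]. */
long count_inversions(const int *a, int n) {
    long c = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] > a[j]) c++;
    return c;
}

int main(void) {
    int sizes[] = {1000, 2000, 4000, 8000};   /* illustrative sizes */
    for (int s = 0; s < 4; s++) {
        int n = sizes[s];
        int *a = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++) a[i] = rand() % n;

        clock_t t0 = clock();
        long r1 = sum_linear(a, n);
        clock_t t1 = clock();
        long r2 = count_inversions(a, n);
        clock_t t2 = clock();

        printf("n = %5d  O(n): %.6f s  O(n^2): %.6f s  (sum=%ld, inversions=%ld)\n", n,
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, r1, r2);
        free(a);
    }
    return 0;
}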
OBSERVATIONS :
● Big-O notation is most commonly used to represent the worst-case scenario, ensuring that the algorithm does not exceed a certain resource limit.
● Theta notation provides a more precise estimation of an algorithm's performance by defining both the upper and lower bounds.
● Omega notation helps identify the best-case performance of an algorithm.
● Asymptotic analysis becomes more relevant for large input sizes, where the difference between linear, logarithmic, and quadratic complexities becomes evident.
CONCLUSION :
● Asymptotic analysis is essential for understanding algorithm efficiency, helping developers choose the most suitable solution for specific problems.
● By applying Big-O, Theta, and Omega notations, one can assess the scalability and performance of algorithms, leading to better decision-making in algorithm selection.
PROGRAM-3
AIM :
To implement and analyse the Linear Search and Binary Search algorithms in C, comparing their time complexities and efficiency.
OBJECTIVES :
● To understand the working principles of searching algorithms.
● To analyse the time complexity of both algorithms in best, worst, and average cases.
● To compare the efficiency of both algorithms and highlight their strengths and weaknesses in different scenarios.
INTRODUCTION :
Searching is a fundamental operation in computer science that involves finding a specific element in a collection of data. Two commonly used searching techniques are Linear Search and Binary Search; both are discussed in detail below.
❖ LINEAR SEARCH
DEFINITION:
Linear Search is a simple search algorithm that checks each element in a collection (such as an array or list) one by one until the desired element is found or the entire collection has been searched.
METHODOLOGY:
1. Start from the first element of the collection.
2. Compare each element with the target value.
3. If a match is found, return the index of the element.
4. If no match is found after checking all elements, return -1 or indicate that the element is not present.
APPLICATIONS:
● Searching in small datasets where efficiency is not a concern.
● Finding an element in an unsorted array or linked list.
● Searching in real-time applications like name lookups in contact lists.
ALGORITHM:
1. Start from the first element (index = 0).
2. Repeat for each element in the array: if the current element equals the key, return its index.
3. If the entire array is traversed and no match is found, return -1.
C PROGRAM IMPLEMENTATION :
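A minimal C sketch of the linear search described above; the hard-coded sample array and the prompt text are illustrative assumptions.

#include <stdio.h>

/* Returns the index of key in arr[0..n-1], or -1 if not present. */
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void) {
    int arr[] = {34, 7, 19, 52, 3, 81, 26};   /* illustrative data */
    int n = sizeof(arr) / sizeof(arr[0]);
    int key;

    printf("Enter the element to search: ");
    scanf("%d", &key);

    int index = linear_search(arr, n, key);
    if (index != -1)
        printf("Element %d found at index %d\n", key, index);
    else
        printf("Element %d not found\n", key);
    return 0;
}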
OUTPUT –
RECURRENCE RELATION :
For an array of size n, in the worst case, the algorithm may have to check all n
elements.
● T(n) = T(n-1) + O(1), with T(0) = O(1).
● This represents checking one element and then searching the remaining (n-1) elements; unrolling the recurrence gives T(n) = O(n).
COMPLEXITY ANALYSIS :
● Best Case - O(1): the element is found at the first position.
● Average Case - O(n): on average about half the elements are examined before the element is found.
● Worst Case - O(n): the element is at the last position or not present.
CONCLUSION:
Linear Search is a simple and easy-to-implement algorithm ideal for small or
unsorted datasets. While inefficient for large datasets, its simplicity makes it
useful for tasks like checking element existence or finding the first occurrence of
a value.
❖ BINARY SEARCH
DEFINITION :
Binary Search is an efficient searching algorithm used for finding an element in a sorted array. It follows the divide-and-conquer approach by repeatedly dividing the search space in half until the target element is found.
METHODOLOGY :
1. Ensure the array is sorted (sort it first if necessary).
2. Initialize two pointers -
● low → starting index (0)
● high → ending index (n-1)
3. Find the midpoint -
● mid = (low + high) / 2
4. Compare the midpoint element with the target -
● If arr[mid] == key, return the index (element found).
● If arr[mid] > key, search in the left half (high = mid - 1).
● If arr[mid] < key, search in the right half (low = mid + 1).
5. Repeat steps 3-4 until low > high.
6. If the element is not found, return -1.
APPLICATIONS :
● Searching in large sorted datasets (e.g., database records, sorted arrays).
● Finding elements in dictionary-based applications (e.g., word lookup).
● Used in algorithmic problems like finding square roots, searching in log files, and debugging.
ALGORITHM:
1. Initialize low = 0, high = n - 1.
2. Repeat while low ≤ high:
● Compute mid = (low + high) / 2.
● If arr[mid] == key, return mid.
● If arr[mid] > key, update high = mid - 1.
● If arr[mid] < key, update low = mid + 1.
3. If the loop exits, return -1 (element not found).
C PROGRAM IMPLEMENTATION :
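A minimal C sketch of the binary search described above; the sorted sample array and the prompt text are illustrative assumptions.

#include <stdio.h>

/* Returns the index of key in the sorted array arr[0..n-1], or -1 if not present. */
int binary_search(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* avoids overflow of (low + high) */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] > key)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;
}

int main(void) {
    int arr[] = {3, 7, 19, 26, 34, 52, 81};  /* must already be sorted */
    int n = sizeof(arr) / sizeof(arr[0]);
    int key;

    printf("Enter the element to search: ");
    scanf("%d", &key);

    int index = binary_search(arr, n, key);
    if (index != -1)
        printf("Element %d found at index %d\n", key, index);
    else
        printf("Element %d not found\n", key);
    return 0;
}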
OUTPUT :
RECURRENCE RELATION :
● Binary Search repeatedly divides the problem size by 2.
● T(n) = T(n/2) + O(1)
● By solving this recurrence using the Master Theorem, we get me
Complexity as O(log n).
COMPLEXITY ANALYSIS :
● Best Case - O(1): the element is found at the middle in the first step.
● Average Case - O(log n): the search space reduces by half in each step.
● Worst Case - O(log n): the element is found at the last comparison or not found at all.
CONCLUSION –
● Binary Search is much more efficient than Linear Search, especially for large datasets.
● It requires a sorted array, making sorting a prerequisite before searching.
● With O(log n) complexity, it performs significantly better than O(n) algorithms.
PROGRAM-4
AIM
To implement and analyse the Quick Sort and Merge Sort algorithms in C, comparing their time complexities and efficiency.
❖ QUICK SORT
DEFINITION:
Quick Sort is a divide-and-conquer sorting algorithm that works by selecting a pivot element and partitioning the array such that all elements smaller than the pivot are on the left and all larger elements are on the right. It then recursively sorts both partitions.
METHODOLOGY :
● Choose a pivot element (first, last, random, or median).
● Partition the array:
a) Move elements smaller than the pivot to the left.
b) Move elements greater than the pivot to the right.
● Recursively apply Quick Sort to the left and right subarrays.
● The array is sorted when all partitions have only one element.
APPLICATIONS :
● Efficient for large datasets due to its average-case O(n log n) time complexity.
● Used in database sorting for indexing and retrieval.
● Applied in search algorithms that require partitioning (e.g., QuickSelect).
● Variants are used in programming libraries, e.g., C's qsort() and Java's Arrays.sort() for primitive types (dual-pivot Quick Sort).
ALGORITHM :
1. Select a pivot element.
2. Partition the array into two subarrays:
a) Elements smaller than the pivot.
b) Elements greater than the pivot.
3. Recursively apply Quick Sort to the left and right subarrays.
4. Combine the sorted subarrays to get the final sorted list.
PSEUDOCODE :
QUICKSORT(arr, low, high):
    If (low < high):
        Partition the array and find the pivot index.
        Recursively call QUICKSORT(arr, low, pivot - 1).
        Recursively call QUICKSORT(arr, pivot + 1, high).
C PROGRAM IMPLEMENTATION :
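A minimal C sketch of Quick Sort using the Lomuto partition scheme with the last element as pivot; the sample array and the choice of partition scheme are illustrative assumptions.

#include <stdio.h>

/* Lomuto partition: places the pivot (last element) at its final position
   and returns that index. */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;
}

void quick_sort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);   /* sort elements left of the pivot  */
        quick_sort(arr, p + 1, high);  /* sort elements right of the pivot */
    }
}

int main(void) {
    int arr[] = {29, 10, 14, 37, 13, 5, 42};   /* illustrative data */
    int n = sizeof(arr) / sizeof(arr[0]);

    quick_sort(arr, 0, n - 1);

    printf("Sorted array: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}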
OUTPUT:
RECURRENCE RELATION:
T(n) = T(k) + T(n-k-1) + O(n), where k is the number of elements smaller than the pivot.
● Balanced partitions give T(n) ≈ 2T(n/2) + O(n) = O(n log n).
● If the pivot is always the smallest or largest element, T(n) = T(n-1) + O(n) = O(n²).
COMPLEXITY ANALYSIS :
● Best Case - O(n log n): when partitioning is balanced.
● Average Case - O(n log n): expected case for random inputs.
● Worst Case - O(n²): when the smallest or largest element is always chosen as the pivot.
CONCLUSION –
● Quick Sort is an efficient sorting algorithm suitable for large datasets.
● It sorts in place and is often faster than Merge Sort in practical scenarios.
● The choice of pivot affects performance (a random pivot helps avoid the worst case).
❖ MERGE SORT
DEFINITION :
Merge Sort is a divide-and-conquer sorting algorithm that splits an array into two halves, recursively sorts them, and then merges them into a sorted sequence.
METHODOLOGY :
1. Divide the array into two halves.
2. Recursively sort both halves.
3. Merge the two sorted halves into one sorted array.
APPLICATIONS :
● Used in external sorting where data is too large for RAM.
● Applied in sorting linked lists due to its stable nature.
● Used in parallel computing due to its recursive structure.
● Preferred in financial and scientific computing where stability is important.
ALGORITHM :
1. Divide the array into two equal halves.
2. Recursively sort the left half.
3. Recursively sort the right half.
4. Merge both sorted halves into a single sorted array.
PSEUDOCODE :
MERGE_SORT(arr, left, right):
    If left < right:
        Find the middle index.
        Recursively call MERGE_SORT on the left half.
        Recursively call MERGE_SORT on the right half.
        Merge the two sorted halves.
C PROGRAM IMPLEMENTATION –
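A minimal C sketch of Merge Sort following the pseudocode above; the sample array and the temporary-buffer merge are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

/* Merge the sorted halves arr[left..mid] and arr[mid+1..right]. */
void merge(int arr[], int left, int mid, int right) {
    int n = right - left + 1;
    int *tmp = malloc(n * sizeof(int));
    int i = left, j = mid + 1, k = 0;

    while (i <= mid && j <= right)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid)   tmp[k++] = arr[i++];
    while (j <= right) tmp[k++] = arr[j++];

    for (k = 0; k < n; k++)
        arr[left + k] = tmp[k];
    free(tmp);
}

void merge_sort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        merge_sort(arr, left, mid);        /* sort the left half   */
        merge_sort(arr, mid + 1, right);   /* sort the right half  */
        merge(arr, left, mid, right);      /* merge the two halves */
    }
}

int main(void) {
    int arr[] = {38, 27, 43, 3, 9, 82, 10};   /* illustrative data */
    int n = sizeof(arr) / sizeof(arr[0]);

    merge_sort(arr, 0, n - 1);

    printf("Sorted array: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}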
OUTPUT –
RECURRENCE RELATION :
T(n) = 2T(n/2) + O(n), which solves to O(n log n) by the Master Theorem.
COMPLEXITY ANALYSIS
● Best Case - O(n log n): the array is always divided into equal halves.
● Average Case - O(n log n): the array is recursively split and merged.
● Worst Case - O(n log n): merging is always required.
CONCLUSION :
Basis for comparison between Quick Sort and Merge Sort:
● Partitioning of the elements: in Quick Sort, the list is not necessarily split into halves; in Merge Sort, the array is always divided into two halves (n/2).
● Worst-case complexity: Quick Sort O(n²); Merge Sort O(n log n).
● Works well on: Quick Sort performs well on smaller arrays; Merge Sort operates fine on any type of array.
● Speed: Quick Sort is faster than other sorting algorithms for small data sets; Merge Sort has a consistent speed on all types of data sets.
● Additional storage space requirement: Quick Sort needs less; Merge Sort needs more.
● Efficiency: Quick Sort can become inefficient for larger arrays in its worst case; Merge Sort remains efficient.
● Sorting method: Quick Sort is an internal sorting method; Merge Sort is also suited to external sorting.
PROGRAM-5
AIM :
To implement and analyse Kruskal’s Algorithm for finding the Minimum Cost
Spanning Tree (MCST) of a graph using the Greedy method.
OBJECTIVES :
● To understand the concept of Minimum Spanning Tree (MST) and its applications.
● To implement Kruskal's Algorithm using the Greedy approach.
● To analyse the efficiency of Kruskal's Algorithm in constructing the Minimum Cost Spanning Tree (MCST).
INTRODUCTION:
DEFINITION :
Kruskal's Algorithm is a Greedy algorithm used to find the Minimum Spanning Tree (MST) of a weighted, connected, and undirected graph. It selects edges in increasing order of weight while avoiding cycles, connecting all vertices with the minimum possible cost.
METHODOLOGY :
1. Sort all edges in the graph in ascending order of their weights.
2. Initialize an empty set to store the MST edges.
3. For each edge in the sorted list:
● If adding the edge does not form a cycle, include it in the MST.
● Use the Union-Find (Disjoint Set) data structure to check and merge components.
4. Repeat until (V-1) edges are added to the MST, where V is the number of vertices. The resulting edges form the Minimum Cost Spanning Tree.
APPLICATIONS :
● Network Design: used in computer networks, telecommunication, and road networks to minimize cost.
● Electrical Grid Systems: finding the most cost-effective way to connect power stations.
● Clustering in Machine Learning: used in hierarchical clustering techniques.
● Image Processing: applied in segmentation and pattern recognition.
ALGORITHM :
1. Sort all edges of the graph in increasing order of weight.
2. Initialize an empty set for the Minimum Spanning Tree (MST).
3. For each edge (u, v) in the sorted list:
● a) If including (u, v) does not form a cycle, add it to the MST.
● b) Use the Disjoint Set (Union-Find) structure to check for cycle formation.
4. Repeat until the MST contains (V-1) edges.
5. Return the MST with the minimum total weight.
PSEUDOCODE :
KRUSKAL(Graph):
    Sort all edges in increasing order of weight.
    Initialize an empty MST and a disjoint-set data structure.
    For each edge (u, v) in sorted edges:
        If u and v belong to different components:
            Add (u, v) to MST.
            Merge the components of u and v.
    Return MST with minimum cost.
C PROGRAM IMPLEMENTATION :
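A minimal C sketch of Kruskal's Algorithm using qsort() for edge sorting and Union-Find with path compression and union by rank; the hard-coded sample graph is an illustrative assumption.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

int parent[100], rank_[100];

/* Find with path compression. */
int find_set(int x) {
    if (parent[x] != x)
        parent[x] = find_set(parent[x]);
    return parent[x];
}

/* Union by rank; returns 1 if the sets were merged, 0 if already in one component. */
int union_set(int a, int b) {
    a = find_set(a); b = find_set(b);
    if (a == b) return 0;
    if (rank_[a] < rank_[b]) { int t = a; a = b; b = t; }
    parent[b] = a;
    if (rank_[a] == rank_[b]) rank_[a]++;
    return 1;
}

int cmp_edge(const void *x, const void *y) {
    return ((const Edge *)x)->w - ((const Edge *)y)->w;
}

int main(void) {
    /* Illustrative graph: 4 vertices (0..3), 5 weighted edges. */
    Edge edges[] = { {0,1,10}, {0,2,6}, {0,3,5}, {1,3,15}, {2,3,4} };
    int V = 4, E = 5, total = 0, count = 0;

    for (int i = 0; i < V; i++) { parent[i] = i; rank_[i] = 0; }
    qsort(edges, E, sizeof(Edge), cmp_edge);   /* edges in increasing order of weight */

    printf("Edges in the MST:\n");
    for (int i = 0; i < E && count < V - 1; i++) {
        if (union_set(edges[i].u, edges[i].v)) {   /* adding this edge forms no cycle */
            printf("  (%d - %d)  weight %d\n", edges[i].u, edges[i].v, edges[i].w);
            total += edges[i].w;
            count++;
        }
    }
    printf("Minimum cost of spanning tree = %d\n", total);
    return 0;
}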
OUTPUT :
RECURRENCE RELATION –
Since Kruskal’s Algorithm sorts edges and performs Union-Find opera ons:
● Sor ng edges takes O(E log E).
● Union-Find opera ons take nearly O(E log V) (using path
compression).
● Overall complexity is O(E log E) ≈ O(E log V).
COMPLEXITY ANALYSIS
● Best Case - O(E log V): when the edges are already nearly sorted.
● Average Case - O(E log V): sorting plus Union-Find operations.
● Worst Case - O(E log V): when full sorting is required and Union-Find runs over all edges.
CONCLUSION
● Kruskal's Algorithm efficiently finds the Minimum Cost Spanning Tree using a Greedy approach.
● It is suitable for sparse graphs (graphs with fewer edges).
● It is widely used in network optimization problems.