DandC, Mergesort

The document discusses the Divide and Conquer and Decrease and Conquer algorithms, detailing their principles and examples such as Merge Sort and Quick Sort. It explains the general plan of Divide and Conquer, including the recursive approach and the Master Theorem for analyzing time complexity. Additionally, it provides a detailed explanation of Merge Sort, its algorithm, efficiency, and the advantages and disadvantages of using it.

Uploaded by

hima bindu

Unit – II

Divide and Conquer: Merge Sort, Quick Sort, Multiplication of Long Integers, Strassen’s
Matrix Multiplication.
Decrease and Conquer: Insertion Sort, Depth First Search, Breadth First Search, Topological
Sorting, Applications of DFS and BFS.

2.1 Divide and Conquer:


Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally of about equal
size.
2. The subproblems are solved (typically recursively, though sometimes a different algorithm
is employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the original
problem.
In the typical case, a problem is divided into two smaller subproblems, each subproblem is
solved, and the two solutions are then combined into a solution to the original problem.

Control abstraction for Divide and Conquer


Algorithm DandC(P)
{
    if Small(P) then return S(P);
    else
    {
        Divide P into smaller instances P1, P2, . . ., Pk, k ≥ 1;
        Apply DandC to each of these subproblems;
        return Combine(DandC(P1), DandC(P2), . . ., DandC(Pk));
    }
}
Here P is the problem to be solved.
Small(P) is a Boolean function that determines whether the input size is small enough that the
answer can be computed without splitting.
If Small(P) is true then the function S is invoked.
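The control abstraction can be sketched in Python for a concrete (hypothetical) problem, finding the maximum of a list; the base case plays the role of Small(P)/S(P), and max() plays the role of Combine:

```python
# A minimal sketch of the DandC control abstraction, applied to the
# illustrative problem of finding the maximum of a list a[lo..hi-1].

def dandc_max(a, lo, hi):
    # Small(P): one element; S(P) answers it directly without splitting.
    if hi - lo == 1:
        return a[lo]
    # Divide P into two smaller instances P1 and P2.
    mid = (lo + hi) // 2
    # Apply DandC to each subproblem, then Combine their answers.
    return max(dandc_max(a, lo, mid), dandc_max(a, mid, hi))

print(dandc_max([8, 3, 2, 9, 7, 1, 5, 4], 0, 8))  # prints 9
```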

More generally, an instance of size n can be divided into b instances of size n/b, with a of
them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.) Assuming that size
n is a power of b to simplify our analysis, we get the following recurrence for the running
time T (n):
T(n) = aT(n/b) + f(n)                                                    (1)
where f(n) is a function that accounts for the time spent on dividing an instance of size n into
instances of size n/b and combining their solutions. Recurrence (1) is called the general
divide-and-conquer recurrence. The order of growth of its solution T(n) depends on the
values of the constants a and b and the order of growth of the function f(n).
Master Theorem If f(n) ∈ Θ(n^d) where d ≥ 0 in recurrence (1), then

           Θ(n^d)            if a < b^d
T(n) ∈     Θ(n^d log n)      if a = b^d
           Θ(n^(log_b a))    if a > b^d

Analogous results hold for the O and Ω notations, too.
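The three cases above can be captured in a small helper (not part of the original text) that, given the constants a, b, and d, reports which case applies and the resulting order of growth:

```python
import math

# A sketch that applies the Master Theorem's three cases to the
# constants a, b, d of recurrence (1) and returns the order of growth
# of T(n) as a string.

def master_theorem(a, b, d):
    if a < b ** d:
        return f"Theta(n^{d})"
    elif a == b ** d:
        return f"Theta(n^{d} log n)"
    else:
        return f"Theta(n^{math.log(a, b):g})"

print(master_theorem(2, 2, 0))  # a > b^d, so Theta(n^1)
```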


For example, the recurrence for the number of additions A(n) made by the divide-and-
conquer sum-computation algorithm on inputs of size n = 2^k is
A(n) = 2A(n/2) + 1.
Thus, for this example, a = 2, b = 2, and d = 0; hence, since a > b^d,
A(n) ∈ Θ(n^(log_b a)) = Θ(n^(log_2 2)) = Θ(n).
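A quick sketch of this sum-computation algorithm (the instrumentation is added here for illustration, not taken from the text), counting additions to confirm that the recurrence A(n) = 2A(n/2) + 1 yields exactly n − 1 additions for n = 2^k:

```python
# Divide-and-conquer summation of a[lo..hi-1], instrumented to count
# the additions performed.

additions = 0

def dc_sum(a, lo, hi):
    global additions
    if hi - lo == 1:      # base case: a single element, no addition
        return a[lo]
    mid = (lo + hi) // 2
    left = dc_sum(a, lo, mid)
    right = dc_sum(a, mid, hi)
    additions += 1        # the one addition that combines the two halves
    return left + right

data = list(range(8))     # n = 2^3
print(dc_sum(data, 0, 8), additions)  # sum is 28; additions = n - 1 = 7
```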

2.2 Mergesort
Mergesort sorts a given array A[0..n − 1] by dividing it into two halves
A[0.. ⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1], sorting each of them recursively, and then
merging the two smaller sorted arrays into a single sorted one.
ALGORITHM Mergesort(A[0..n − 1])
    if n > 1
        copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
        copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
        Mergesort(B[0..⌊n/2⌋ − 1])
        Mergesort(C[0..⌈n/2⌉ − 1])
        Merge(B, C, A)

ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
    i ← 0; j ← 0; k ← 0
    while i < p and j < q do
        if B[i] ≤ C[j]
            A[k] ← B[i]; i ← i + 1
        else
            A[k] ← C[j]; j ← j + 1
        k ← k + 1
    if i = p
        copy C[j..q − 1] to A[k..p + q − 1]
    else
        copy B[i..p − 1] to A[k..p + q − 1]

On the list 8, 3, 2, 9, 7, 1, 5, 4, the algorithm first splits the list down to single elements
and then merges the sorted sublists pairwise back up into the fully sorted list 1, 2, 3, 4, 5, 7, 8, 9.
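A runnable Python rendering of the two routines above (a direct translation of the pseudocode, using list slices for the auxiliary halves B and C), demonstrated on that list:

```python
# Merge two sorted lists b and c into a (which has length len(b)+len(c)).
def merge(b, c, a):
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    if i == len(b):
        a[k:] = c[j:]      # B exhausted: copy the remainder of C
    else:
        a[k:] = b[i:]      # C exhausted: copy the remainder of B

# Sort a in place by recursively sorting copies of its two halves.
def mergesort(a):
    if len(a) > 1:
        mid = len(a) // 2
        b, c = a[:mid], a[mid:]   # copy the halves into B and C
        mergesort(b)
        mergesort(c)
        merge(b, c, a)

nums = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(nums)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]
```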

Efficiency of MergeSort
Time Complexity: MergeSort runs in Θ(n log n) time, both in the worst case and in the
average case.
Worst-Case Comparisons: During merging, each comparison places exactly one element into
the output array. In the worst case, neither half is exhausted until the very end, so merging
two sorted arrays of size n/2 each takes n − 1 comparisons. The recurrence for the number of
key comparisons in the worst case is
Cworst(n) = 2Cworst(n/2) + (n − 1), with Cworst(1) = 0.
By the Master Theorem (a = 2, b = 2, d = 1, so a = b^d), Cworst(n) ∈ Θ(n log n).
Exact solution for n = 2^k:
Cworst(n) = n log2 n − n + 1
Average Case
The average number of comparisons is about 0.25n less than in the worst case, which still
gives Θ(n log n).
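The closed form can be checked numerically (a verification sketch, not part of the text) by unrolling the recurrence and comparing it with n log2 n − n + 1 for several powers of two:

```python
# Worst-case comparison count of mergesort, computed directly from the
# recurrence C(n) = 2C(n/2) + (n - 1) with C(1) = 0.
def c_worst(n):
    if n == 1:
        return 0
    return 2 * c_worst(n // 2) + (n - 1)

# Compare against the closed form n*log2(n) - n + 1 for n = 2^k.
for k in range(1, 11):
    n = 2 ** k
    assert c_worst(n) == n * k - n + 1
print("recurrence matches n*log2(n) - n + 1 for n = 2, 4, ..., 1024")
```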
Advantages and Disadvantages
Advantages:
1. Stability: MergeSort is a stable sorting algorithm, meaning it preserves the relative
order of equal elements.
2. Predictable Performance: Consistently performs in O(n log n) time regardless of the
input distribution.
Disadvantages:
1. Extra Storage: Requires O(n) additional storage space, making it less space-efficient
compared to algorithms like QuickSort and HeapSort.
2. Complex In-Place Merge: Although merging can be done in-place, it is quite
complex and not practical for general use.
