
Divide and conquer

Using the Divide and Conquer technique, we divide a problem into subproblems. When
the solution to each subproblem is ready, we 'combine' the results from the
subproblems to solve the main problem.

Suppose we had to sort an array A. A subproblem would be to sort a sub-section of this array starting at index p and ending at index r, denoted as A[p..r].
Divide

If q is the half-way point between p and r, then we can split the subarray A[p..r] into two
subarrays A[p..q] and A[q+1..r]. For example, with p = 1 and r = 8 we get q = 4, giving the subarrays A[1..4] and A[5..8].

Conquer

In the conquer step, we try to sort both the subarrays A[p..q] and A[q+1..r]. If we haven't
yet reached the base case, we again divide both these subarrays and try to sort them.

Combine

When the conquer step reaches the base case and we get two sorted subarrays A[p..q]
and A[q+1..r] for the array A[p..r], we combine the results by creating the sorted array A[p..r]
from the two sorted subarrays A[p..q] and A[q+1..r].

The MergeSort function repeatedly divides the array into two halves until we reach a
stage where we try to perform MergeSort on a subarray of size 1, i.e. p == r.

After that, the merge function comes into play and combines the sorted arrays into
larger arrays until the whole array is merged.
MERGE-SORT(A, p, r)          // initial call: MERGE-SORT(A, 1, A.length)
1 if p < r                   // otherwise the subarray has at most one element and is already sorted
2    q = ⌊(p + r) / 2⌋       // divide step simply computes an index q that splits A[p..r] into 2 subarrays
3    MERGE-SORT(A, p, q)     // conquer: recursively sort the left subarray A[p..q]
4    MERGE-SORT(A, q + 1, r) // conquer: recursively sort the right subarray A[q+1..r]
5    MERGE(A, p, q, r)       // combine: merge procedure

To sort an entire array, we call MERGE-SORT(A, 1, A.length), or equivalently MergeSort(A, 0, length(A) - 1) with 0-based indexing.

The merge sort algorithm recursively divides the array into halves until we reach the
base case of a subarray with 1 element. After that, the merge function picks up the sorted
subarrays and merges them to gradually sort the entire array.
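To make the divide, conquer and combine steps concrete, here is a minimal Python sketch. It is an illustrative variant that returns a new sorted list rather than sorting A[p..r] in place as the pseudocode above does; the name merge_sort is chosen for illustration only.

def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    # Divide: split the list at the midpoint
    mid = len(items) // 2
    # Conquer: recursively sort each half
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves with two pointers
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])    # append whatever remains of the left half
    merged.extend(right[j:])   # append whatever remains of the right half
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]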
MERGE PROCEDURE

A noticeable difference between merging two arbitrary sorted arrays and the merging step we
use for merge sort is that we only perform the merge function on consecutive subarrays of the same array.

This is why we only need the array, the first index p of the first subarray, the last index q of the first
subarray (from which we can calculate the first index of the second subarray, q + 1) and the last index r of the
second subarray.

Our task is to merge the two subarrays A[p..q] and A[q+1..r] to create a sorted
array A[p..r]. So the inputs to the function are A, p, q and r.
The merge function works as follows:

1. Create copies of the subarrays L ← A[p..q] and R ← A[q+1..r].
2. Create three pointers i, j and k
   a. i maintains the current index of L, starting at 1
   b. j maintains the current index of R, starting at 1
   c. k maintains the current index of A[p..r], starting at p
3. Until we reach the end of either L or R, pick the smaller of the elements currently pointed to
in L and R and place it in the correct position of A[p..r]
4. When we run out of elements in either L or R, pick up the remaining elements and put
them in A[p..r]

MERGE(A, p, q, r)
1 n1 = q - p + 1               // computes the length n1 of the subarray A[p..q]
2 n2 = r - q                   // computes the length n2 of the subarray A[q+1..r]
3 let L[1..n1 + 1] and R[1..n2 + 1] be new arrays   // the extra position in each array will hold the sentinel
4 for i = 1 to n1              // lines 4–5 copy the subarray A[p..q] into L[1..n1]
5    L[i] = A[p + i - 1]
6 for j = 1 to n2              // lines 6–7 copy the subarray A[q+1..r] into R[1..n2]
7    R[j] = A[q + j]
8 L[n1 + 1] = ∞                // put a sentinel at the end of the array L
9 R[n2 + 1] = ∞                // put a sentinel at the end of the array R
10 i = 1
11 j = 1
12 for k = p to r              // loop invariant: A[p..k-1] contains the k - p smallest elements of L[1..n1+1] and R[1..n2+1] in sorted order
13    if L[i] <= R[j]          // L[i] is the smallest element not yet copied back into A
14       A[k] = L[i]
15       i = i + 1
16    else A[k] = R[j]         // otherwise R[j] is the smallest element not yet copied back into A
17       j = j + 1
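The MERGE and MERGE-SORT procedures can also be sketched directly in Python. This is a minimal 0-based translation, assuming numeric elements so that float('inf') can play the role of the ∞ sentinels above.

def merge(A, p, q, r):
    # Merge the sorted subarrays A[p..q] and A[q+1..r] in place (0-based indices)
    L = A[p:q + 1] + [float('inf')]        # copy the left subarray and append a sentinel
    R = A[q + 1:r + 1] + [float('inf')]    # copy the right subarray and append a sentinel
    i = j = 0
    for k in range(p, r + 1):              # copy the smaller front element back into A each time
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    # Sort A[p..r] in place: divide at the midpoint, sort each half, then merge
    if p < r:
        q = (p + r) // 2
        merge_sort(A, p, q)
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)                                # prints [1, 2, 2, 3, 4, 5, 6, 7]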
The MERGE procedure runs in Θ(n) time, where n = r - p + 1: each of lines 1–3 and 8–11 takes constant time, the for loops of lines 4–7 take Θ(n1 + n2) = Θ(n) time, and there are n iterations of the for loop of lines 12–17, each of which takes constant time.

Let T(n) be the running time on a problem of size n.

If the problem size is small enough, say n <= c for some constant c, the solution takes constant time, which we write as Θ(1).

Otherwise, suppose the division yields a subproblems, each of size n/b, so solving the subproblems takes aT(n/b) time. Let D(n) be the time to divide the problem into subproblems and C(n) the time to combine the solutions to the subproblems. Then

T(n) = Θ(1)                       if n <= c
T(n) = aT(n/b) + D(n) + C(n)      otherwise

For merge sort, a = b = 2, the divide step takes D(n) = Θ(1) time and the combine (merge) step takes C(n) = Θ(n) time, giving T(n) = 2T(n/2) + Θ(n).
Recursion tree method
Time and Space performance of our solution:
Merge sort divides the array into a left half and a right half at each step. Repeatedly halving n gives about log n halving steps (log n + 1 levels of recursion in total), and each individual split is an O(1) operation. At every level of the recursion the merge() calls together process all n elements, so each level costs O(n). The total work is therefore n(log n + 1), and the time complexity of merge sort is derived as O(n log n).
The space complexity of an algorithm is the total space taken by the algorithm with respect to the input size. It includes both the auxiliary space (the temporary space used by the algorithm) and the space used by the input. The space complexity of the merge sort algorithm is O(n).

In the substitution method, we guess a bound and then use mathematical induction to prove our
guess correct.
The recursion-tree method converts the recurrence into a tree whose nodes represent the costs
incurred at various levels of the recursion. We use techniques for bounding summations to solve
the recurrence.
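As an illustrative sketch for merge sort (assuming, for simplicity, that n is a power of 2 and that the combined divide-and-merge cost at size n is cn for some constant c), the recursion tree for T(n) = 2T(n/2) + cn has log_2 n + 1 levels, each costing cn, so bounding the summation gives

T(n) = \sum_{i=0}^{\log_2 n} 2^i \cdot c \cdot \frac{n}{2^i} = \sum_{i=0}^{\log_2 n} c\,n = c\,n\,(\log_2 n + 1) = \Theta(n \log n)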
The master method provides bounds for recurrences of the form
T(n) = aT(n/b) + f(n), where a >= 1, b > 1, and f(n) is a given function.
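As a worked example, merge sort's recurrence T(n) = 2T(n/2) + Θ(n) has a = 2, b = 2 and f(n) = Θ(n), so

n^{\log_b a} = n^{\log_2 2} = n, \qquad f(n) = \Theta(n) = \Theta\big(n^{\log_b a}\big)

which is case 2 of the master theorem, and therefore T(n) = \Theta\big(n^{\log_b a} \log n\big) = \Theta(n \log n).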
