
UNIT 2

Decrease-and-Conquer
• Reduce a problem instance to a smaller instance of the same problem
• Solve the smaller instance
• Extend the solution of the smaller instance to obtain a solution to the original instance
• Can be implemented either top-down or bottom-up
• Also referred to as the inductive or incremental approach
• The bottom-up variation is usually implemented iteratively, starting with a solution to the smallest instance of the problem.
3 Types of Decrease-and-Conquer

• Decrease by a constant (usually by 1):
  • insertion sort
  • topological sorting
  • algorithms for generating permutations, subsets

• Decrease by a constant factor (usually by half):
  • binary search and bisection method
  • exponentiation by squaring

• Variable-size decrease:
  • computing a median and the selection problem
  • searching and inserting in a BST
  • Euclid’s algorithm to find the GCD of two integers

This usually results in a recursive algorithm.


Examples:
• Decrease by a constant: computing a^n (a^n = a^(n-1) · a)
• Decrease by a constant factor: computing a^n (a^n = (a^(n/2))^2 for even n)
• Variable-size decrease: Euclid’s algorithm for the GCD (gcd(m, n) = gcd(n, m mod n))
What’s the difference?
Consider the problem of exponentiation: compute xⁿ.

• Brute force: n − 1 multiplications
• Divide and conquer: T(n) = 2T(n/2) + 1 = n − 1
• Decrease by one: T(n) = T(n − 1) + 1 = n − 1
• Decrease by a constant factor: T(n) = T(n/a) + (a − 1) = (a − 1) logₐ n, which is log₂ n when a = 2
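To make the comparison concrete, here is a small Python sketch (illustrative code, not taken from the slides; the function names are my own) of the decrease-by-one and decrease-by-half ways of computing xⁿ.

```python
def power_decrease_by_one(x, n):
    """Decrease by one: x^n = x^(n-1) * x, so T(n) = T(n-1) + 1 = n - 1 multiplications."""
    if n == 0:
        return 1
    return power_decrease_by_one(x, n - 1) * x


def power_decrease_by_half(x, n):
    """Decrease by a constant factor: x^n = (x^(n//2))^2, times x if n is odd.
    Only about log2(n) multiplications are needed."""
    if n == 0:
        return 1
    half = power_decrease_by_half(x, n // 2)
    return half * half if n % 2 == 0 else half * half * x


print(power_decrease_by_one(2, 10))   # 1024, using 9 multiplications
print(power_decrease_by_half(2, 10))  # 1024, using about log2(10) squarings
```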
Insertion Sort
To sort array A[0..n-1], sort A[0..n-2] recursively and then insert A[n-1] in its proper place among the sorted A[0..n-2].

• Usually implemented bottom up (nonrecursively)

Example: sort 6, 4, 1, 8, 5 (the bar separates the sorted prefix from the unsorted part)

Initial: 6 | 4 1 8 5
Pass 1:  4 6 | 1 8 5
Pass 2:  1 4 6 | 8 5
Pass 3:  1 4 6 8 | 5
Pass 4:  1 4 5 6 8
Pseudocode of Insertion Sort
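The following is a Python sketch of the standard bottom-up (iterative) insertion sort described above; it is an illustrative rendering, and the function name is my own.

```python
def insertion_sort(A):
    """Sort list A in place by inserting A[i] into the already-sorted prefix A[0..i-1]."""
    for i in range(1, len(A)):
        v = A[i]
        j = i - 1
        # Shift elements of the sorted prefix that are greater than v one slot to the right.
        while j >= 0 and A[j] > v:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = v          # drop v into its proper place
    return A


print(insertion_sort([6, 4, 1, 8, 5]))  # [1, 4, 5, 6, 8]
```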
Analysis of Insertion Sort

• Worst-case time efficiency: Cworst(n) = n(n−1)/2 ∈ Θ(n²)
The comparison A[j] > v is executed the largest number of times when, for the worst-case input, A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), ..., A[n−2] > A[n−1] (for i = n−1). In other words, the worst-case input is an array of strictly decreasing values. The number of key comparisons for such an input is
Cworst(n) = 1 + 2 + ... + (n−1) = n(n−1)/2 ∈ Θ(n²).
• Average-case time efficiency: Cavg(n) ≈ n²/4 ∈ Θ(n²)
On randomly ordered arrays, insertion sort makes on average about half as many comparisons as on decreasing arrays, i.e., Cavg(n) ≈ n²/4.
• Best-case time efficiency: Cbest(n) = n − 1 ∈ Θ(n) (also fast on almost-sorted arrays)
In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop. This happens if and only if A[i−1] ≤ A[i] for every i = 1, ..., n−1, i.e., if the input array is already sorted in nondecreasing order.

• Space efficiency: in-place


[Figure: a digraph and the DFS forest of the digraph for the DFS traversal started at a]

• The depth-first search forest exhibits all four types of edges possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back edges (ba) from vertices to their ancestors, forward edges (ac) from vertices to their descendants in the tree other than their children, and cross edges (dc).
DAGs and Topological Sorting
A dag: a directed acyclic graph, i.e. a directed graph with no (directed) cycles.

[Figure: two digraphs on the vertices a, b, c, d; one is a dag, the other is not a dag]

• Dags arise in modeling many problems that involve prerequisite constraints (construction projects, document version control).
• Vertices of a dag can be linearly ordered so that for every edge its starting vertex is listed before its ending vertex (topological sorting). Being a dag is also a necessary condition for topological sorting to be possible.
Topological Sorting Example
Order the following items in a food chain:

[Figure: a food-chain digraph on the vertices plankton, shrimp, fish, wheat, sheep, human, tiger, with chains such as wheat -> sheep -> human and plankton -> shrimp -> fish -> human]
Two ways to find a topological order:
• DFS-based algorithm
• Source removal algorithm


DFS-based Algorithm
DFS-based algorithm for topological sorting:
• Perform a DFS traversal, noting the order in which vertices are popped off the traversal stack
• The reverse of that order solves the topological sorting problem
• Back edges encountered? Then the graph is NOT a dag!

Example:
[Figure: a digraph on the vertices a, b, c, d, e, f, g, h]

Efficiency: the same as that of DFS.
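As a concrete illustration (not the slides' own pseudocode), here is a Python sketch of the DFS-based algorithm; the adjacency-list graph at the bottom is a hypothetical example.

```python
def topological_sort_dfs(graph):
    """Topological order = reverse of the order in which DFS finishes (pops) vertices."""
    visited, on_stack, order = set(), set(), []

    def dfs(v):
        visited.add(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w in on_stack:
                raise ValueError("back edge encountered: the graph is not a dag")
            if w not in visited:
                dfs(w)
        on_stack.remove(v)
        order.append(v)        # v is popped off the traversal stack

    for v in graph:
        if v not in visited:
            dfs(v)
    return order[::-1]         # reverse of the popping-off order


# Hypothetical prerequisite digraph given as adjacency lists.
g = {'a': ['b'], 'b': ['c'], 'c': [], 'd': ['c']}
print(topological_sort_dfs(g))   # ['d', 'a', 'b', 'c'] (one valid topological order)
```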
Source Removal Algorithm
Repeatedly identify and remove a source (a vertex with no incoming edges) and all the edges incident to it, until either no vertex is left or there is no source among the remaining vertices (in which case the graph is not a dag).

Example:
[Figure: a digraph on the vertices a, b, c, d, e, f, g, h]

Efficiency: the same as that of the DFS-based algorithm. But how would you identify a source, and how do you remove a source from the dag?

“Invert” the adjacency lists: count the number of incoming edges of each vertex by going through each adjacency list and counting the number of times each vertex appears in these lists. To remove a source, decrement the count of each of its neighbors by one.
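A Python sketch of this source-removal idea (the in-degree counting follows the "invert the adjacency lists" hint above; the example graph is hypothetical and every vertex is assumed to appear as a key of the dict).

```python
from collections import deque


def topological_sort_source_removal(graph):
    """Repeatedly delete a source (in-degree 0 vertex) and record the deletion order."""
    # Count incoming edges by scanning every adjacency list once.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1

    sources = deque(v for v in graph if indegree[v] == 0)
    order = []
    while sources:
        v = sources.popleft()
        order.append(v)
        # "Removing" v: decrement the count of each of its neighbors by one.
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)

    if len(order) != len(graph):
        raise ValueError("no source among the remaining vertices: the graph is not a dag")
    return order


g = {'a': ['b'], 'b': ['c'], 'c': [], 'd': ['c']}
print(topological_sort_source_removal(g))   # ['a', 'd', 'b', 'c'] (one valid topological order)
```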
Apply the DFS-based algorithm to solve the topological sorting problem for the following digraphs:
[Figure: practice digraphs]
Generating Permutations
Decrease-by-one technique for the problem of generating all n! permutations of {1, ..., n}.

Generating permutations bottom up:
start                            1
insert 2 into 1 right to left    12  21
insert 3 into 12 right to left   123 132 312
insert 3 into 21 left to right   321 231 213

The advantage of this order of generating permutations stems from the fact that it satisfies the minimal-change requirement: each permutation can be obtained from its immediate predecessor by exchanging just two elements in it. The minimal-change requirement is beneficial both for the algorithm’s speed and for applications using the permutations (a sketch of this bottom-up scheme follows).
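A Python sketch of the bottom-up insertion scheme (my own rendering, not the slides' code); alternating the direction of insertion reproduces exactly the order 123, 132, 312, 321, 231, 213 traced above.

```python
def perms_bottom_up(n):
    """Generate all permutations of 1..n by inserting k into every position of each
    permutation of 1..k-1, alternating right-to-left and left-to-right insertion."""
    perms = [[1]]
    for k in range(2, n + 1):
        new_perms = []
        for idx, p in enumerate(perms):
            # Alternate the direction of insertion to satisfy the minimal-change requirement.
            positions = range(len(p), -1, -1) if idx % 2 == 0 else range(len(p) + 1)
            for pos in positions:
                new_perms.append(p[:pos] + [k] + p[pos:])
        perms = new_perms
    return perms


print(perms_bottom_up(3))
# [[1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]]
```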
The same minimal-change order can also be generated directly by associating a direction (an arrow) with each element k in a permutation. The element k is said to be mobile in such an arrow-marked permutation if its arrow points to a smaller number adjacent to it.

Application of this algorithm for n = 3, with the largest mobile element shown in bold, yields the sequence 123, 132, 312, 321, 231, 213.
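The mobile-element idea (the Johnson-Trotter algorithm) can be sketched in Python as follows; this is illustrative code with my own names, and for n = 3 it prints the same sequence as above.

```python
def johnson_trotter(n):
    """Generate all permutations of 1..n in minimal-change order using mobile elements."""
    LEFT, RIGHT = -1, 1
    perm = list(range(1, n + 1))
    direction = [LEFT] * n                 # initially every arrow points left
    result = [perm[:]]

    def largest_mobile_index():
        # Index of the largest mobile element, or -1 if no element is mobile.
        best = -1
        for i, k in enumerate(perm):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < k and (best == -1 or k > perm[best]):
                best = i
        return best

    i = largest_mobile_index()
    while i != -1:
        k = perm[i]
        j = i + direction[i]
        # Swap the mobile element with the adjacent element its arrow points to.
        perm[i], perm[j] = perm[j], perm[i]
        direction[i], direction[j] = direction[j], direction[i]
        # Reverse the direction of every element larger than k.
        for idx, val in enumerate(perm):
            if val > k:
                direction[idx] = -direction[idx]
        result.append(perm[:])
        i = largest_mobile_index()
    return result


print(johnson_trotter(3))
# [[1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]]
```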
Generating subsets
• The decrease-by-one idea is also applicable to generating all subsets of a set, a task that arises, for example, in exhaustive-search solutions to the knapsack problem.
• All subsets of A = {a1, ..., an} can be divided into two groups: those that do not contain an and those that do.
• The former group is nothing but all the subsets of {a1, ..., an-1}, while each element of the latter can be obtained by adding an to a subset of {a1, ..., an-1}.
• Thus, once we have a list of all subsets of {a1, ..., an-1}, we can get all the subsets of {a1, ..., an} by adding to the list all its elements with an put into each of them (see the sketch below).
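A minimal Python sketch of this decrease-by-one recursion (illustrative only): the subsets of a[0..n-2] are generated first, and then copies of them with a[n-1] added are appended to the list.

```python
def subsets_decrease_by_one(a):
    """All subsets of a: the subsets of a[:-1], plus the same subsets with a[-1] added."""
    if not a:
        return [[]]                                     # the only subset of the empty set
    smaller = subsets_decrease_by_one(a[:-1])           # subsets that do not contain a[-1]
    return smaller + [s + [a[-1]] for s in smaller]     # plus those that do


print(subsets_decrease_by_one(['a1', 'a2', 'a3']))
# [[], ['a1'], ['a2'], ['a1', 'a2'], ['a3'], ['a1', 'a3'], ['a2', 'a3'], ['a1', 'a2', 'a3']]
```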
Generating subsets bottom up: the 2ⁿ subsets of an n-element set A = {a1, ..., an}.

A convenient way of solving the problem directly is based on a one-to-one correspondence between all 2ⁿ subsets of an n-element set A = {a1, ..., an} and all 2ⁿ bit strings b1...bn of length n.

The 2⁴ subsets of A = {a1, a2, a3, a4}: peel off one element at a time ({a1, a2, a3, a4} into {a1, a2, a3} and {a4}, then {a1, a2} and {a3}, then {a1} and {a2}), and build the subset lists back up:

subsets of {a1, a2}:          Ø  {a1}  {a2}  {a1, a2}
subsets of {a1, a2, a3}:      Ø  {a1}  {a2}  {a1, a2}  {a3}  {a1, a3}  {a2, a3}  {a1, a2, a3}
subsets of {a1, a2, a3, a4}:  Ø  {a1}  {a2}  {a1, a2}  {a3}  {a1, a3}  {a2, a3}  {a1, a2, a3}  {a4}  {a1, a4}  {a2, a4}  {a1, a2, a4}  {a3, a4}  {a1, a3, a4}  {a2, a3, a4}  {a1, a2, a3, a4}
                              = 2⁴ = 16 subsets
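The bit-string correspondence mentioned above can be sketched as follows (an illustrative Python snippet, not from the slides): bit i of the counter decides whether a[i] goes into the subset, so counting from 0 to 2ⁿ − 1 enumerates all subsets.

```python
def subsets_bitstrings(a):
    """Each of the 2^n subsets of a corresponds to one n-bit string."""
    n = len(a)
    all_subsets = []
    for bits in range(2 ** n):
        # Include a[i] exactly when bit i of 'bits' is 1.
        subset = [a[i] for i in range(n) if (bits >> i) & 1]
        all_subsets.append(subset)
    return all_subsets


print(len(subsets_bitstrings(['a1', 'a2', 'a3', 'a4'])))   # 2^4 = 16 subsets
```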
Decrease-by-a-Constant-Factor Algorithms
Example: Binary Search
Time Complexity: Binary Search
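As a reference sketch (my own Python rendering, not the slides' pseudocode): binary search on a sorted array discards about half of the remaining range with every comparison, so its recurrence is C(n) = C(⌊n/2⌋) + 1 and Cworst(n) = ⌊log₂ n⌋ + 1 ∈ Θ(log n).

```python
def binary_search(A, key):
    """Decrease by a constant factor: each comparison halves the search range of sorted A."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == key:
            return mid
        elif A[mid] < key:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # key is not in the array


print(binary_search([1, 4, 5, 6, 8], 5))   # 2
```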
Variable-Size-Decrease Algorithms
Computing a Median and the Selection Problem
• The selection problem is the problem of finding the kth smallest element in a list of n numbers.
• For k = ⌈n/2⌉, it asks for an element that is not larger than one half of the list’s elements and not smaller than the other half.
• This middle value is called the median.
• We can find the kth smallest element in a list by sorting the list first and then selecting the kth element in the sorted output; with a good sorting algorithm this takes O(n log n) time.
Illustration of the Lomuto partitioning, used here to solve the selection problem (quickselect).
The list of nine numbers is 4, 1, 10, 8, 7, 12, 9, 2, 15, and k = ⌈9/2⌉ = 5.
Partitioning the whole array around the pivot 4 returns the split position s = 2. Since s = 2 is smaller than k − 1 = 4, we proceed with the right part of the array.
After partitioning the right part around the pivot 8, s = k − 1 = 4, and hence we can stop: the found median is 8, which is greater than 2, 1, 4, and 7 but smaller than 12, 9, 10, and 15.
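Putting the two pieces together, here is a Python sketch of Lomuto partitioning and partition-based selection (illustrative code with my own names, not the slides' pseudocode); it reproduces the trace above.

```python
def lomuto_partition(A, l, r):
    """Partition A[l..r] around the pivot A[l]; return the pivot's final index s."""
    p, s = A[l], l
    for i in range(l + 1, r + 1):
        if A[i] < p:
            s += 1
            A[s], A[i] = A[i], A[s]
    A[l], A[s] = A[s], A[l]
    return s


def quickselect(A, k):
    """Return the k-th smallest element of A (1 <= k <= len(A)) by variable-size decrease."""
    l, r = 0, len(A) - 1
    while True:
        s = lomuto_partition(A, l, r)
        if s == k - 1:
            return A[s]
        elif s < k - 1:
            l = s + 1        # the k-th smallest lies in the right part
        else:
            r = s - 1        # the k-th smallest lies in the left part


print(quickselect([4, 1, 10, 8, 7, 12, 9, 2, 15], 5))   # 8, the median
```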
Efficiency of the Partition-based Algorithm

• Average case (average split in the middle): C(n) = C(n/2) + (n + 1), so C(n) ∈ Θ(n)
• Worst case (degenerate split): C(n) ∈ Θ(n²)
