
Module 1- Analysis & Design of Algorithms (BCS401)

CHAPTER 3: BRUTE FORCE APPROACHES


BRUTE FORCE APPROACHES: Exhaustive Search.
DECREASE-AND-CONQUER: Insertion Sort, Topological Sorting.
DIVIDE AND CONQUER: Merge Sort, Quick Sort, Binary Tree Traversals, Multiplication of
Large Integers and Strassen’s Matrix Multiplication.

Exhaustive search
Q Define Exhaustive search
Exhaustive search is simply a brute-force approach to combinatorial problems. It suggests generating each
and every element of the problem domain, selecting those of them that satisfy all the constraints, and then
finding a desired element (e.g., the one that optimizes some objective function). Note that although the idea
of exhaustive search is quite straightforward, its implementation typically requires an algorithm for
generating certain combinatorial objects.
Exhaustive Search is a brute-force algorithm that systematically enumerates all possible solutions to a
problem and checks each one to see if it is a valid solution. This algorithm is typically used for problems that
have a small and well-defined search space, where it is feasible to check all possible solutions.

Exhaustive Search:
1. Traveling Salesman Problem
2. Knapsack Problem

1. Traveling Salesman Problem


The travelling salesman problem states that, given a number of cities N and the distances between the cities, the traveler has to visit all the given cities exactly once, return to the city from where he started, and do so along a route whose total length is minimized.
The problem can be stated as the problem of finding the shortest Hamiltonian circuit of the graph. A
Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.
[Travelling Salesman Problem (TSP): Given a set of cities and distances between every pair of cities, the
problem is to find the shortest possible route that visits every city exactly once and returns to the starting
point.]
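Since the notes give no pseudocode for exhaustive search, the following Python sketch illustrates the idea for the TSP: generate every permutation of the remaining cities, compute the length of each tour, and keep the shortest. The distance matrix used here is a hypothetical 4-city instance, not one of the graphs from the questions below.

from itertools import permutations

def tsp_exhaustive(dist, start=0):
    # dist[i][j] = distance between city i and city j (assumed symmetric)
    n = len(dist)
    best_tour, best_len = None, float('inf')
    for perm in permutations(range(1, n)):         # (n-1)! candidate tours
        tour = (start,) + perm + (start,)          # return to the start city
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Hypothetical instance with 4 cities labelled 0..3
dist = [[0, 2, 5, 7],
        [2, 0, 8, 3],
        [5, 8, 0, 1],
        [7, 3, 1, 0]]
print(tsp_exhaustive(dist))   # ((0, 1, 3, 2, 0), 11) - one optimal tour

Because all (n-1)! permutations are examined, the running time grows factorially, which is why exhaustive search is practical only for very small instances.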


Example: Solution to a small instance of the traveling salesman problem by exhaustive search

Q. For the given graph, the traveler has to start from vertex A. Find the shortest Hamiltonian circuit of the graph.

Q. Find all feasible paths for travelling salesman problem and identify the optimal path for the given
graph.


Q. Find the shortest Hamiltonian circuit of the graph (find the shortest possible route for traveling salesman
problem by exhaustive search)

2. Knapsack Problem (0/1 Knapsack Problem)


The knapsack problem is an example of a combinatorial optimization problem. It is a maximization problem, stated as follows:
Given N items, where each item has some weight and profit associated with it, and a bag with capacity W [i.e., the bag can hold at most W weight in it], the task is to put the items into the bag such that the sum of the profits associated with them is the maximum possible.
Note: The constraint here is that we can either put an item completely into the bag or not put it at all [it is not possible to put a part of an item into the bag].
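A minimal Python sketch of the exhaustive-search approach (the item data below are hypothetical, not from the worked examples that follow): generate all 2^N subsets of the items, discard those whose total weight exceeds the capacity W, and keep a feasible subset with the largest total profit.

from itertools import combinations

def knapsack_exhaustive(weights, profits, W):
    n = len(weights)
    best_subset, best_profit = (), 0
    for r in range(n + 1):                         # subsets of every size r
        for subset in combinations(range(n), r):   # 2^n subsets in total
            w = sum(weights[i] for i in subset)
            p = sum(profits[i] for i in subset)
            if w <= W and p > best_profit:         # keep only feasible subsets
                best_subset, best_profit = subset, p
    return best_subset, best_profit

# Hypothetical instance: 4 items, capacity 10 kg
weights = [2, 3, 4, 5]
profits = [3, 4, 5, 6]
print(knapsack_exhaustive(weights, profits, 10))   # ((0, 1, 3), 13)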

Q. A traveler has a knapsack with a maximum capacity of 10 kg and 4 items. The weight and profit of every item are given; find the optimal (best feasible) solution.

Solution:


{3, 4} is the optimal solution: by selecting items 3 and 4, the salesman gets the maximum profit of 65 units.
Q. A traveler has a knapsack with a maximum capacity of 20 kg and 3 items. The weight and profit of every item are given; find the optimal (best feasible) solution.

Solution:
Total number of possible solutions = 2^3 = 8

{2, 3} is the optimal solution, with a maximum profit of 35 units.


Solution (for a knapsack instance with 5 items):
Total number of possible solutions = 2^5 = 32


DECREASE-AND-CONQUER: Insertion Sort, Topological Sorting.


Q. Define Decrease-and-Conquer
Decrease (reduce) the problem instance to a smaller instance of the same problem, conquer the problem by solving the smaller instance, and extend the solution of the smaller instance to obtain a solution to the original problem. The basic idea of the decrease-and-conquer technique is to exploit the relationship between a solution to a given instance of a problem and a solution to its smaller instance.
Decrease and conquer is a technique used to solve problems by reducing the size of the input data at each step of the solution process. The technique is used when it is easier to solve a smaller version of the problem, and the solution to the smaller problem can be used to find the solution to the original problem.

There are three major variations of decrease-and-conquer:

1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease
Decrease by a Constant: In this variation, the size of an instance is reduced by the same constant on each
iteration of the algorithm. Typically, this constant is equal to one, although other constant size reductions
do happen occasionally.


Below are example problems:

• Insertion sort
• Topological sorting

Q. Design the insertion sort algorithm and explain.


Insertion Sort: Insertion sort is a simple sorting algorithm that works by iteratively inserting each element
of an unsorted list into its correct position in a sorted portion of the list. It is a stable sorting algorithm,
meaning that elements with equal values maintain their relative order in the sorted output.
[Insertion sort is based on the decrease-and-conquer (decrease-by-one) design approach. Its essential approach is to take an array A[0..n-1] and reduce the instance size by one: the instance A[0..n-1] is reduced to A[0..n-2]. This process is repeated until the problem is small enough to be solved directly.]
Consider an array having elements: {23, 1, 10, 5, 2}

Pseudo code of this algorithm.
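The pseudocode figure is not included in this text version, so the following Python sketch shows the same decrease-by-one idea: assume A[0..i-1] is already sorted and insert A[i] into its proper position among the preceding elements.

def insertion_sort(A):
    for i in range(1, len(A)):
        v = A[i]                      # element to be inserted
        j = i - 1
        while j >= 0 and A[j] > v:    # shift larger elements one position right
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = v                  # insert v into its proper position
    return A

print(insertion_sort([23, 1, 10, 5, 2]))   # [1, 2, 5, 10, 23]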


Iteration of insertion sort: A[i] is inserted in its proper position among the preceding elements previously
sorted.

The basic operation of the algorithm is the key comparison A[j] > v. The number of key comparisons in this algorithm obviously depends on the nature of the input. In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i - 1, ..., 0. Since v = A[i], this happens if and only if A[j] > A[i] for j = i - 1, ..., 0.
Thus, for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), ..., A[n - 2] > A[n - 1] (for i = n - 1). In other words, the worst-case input is an array of strictly decreasing values. The number of key comparisons for such an input is

Cworst(n) = 1 + 2 + ... + (n - 1) = n(n - 1)/2 ∈ Θ(n^2).

In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop. This happens if and only if A[i - 1] <= A[i] for every i = 1, ..., n - 1, i.e., if the input array is already sorted in ascending order.
Thus, for sorted arrays, the number of key comparisons is

Cbest(n) = n - 1 ∈ Θ(n).

Q. Sort the following set of elements using insertion sort: 7, 21, 0, 31, 13, 8, 10


Topological Sorting
Topological sorting for Directed Acyclic Graph (DAG) is a linear ordering of vertices such that for every
directed edge u-v, vertex u comes before v in the ordering.
Note: Topological sorting for a graph is not possible if the graph is not a DAG.
Topological sort is performed on directed acyclic graphs (DAGs), and it aims to arrange the vertices of a given DAG in a linear order; thus, being a DAG is a necessary condition for a topological sort to exist. What is a DAG? A DAG has no directed path that starts and ends at the same vertex, i.e., it contains no cycle. A sample DAG is shown in the figure below.

A topological sorting of the following graph is “5 4 2 3 1 0”. There can be more than one topological sorting
for a graph. Another topological sorting of the following graph is “4 5 2 3 1 0”.
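The source-removal method asked about in the questions below can be sketched in Python as follows (an illustrative sketch, not the notes' own pseudocode; the adjacency list is a hypothetical DAG): repeatedly pick a vertex with no incoming edges, append it to the output, and delete it together with its outgoing edges.

def topological_sort_source_removal(graph):
    # graph: dict mapping every vertex to the list of vertices it points to
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    sources = [v for v in graph if indegree[v] == 0]
    order = []
    while sources:
        v = sources.pop()                 # remove one source vertex
        order.append(v)
        for w in graph[v]:                # delete its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle: topological sort impossible")
    return order

# Hypothetical DAG: edges 5->2, 5->0, 4->0, 4->1, 2->3, 3->1
graph = {5: [2, 0], 4: [0, 1], 2: [3], 3: [1], 1: [], 0: []}
print(topological_sort_source_removal(graph))   # one valid order: [4, 5, 0, 2, 3, 1]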


Q. Generate the topological sequence for the graph using source removal method

Q. Generate the topological sequence for the graph using source removal method

Q. Generate the topological sequence for the graph using source removal method


DIVIDE AND CONQUER:


Q. Explain the divide-and-conquer technique with the general algorithm (Feb 2021 – 08 Marks)
Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k distinct subsets, 1 < k ≤ n, yielding k subproblems. These subproblems must be solved, and then a method must be found to combine the sub-solutions into a solution of the whole. If the subproblems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied. Often the subproblems resulting from a divide-and-conquer design are of the same type as the original problem. For those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm.

General algorithm:
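The general algorithm itself is not reproduced in this text version; the following Python sketch shows the usual divide-and-conquer control abstraction. The parameter names (small, solve_directly, divide, combine) are placeholders chosen for illustration, not a fixed API.

def divide_and_conquer(P, small, solve_directly, divide, combine):
    # P is a problem instance
    if small(P):                        # instance small enough to solve directly
        return solve_directly(P)
    subproblems = divide(P)             # split P into k smaller instances
    subsolutions = [divide_and_conquer(p, small, solve_directly, divide, combine)
                    for p in subproblems]
    return combine(subsolutions)        # merge sub-solutions into a solution of P

For example, mergesort fits this template with divide = split the array into two halves and combine = merge the two sorted halves.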


Divide-and-conquer is the best-known general algorithm design technique. Divide-and-conquer algorithms work according to the following general plan:
1. A problem's instance is divided into several smaller instances of the same problem, ideally of about the same
size.
2. The smaller instances are solved (Recursively, though sometimes a different algorithm is employed when
instances become small enough).
3. If necessary, the solutions obtained for the smaller instances are combined to get a solution to the original
instance.
In the most typical case of divide-and-conquer, a problem's instance of size n is divided into two instances of size n/2. More generally, an instance of size n can be divided into b instances of size n/b, with a of them needing to be solved. (Here, a and b are constants; a >= 1 and b > 1.) Assuming that size n is a power of b, we get the following recurrence for the running time T(n):

T(n) = aT(n/b) + f(n),

where f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and on combining their solutions.
The order of growth of its solution T(n) depends on the values of the constants a and b and the order of growth of the function f(n).
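For reference, when f(n) ∈ Θ(n^d) with d >= 0, the Master Theorem (quoted here in its standard simplified form) gives the solution directly:

T(n) ∈ Θ(n^d)            if a < b^d,
T(n) ∈ Θ(n^d log n)      if a = b^d,
T(n) ∈ Θ(n^(log_b a))    if a > b^d.

For instance, for mergesort a = 2, b = 2 and f(n) ∈ Θ(n), so a = b^d and T(n) ∈ Θ(n log n).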

Merge sort
Q. Design an algorithm for performing merge sort, analyze it’s efficiency. Apply the same to sort the
following set of numbers 4, 9, 0, -1, 6, 8, 9, 2, 3, 12 (March 2022 – 10Marks)
Mergesort is a perfect example of a successful application of the divide-and-conquer technique. It sorts a given array A[0 .. n-1] by dividing it into two halves A[0 .. ⌊n/2⌋-1] and A[⌊n/2⌋ .. n-1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
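The Mergesort/Merge pseudocode is not reproduced in this text version; a minimal Python sketch of the same scheme is given below (divide the array into halves, sort each half recursively, then merge the two sorted halves).

def merge_sort(A):
    if len(A) <= 1:                       # an array of 0 or 1 elements is sorted
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])            # sort the first half recursively
    right = merge_sort(A[mid:])           # sort the second half recursively
    return merge(left, right)

def merge(B, C):
    merged, i, j = [], 0, 0
    while i < len(B) and j < len(C):      # one key comparison per step
        if B[i] <= C[j]:
            merged.append(B[i]); i += 1
        else:
            merged.append(C[j]); j += 1
    merged.extend(B[i:])                  # copy whatever remains in either array
    merged.extend(C[j:])
    return merged

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))          # [1, 2, 3, 4, 5, 7, 8, 9]
print(merge_sort([4, 9, 0, -1, 6, 8, 9, 2, 3, 12]))  # [-1, 0, 2, 3, 4, 6, 8, 9, 9, 12]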


Algorithm Efficiency Analysis: The number of key comparisons C(n) satisfies the recurrence

C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0.

Let us analyze Cmerge(n), the number of key comparisons performed during the merging stage. At each step, exactly one comparison is made, after which the total number of elements in the two arrays still needing to be processed is reduced by one. In the worst case, neither of the two arrays becomes empty before the other one contains just one element. Therefore, for the worst case, Cmerge(n) = n - 1, and we have the recurrence

Cworst(n) = 2Cworst(n/2) + n - 1 for n > 1, Cworst(1) = 0.

In fact, it is easy to find the exact solution to the worst-case recurrence for n = 2^k:

Cworst(n) = n log2 n - n + 1 ∈ Θ(n log n).
Merge sort operation on the list 8, 3, 2, 9, 7, 1, 5, 4


Apply the merge sort algorithm to sort the following set of numbers 4, 9, 0, -1, 6, 8, 9, 2, 3, 12:


Quick Sort
Q. Design an algorithm for performing Quicksort, analyze it’s efficiency.
Quicksort is another important sorting algorithm that is based on the divide-and-conquer approach. Unlike mergesort, which divides its input's elements according to their position in the array, quicksort divides them according to their value.
Specifically, it rearranges elements of a given array A[0 .. n-1] to achieve its partition: a situation where all the elements before some position s are smaller than or equal to A[s] and all the elements after position s are greater than or equal to A[s].

After a partition has been achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two subarrays of the elements preceding and following A[s] independently.

As the pivot we use, for simplicity, the first element of the subarray: p = A[l].


If scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j] and resume the scans by
incrementing i and decrementing j, respectively:

If the scanning indices have crossed over, i.e., i > j, we will have partitioned the array after exchanging the pivot
with A[j]:

Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, the value they are pointing to must be equal to p. Thus, we have the array partitioned, with the split position s = i = j:

We can combine the last case with the case of crossed-over indices (i > j) by exchanging the pivot with A[j]
whenever i >= j.
Pseudo code for implementing the partitioning procedure:
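The partitioning pseudocode is not reproduced in this text version, so the following Python sketch shows Hoare's partitioning with the subarray's first element as the pivot, together with the quicksort routine that uses it (an illustrative sketch, with explicit bound checks added for safety).

def hoare_partition(A, l, r):
    p = A[l]                        # pivot: first element of the subarray
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < p:  # scan from the left for an element >= p
            i += 1
        j -= 1
        while A[j] > p:             # scan from the right for an element <= p
            j -= 1
        if i >= j:                  # scanning indices crossed (or met)
            break
        A[i], A[j] = A[j], A[i]     # exchange and resume the scans
    A[l], A[j] = A[j], A[l]         # place the pivot at its final position s = j
    return j

def quicksort(A, l=0, r=None):
    if r is None:
        r = len(A) - 1
    if l < r:
        s = hoare_partition(A, l, r)
        quicksort(A, l, s - 1)      # sort the elements before the split position
        quicksort(A, s + 1, r)      # sort the elements after the split position
    return A

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))   # [1, 2, 3, 4, 5, 7, 8, 9]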

Quicksort's efficiency: The number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over, and n if they coincide. If all the splits happen in the middle of the corresponding subarrays, we have the best case, and the number of key comparisons in the best case satisfies the recurrence

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0,

whose solution is Cbest(n) ∈ Θ(n log n).

In the worst case (e.g., an already sorted array with the first element used as the pivot), one of the two subarrays is empty after each partition, and the total number of key comparisons made will be equal to

Cworst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 - 3 ∈ Θ(n^2).

Q. Sort the letters of the word "ALGORITHM" by applying the quicksort method.


Q. Sort the following set of elements by applying the quicksort method: 5, 3, 1, 9, 8, 2, 4, 7


Q. Sort the following set of elements by applying the merge sort technique: 60, 50, 25, 10, 35, 25, 75, 30

Phase I (Divide): divide the array into subarrays.


Phase II (Conquer): sort and merge the subarrays into a single sorted array.

DIVIDE AND CONQUER: Binary Tree Traversals.


Q. Explain Binary tree traversal.
Divide-and-conquer technique can be applied to binary trees. A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and two disjoint binary trees TL and TR called, respectively, the left and right subtree of the root. We usually think of a binary tree as a special case of an ordered tree. Many problems about binary trees can be solved by applying the divide-and-conquer technique.


Standard representation of a binary tree


As an example, let us consider a recursive algorithm for computing the height of a binary tree. Recall that
the height is defined as the length of the longest path from the root to a leaf. Hence, it can be computed as
the maximum of the heights of the root’s left and right subtrees plus 1.
Recursive algorithm for computing the height of a binary tree
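The pseudocode is not reproduced in this text version; a minimal Python sketch follows, using the usual convention that the height of the empty tree is -1 (so a single-node tree has height 0). The Node class and the sample tree are illustrative.

class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def height(T):
    if T is None:                   # height of the empty tree is defined as -1
        return -1
    return max(height(T.left), height(T.right)) + 1   # taller subtree plus 1

# Hypothetical tree: root 'a' with children 'b' and 'c'; 'b' has a left child 'd'
root = Node('a', Node('b', Node('d')), Node('c'))
print(height(root))                 # 2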

Binary tree and its traversals

The most important divide-and-conquer algorithms for binary trees are the three classic traversals: preorder, inorder, and postorder. All three traversals visit nodes of a binary tree recursively, i.e., by visiting the tree's root and its left and right subtrees. They differ only in the timing of the root's visit (a code sketch follows the list below):

• In the preorder traversal, the root is visited before the left and right subtrees are visited (in that order).

• In the inorder traversal, the root is visited after visiting its left subtree but before visiting the right subtree.

• In the postorder traversal, the root is visited after visiting the left and right subtrees (in that order).
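A minimal Python sketch of the three traversals, reusing the Node class and the sample tree root from the height example above; the three functions are identical except for the position of the visit.

def preorder(T, visit):
    if T is not None:
        visit(T.item)               # visit the root first
        preorder(T.left, visit)
        preorder(T.right, visit)

def inorder(T, visit):
    if T is not None:
        inorder(T.left, visit)
        visit(T.item)               # visit the root between the two subtrees
        inorder(T.right, visit)

def postorder(T, visit):
    if T is not None:
        postorder(T.left, visit)
        postorder(T.right, visit)
        visit(T.item)               # visit the root last

preorder(root, print)               # a b d c
inorder(root, print)                # d b a c
postorder(root, print)              # d b c a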


Multiplication of Large Integers

Strassen’s multiplication.
Q. Explain Strassen’s multiplication (Jan/Feb 2021 – 04 Marks)
Strassen's matrix multiplication is a divide-and-conquer approach to the matrix multiplication problem. The usual matrix multiplication method multiplies each row with each column to obtain the product matrix; the time complexity of this approach is O(n^3), since it uses three nested loops. Strassen's method was introduced to reduce the time complexity from O(n^3) to O(n^log2 7), which is about O(n^2.81).
Strassen's Matrix Multiplication Algorithm
In this context, using Strassen's matrix multiplication algorithm, the time consumption can be improved somewhat. Strassen's matrix multiplication is usually described for square matrices whose order n is a power of 2 (other sizes can be handled by padding); both matrices are of order n × n.
Strassen suggested a divide-and-conquer strategy-based matrix multiplication technique that requires fewer multiplications than the traditional method. The multiplication operation is defined as follows using Strassen's method:

C11 = S1 + S4 – S5 + S7
C12 = S3 + S5
C21 = S2 + S4
C22 = S1 + S3 – S2 + S6

Where,


S1 = (A11 + A22) * (B11 + B22)


S2 = (A21 + A22) * B11
S3 = A11 * (B12 – B22)
S4 = A22 * (B21 – B11)
S5 = (A11 + A12) * B22
S6 = (A21 – A11) * (B11 + B12)
S7 = (A12 – A22) * (B21 + B22)
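A minimal Python sketch of these formulas for the 2 × 2 base case is given below; for larger matrices (n a power of 2) the same seven products are formed recursively on the n/2 × n/2 blocks. The function name and the test matrices are illustrative only.

def strassen_2x2(A, B):
    # A and B are 2 x 2 matrices given as [[a11, a12], [a21, a22]]
    (A11, A12), (A21, A22) = A
    (B11, B12), (B21, B22) = B
    S1 = (A11 + A22) * (B11 + B22)
    S2 = (A21 + A22) * B11
    S3 = A11 * (B12 - B22)
    S4 = A22 * (B21 - B11)
    S5 = (A11 + A12) * B22
    S6 = (A21 - A11) * (B11 + B12)
    S7 = (A12 - A22) * (B21 + B22)
    C11 = S1 + S4 - S5 + S7          # only 7 multiplications are needed
    C12 = S3 + S5
    C21 = S2 + S4
    C22 = S1 + S3 - S2 + S6
    return [[C11, C12], [C21, C22]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]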
Consider first the straightforward divide-and-conquer approach, which computes AB by the ordinary block formulas
C11 = A11 * B11 + A12 * B21, C12 = A11 * B12 + A12 * B22, C21 = A21 * B11 + A22 * B21, C22 = A21 * B12 + A22 * B22.
This requires eight multiplications of n/2 × n/2 matrices and four additions of n/2 × n/2 matrices. Since two n/2 × n/2 matrices can be added in time cn^2 for some constant c, the overall computing time T(n) of the resulting divide-and-conquer algorithm is given by the recurrence

T(n) = b for n <= 2,
T(n) = 8T(n/2) + cn^2 for n > 2,

where b and c are constants.

The solution of this recurrence is T(n) = O(n^3); hence no improvement over the conventional method has been made. Since matrix multiplications are more expensive than matrix additions (O(n^3) versus O(n^2)), we can attempt to reformulate the equations for the Cij's so as to have fewer multiplications and possibly more additions. Volker Strassen discovered a way to compute the Cij's using only 7 multiplications and 18 additions or subtractions.
His method involves first computing the seven n/2 × n/2 matrices S1, S2, ..., S7 given above; the Cij's are then computed using the formulas for C11, C12, C21, and C22. As can be seen, S1 through S7 can be computed using 7 matrix multiplications and 10 matrix additions or subtractions, and the Cij's require an additional 8 additions or subtractions.
The resulting recurrence relation for T(n) is

T(n) = b for n <= 2,
T(n) = 7T(n/2) + an^2 for n > 2,

where a and b are constants, and its solution is T(n) = O(n^log2 7) ≈ O(n^2.81).


Q. Apply the Strassen’s multiplication algorithm to multiply the following matrices and show the details
of the computation


Q. Apply the Strassen's multiplication algorithm to multiply the following matrices and discuss how this method is better than the direct matrix multiplication method.


Q. Apply the Strassen’s multiplication algorithm to multiply the following matrices

Q. Apply the Strassen’s multiplication algorithm to multiply the following matrices


(Aug-Sep 2020 – 10 marks)

Q. Multiply the given two matrices A and B using Strassen's approach, where


Solution:
