Greedy Method, Basic Traversal and Search Techniques
SYLLABUS
Greedy Method: General Method, Applications – Job Sequencing with Deadlines, Knapsack Problem, Minimum Cost Spanning Trees, Single Source Shortest Path Problem.
Basic Traversal and Search Techniques: Techniques for Binary Trees, Techniques for Graphs, Connected Components, Biconnected Components.
LEARNING OBJECTIVES
Concept of Greedy Algorithm
Applications of Greedy Method such as Knapsack Problem and Job Sequencing with Deadlines
Concept of Spanning Tree and Minimum Cost Spanning Tree
Various Algorithms such as Prim's Algorithm and Kruskal's Algorithm
Single Source Shortest Path Problem
Various Tree Traversal Methods
Different Graph Traversal Techniques
Overview of Connected and Biconnected Components
INTRODUCTION
The greedy method is considered one of the most powerful and straightforward techniques in designing an algorithm, and a wide variety of problems can be solved using it. These problems hold 'n' inputs that require the programmer to obtain a subset of the set of inputs which satisfies certain constraints. This subset is called a feasible solution. The knapsack problem applies the greedy method to select items of maximum value from a list of items; the resultant knapsack should contain items of maximum total value.
A minimum spanning tree can be defined as a spanning tree, containing the weights and lengths of the edges, for which the total weight of the tree is kept minimum. The two algorithms used for the minimum spanning tree are Prim's and Kruskal's algorithms. The single source shortest path problem is defined as a problem wherein the shortest path between a source vertex and each of the other vertices is determined.
Trees are used to store information in a computer. Therefore, some procedures are required for traversing this information, that is, for visiting each vertex or node of a tree in some specific order. Graphs can similarly be explored using traversal and search techniques.
DESIGN AND ANALYSIS OF ALGORITHMS
4.1 GREEDY METHOD
4.1.1 General Method
Q1. Explain the terms feasible solution, optimal solution and objective function.
Answer:
Feasible Solution
A feasible solution can be defined as a subset of a solution which satisfies the restrictions applied to the problem.
(OR)
A solution in which the values of the decision variables satisfy all the constraints of a Linear Programming (LP) problem is known as a feasible solution.
Usually, feasible solutions are present in the feasible region, and a linear combination of one or more basic feasible solutions may result in an optimal solution.
Optimal Solution
A solution that satisfies all the constraints of an LP problem with high profit and low cost is known as an optimal solution.
(OR)
An optimal solution is a feasible solution that maximizes or minimizes the given objective function. In general, every problem will have a unique optimal solution.
Properties of Optimal Solution
1. It lies on the boundary of the feasible region, which indicates that the interior points of the feasible region are neglected while searching for an optimal solution.
2. It lies at the extreme ends (corner points) of the feasible region, which specifies that the work of the search procedure is reduced while searching for an optimal solution.
Objective Function
A function that is used to determine a good solution for an optimization problem is known as an objective function.
The equation f = ax + by + cz + ... with linear constraints is the objective function. The maximum or minimum value of the objective function is the optimal value, which together with the corresponding feasible point generates an optimal solution.
Q2. What is greedy method? Explain with example.
(OR)
Explain the general method of Greedy method.
Aug./Sep.-21(R18), Q7(a)
Answer :
Greedy Method
The greedy method suggests devising an algorithm that works in stages. These problems hold 'n' inputs that require obtaining a subset which satisfies the given constraints; such a subset is called a feasible solution. The user has to obtain a feasible solution that either maximizes or minimizes the given objective function; such a feasible solution is called an optimal solution. The solution is built up starting from an empty set. At each stage, a particular input is considered using a selection procedure; if the inclusion of that input gives a feasible solution, it is added to the list, otherwise it is discarded. This process goes on until all inputs are considered, and a correctly constructed optimal solution is possible only if each such choice is made greedily. This process is called the subset paradigm of the greedy technique.
SIA PUBLISHERS AND DISTRIBUTORS PVT. LTD.
Once a greedy choice is made at some step, that choice is not altered in a subsequent step. Consider the example of the coin-changing problem. Suppose that change is required for an amount A using the fewest number of coins, and the available denominations are 1, 5 and 10. A greedy algorithm might use the rule: select the largest denomination available. The rationale behind this rule is that using large denominations tends to advance toward the goal faster than using small denominations. As an example, to make change for A = 18, one would first choose a coin of denomination 10. Since it is then required to make change for A = 8, a coin of denomination 5 is chosen. Since it is finally required to make change for A = 3, three coins of denomination 1 are chosen. The greedy algorithm would thus use five coins to make change for 18.
Algorithm
greedy_coin_change(denomtn, A)
// This algorithm makes change for an amount A using coins of denominations
// denomtn[1] > denomtn[2] > ... > denomtn[n]
// Input parameters: denomtn, A
// Output parameters: none
{
    i = 1
    while (A > 0)
    {
        Z = ⌊A / denomtn[i]⌋
        print("use " + Z + " coins of denomination " + denomtn[i])
        A = A − Z × denomtn[i]
        i = i + 1
    }
}
The worst-case time of the above algorithm is O(n), since the while loop iterates at most n times in the worst case.
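The rule above can be sketched in Python. The function name and the returned denomination-to-count mapping are illustrative choices, not taken from the text:

```python
def greedy_coin_change(denoms, amount):
    """Greedy change-making: denoms must be sorted in decreasing order and
    end with 1 so that the loop always terminates."""
    counts = {}
    for d in denoms:
        if amount <= 0:
            break
        z, amount = divmod(amount, d)  # z coins of denomination d fit
        if z:
            counts[d] = z
    return counts

# A = 18 with denominations 10, 5, 1: greedy picks 10 + 5 + 1 + 1 + 1, five coins.
print(greedy_coin_change([10, 5, 1], 18))  # {10: 1, 5: 1, 1: 3}
```

Because the denominations include 1, the loop is guaranteed to reduce the amount to zero after at most n denominations, matching the O(n) bound above.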
Advantages
1. Greedy algorithms are easy to invent, easy to understand and easy to code.
2. They are straightforward and most of the time show efficient results.
Disadvantage
There is no guarantee of obtaining a globally optimal solution, even after repeatedly performing locally optimal improvements to a locally optimal solution.
Q3. Write control abstraction of greedy method.
(OR)
Describe Greedy method control abstraction for the subset paradigm.
May/June-19(R16), Q4(b)
Answer :
The control abstraction for the subset paradigm is as follows,
Algorithm Greedy(x, n)
// x[1 : n] holds 'n' inputs
{
    solution := ∅;    // initialize the solution
    for i := 1 to n do
    {
        s := select(x);
        if feasible(solution, s) then
            solution := union(solution, s);
    }
    return solution;
}
The above code makes use of three functions, namely select, feasible and union.
The greedy selection function performs two operations:
(i) It selects an input from x[ ].
(ii) It deletes that input from x[ ].
After this, the selected value is assigned to s. The feasible function can be described as a Boolean function that calculates whether or not the selected input can be added to the solution vector. The union function operates by combining the selected input with the solution and then updating the objective function. The Greedy function specifies the order in which the functions select, feasible and union operate, and it appears after the implementation of these functions.
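The control abstraction can be mirrored directly in Python. The `feasible` and `objective` parameters below are illustrative stand-ins for the select/feasible/union trio described above, not names from the text:

```python
def greedy(inputs, feasible, objective):
    """Subset-paradigm greedy sketch: consider inputs in order of decreasing
    objective value (select), and keep each one whose inclusion leaves the
    partial solution feasible (feasible + union)."""
    solution = []
    for item in sorted(inputs, key=objective, reverse=True):  # select()
        if feasible(solution + [item]):       # feasible(solution, s)?
            solution.append(item)             # union(solution, s)
    return solution

# Toy use: pick numbers totalling at most 10, preferring larger values.
print(greedy([4, 7, 2, 5], lambda s: sum(s) <= 10, lambda x: x))  # [7, 2]
```

Note that the greedy choice 7 blocks 5 even though {5, 4} would also fit; this is exactly the "no global guarantee" disadvantage mentioned earlier.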
Q4. Differentiate between divide and conquer algorithm and greedy algorithm.
Answer :
The differences between the greedy approach and the divide and conquer approach are as follows,
| Greedy Approach | Divide and Conquer Approach |
| 1. In the greedy method, there are some chances of getting an optimal solution to a specific problem. | 1. Divide and conquer is a result-oriented approach. |
| 2. The time taken by this algorithm is efficient when compared to the divide and conquer approach. | 2. The time taken by this algorithm is not that much efficient when compared to the greedy approach. |
| 3. This approach cannot be used further if the subset chosen does not satisfy the specified constraints. | 3. This approach does not depend on constraints to solve a specific problem. |
| 4. This approach is applicable to, as well as efficient for, a wide variety of problems. | 4. This approach is not efficient for larger problems. |
| 5. The space requirement is less when compared to the divide and conquer approach. | 5. As the problem is divided into a large number of subproblems, the space requirement is very high. |
| 6. This limitation is rectified in the greedy approach. | 6. This approach is not applicable to problems which are not divisible. |
4.1.2 Applications
4.1.2.1 Job Sequencing with Deadlines
Explain the problem of job sequencing with deadlines and profits.
May/June-19(R16); Nov./Dec.-19(R18); Dec.-17(R15)
Answer :
In the job sequencing with deadlines problem, a single machine is available for processing jobs. Given a set of 'n' jobs, each job i has an associated deadline dᵢ ≥ 0 and a profit pᵢ > 0. The profit pᵢ is earned only if job i is completed by its deadline. Each job takes one unit of processing time on the machine, and only one job can be processed at a time.
A subset 'J' chosen from the given set of jobs is a feasible solution if and only if each job in this subset can be completed by its deadline. The value of a feasible solution J is the sum of the profits of the jobs in J, i.e., Σᵢ∈J pᵢ. An optimal solution is a feasible solution with maximum value. A candidate subset J can be checked for feasibility by trying the possible permutations of the jobs in J and verifying that in some order the jobs can be processed without violating the deadlines.
@ scanned with OKEN Scannerunit
auger!
reedy Method, Basic T
‘ faversal and Search Techniques =
am
The job sequencing with deadlines and profits algorithm is as follows,
1.  Algorithm JS(D, J, n)
2.  // D[i] ≥ 1, 1 ≤ i ≤ n, are the deadlines. The jobs are ordered such that p[1] ≥ p[2] ≥ ... ≥ p[n].
3.  // J[i] is the ith job in the optimal solution, 1 ≤ i ≤ K.
4.  {
5.      D[0] := 0; J[0] := 0;        // initialize
6.      K := 1; J[1] := 1;           // include job 1
7.      for i := 2 to n do
8.      {
9.          // Consider jobs in decreasing order of p[i]; find a position for i and check feasibility of insertion
10.         S := K;
11.         while ((D[J[S]] > D[i]) and (D[J[S]] ≠ S)) do
12.             S := S − 1;
13.         if ((D[J[S]] ≤ D[i]) and (D[i] > S)) then
14.         {
15.             // insert job i into J
16.             for q := K to S + 1 step −1 do
17.                 J[q + 1] := J[q];
18.             J[S + 1] := i; K := K + 1;
19.         }
20.     }
21.     return K;
22. }
Example
Consider n = 4, (p₁, p₂, p₃, p₄) = (100, 10, 15, 27) and (d₁, d₂, d₃, d₄) = (2, 1, 2, 1).
Begin with J = ∅ and Σ pᵢ = 0. Job 1 is added first, as it has the largest profit, and J = {1} is a feasible solution. Job 4 (the next largest profit) is then considered and added, as J = {1, 4} is also feasible: job 4 can be processed in slot 1 and job 1 in slot 2. Next, job 3 is considered and discarded, as J = {1, 3, 4} is not a feasible solution. Similarly, job 2 is considered and discarded, as J = {1, 2, 4} is not feasible. Jobs are added to J only when all the jobs included in J can finish within their deadlines; any such J is said to be a feasible solution, and a feasible solution with maximum profit is said to be an optimal solution.
In the above problem, J = {1, 4} is the feasible solution with maximum value, 100 + 27 = 127. Therefore, J = {1, 4} is the optimal solution for the given problem instance.
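The procedure above can be sketched in Python. The slot array used here is an illustrative implementation device (the book's pseudocode shifts entries of J instead), and the function name is not from the text:

```python
def job_sequencing(profits, deadlines):
    """Schedule unit-time jobs for maximum profit: consider jobs in
    non-increasing profit order and place each in the latest free slot
    at or before its deadline."""
    n = len(profits)
    max_d = max(deadlines)
    slot = [0] * (max_d + 1)            # slot[t] holds the job done at time t
    total = 0
    for i in sorted(range(n), key=lambda i: -profits[i]):
        for t in range(min(deadlines[i], max_d), 0, -1):
            if slot[t] == 0:            # latest free slot before the deadline
                slot[t] = i + 1         # store the 1-based job number
                total += profits[i]
                break
    return [j for j in slot if j], total

# The instance above: p = (100, 10, 15, 27), d = (2, 1, 2, 1).
print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))  # ([4, 1], 127)
```

The returned list gives the processing order by time slot: job 4 in slot 1, job 1 in slot 2, for a total profit of 127, agreeing with the worked example.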
Q10. Write in detail about faster job sequencing algorithm.
Answer :
The faster job sequencing algorithm makes use of disjoint set trees with both the weighted union and collapsing rules to improve the performance of the original job sequencing algorithm. The time complexity of this faster version of the algorithm is O(n α(2n, n)), where α is a very slowly growing function. The faster job sequencing algorithm is given below.
Algorithm FasterJS(D, n, b, J)
// Find an optimal solution J[1 : K]. It is assumed that p[1] ≥ p[2] ≥ ... ≥ p[n] and that b = min{n, maxᵢ D[i]}.
{
    // Initially there are b + 1 single-node trees, one for each time slot 0, 1, ..., b
    for x := 0 to b do F[x] := x;    // F[x] is the largest free slot in the tree rooted at x
    K := 0;                           // initialize
    for i := 1 to n do
    {   // use the greedy rule
        q := CollapsingFind(min(n, D[i]));
        if (F[q] ≠ 0) then
        {
            K := K + 1; J[K] := i;    // select job i
            m := CollapsingFind(F[q] − 1);
            WeightedUnion(m, q);
            F[q] := F[m];             // q may be the new root
        }
    }
}
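The disjoint-set machinery can be sketched in Python. Path compression stands in for the collapsing rule, and the single assignment `parent[t] = t - 1` plays the role of the union step; this structure is illustrative rather than the exact WeightedUnion/CollapsingFind pair used above:

```python
def faster_js(profits, deadlines):
    """Faster job sequencing sketch: parent[] implements disjoint slot trees
    so that find(t) returns the latest free slot at or before time t."""
    n = len(profits)
    b = min(n, max(deadlines))
    parent = list(range(b + 1))        # b + 1 single-node trees for slots 0..b

    def find(t):                       # collapsing rule via path compression
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    total, chosen = 0, []
    for i in sorted(range(n), key=lambda i: -profits[i]):
        t = find(min(deadlines[i], b))
        if t > 0:                      # slot t is free: schedule job i there
            parent[t] = t - 1          # merge with the tree of slot t - 1
            total += profits[i]
            chosen.append(i + 1)
    return sorted(chosen), total

print(faster_js([100, 10, 15, 27], [2, 1, 2, 1]))  # ([1, 4], 127)
```

Each job now costs nearly constant amortized time for its slot lookup, which is where the improvement over the O(n²) insertion-based JS procedure comes from.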
4.1.2.2 Knapsack Problem
The greedy algorithm for the knapsack problem is as follows,
1.  Algorithm GreedyKnapsack(m, n)
2.  // p[1:n] and w[1:n] contain the profits and weights of the n objects,
3.  // ordered such that p[i]/w[i] ≥ p[i+1]/w[i+1]. m is the knapsack
4.  // size and x[1:n] is the solution vector.
5.  {
6.      for i := 1 to n do x[i] := 0.0;   // initialization
7.      U := m;
8.      for i := 1 to n do
9.      {
10.         if (w[i] > U) then break;
11.         x[i] := 1.0; U := U − w[i];
12.     }
13.     if (i ≤ n) then x[i] := U/w[i];
14. }
The above algorithm's functionality is based on how the objects are arranged, namely in non-increasing order of the ratio p/w, i.e., the profit pᵢ divided by the weight wᵢ. Given this ordering, the time taken by the loop of this algorithm is O(n); sorting the objects beforehand takes O(n log n).
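A Python sketch of the procedure; unlike the pseudocode, it performs the p/w sort itself rather than assuming pre-sorted input, and the function name and return format are illustrative:

```python
def greedy_knapsack(m, profits, weights):
    """Fractional knapsack: take whole objects in non-increasing p/w order,
    then a fraction of the first object that no longer fits."""
    n = len(profits)
    x = [0.0] * n
    u = m                                    # remaining capacity
    for i in sorted(range(n), key=lambda i: profits[i] / weights[i],
                    reverse=True):
        if weights[i] > u:
            x[i] = u / weights[i]            # fractional part of one object
            break
        x[i] = 1.0                           # whole object fits
        u -= weights[i]
    return x

# Classic instance: n = 3, m = 20, p = (25, 24, 15), w = (18, 15, 10).
x = greedy_knapsack(20, [25, 24, 15], [18, 15, 10])
print(x)  # [0.0, 1.0, 0.5], total profit 24 + 7.5 = 31.5
```

At most one object is taken fractionally, which is the structural fact the optimality proof below relies on.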
Q13. Prove that if p₁/w₁ ≥ p₂/w₂ ≥ ... ≥ pₙ/wₙ, then the greedy knapsack algorithm generates an optimal solution of the given instance of the fractional knapsack problem.
(OR)
Prove that greedy knapsack generates an optimal solution to the given instance of the knapsack problem, when the profit-weight ratios are arranged in non-increasing order.
Answer :
Let X = (x₁, x₂, ..., xₙ) be the solution generated by the greedy knapsack algorithm. If all the xᵢ equal 1, then the solution is clearly optimal. So, let j be the least index such that xⱼ ≠ 1. It follows that,
(i) xᵢ = 1 for all i < j,
(ii) xᵢ = 0 for all i > j, and 0 ≤ xⱼ < 1,
(iii) Σ₁≤ᵢ≤ₙ wᵢxᵢ = m (the knapsack capacity).
Now, let R = (r₁, r₂, ..., rₙ) be an optimal solution, so that Σ wᵢrᵢ ≤ m, and let k be the least index such that rₖ ≠ xₖ. Then rₖ < xₖ: if k < j, then xₖ = 1 and rₖ ≠ xₖ forces rₖ < xₖ; if k = j, then since rᵢ = xᵢ for i < j and Σ wᵢxᵢ = m, having rₖ > xₖ would give Σ wᵢrᵢ > m, which is impossible; and if k > j, then again Σ wᵢrᵢ > m, which is impossible.
Now, consider the situation of increasing the value of rₖ so as to make it equal to xₖ. Then it is required to decrease as many of the values (rₖ₊₁, ..., rₙ) as necessary so that the capacity used is unchanged. This leads to a new solution T = (t₁, t₂, ..., tₙ) with tᵢ = xᵢ for 1 ≤ i ≤ k and
    Σₖ<ᵢ≤ₙ wᵢ(rᵢ − tᵢ) = wₖ(tₖ − rₖ).
Then for the solution T, the evaluation of Σ pᵢtᵢ is as follows,
    Σ pᵢtᵢ = Σ pᵢrᵢ + (tₖ − rₖ)wₖ(pₖ/wₖ) − Σₖ<ᵢ≤ₙ (rᵢ − tᵢ)wᵢ(pᵢ/wᵢ)
           ≥ Σ pᵢrᵢ + [(tₖ − rₖ)wₖ − Σₖ<ᵢ≤ₙ (rᵢ − tᵢ)wᵢ](pₖ/wₖ)      [since pᵢ/wᵢ ≤ pₖ/wₖ for i > k]
           = Σ pᵢrᵢ.
If Σ pᵢtᵢ > Σ pᵢrᵢ, then R cannot be an optimal solution, a contradiction. If Σ pᵢtᵢ = Σ pᵢrᵢ, then either T = X and X is optimal, or T ≠ X and the transformation can be repeated until T is transformed into X; in either case it can be concluded that X is optimal.
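The exchange argument can also be checked empirically: no randomly generated feasible fractional solution ever beats the greedy value. This is an illustrative sanity check, not part of the proof, and all names in it are hypothetical:

```python
import random

def greedy_value(m, p, w):
    """Total profit achieved by the greedy fractional-knapsack rule."""
    u, val = m, 0.0
    for i in sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True):
        take = min(1.0, u / w[i])          # whole object, or a fraction
        val += take * p[i]
        u -= take * w[i]
        if u <= 0:
            break
    return val

random.seed(1)
p = [random.randint(1, 50) for _ in range(5)]
w = [random.randint(1, 20) for _ in range(5)]
m = sum(w) / 2
best = greedy_value(m, p, w)
for _ in range(1000):
    x = [random.random() for _ in range(5)]              # random fractions
    scale = min(1.0, m / sum(xi * wi for xi, wi in zip(x, w)))
    x = [xi * scale for xi in x]                         # scale to fit capacity
    assert sum(xi * pi for xi, pi in zip(x, p)) <= best + 1e-9
print("no random feasible solution beat the greedy value", best)
```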
Q14. Find an optimal solution to the knapsack instance n = 7, m = 15, (p₁, p₂, p₃, p₄, p₅, p₆, p₇) = (10, 5, 15, 7, 6, 18, 3) and (w₁, w₂, w₃, w₄, w₅, w₆, w₇) = (2, 3, 5, 7, 1, 4, 1).
Answer :
Given that,
n = 7
m = 15
(p₁, p₂, p₃, p₄, p₅, p₆, p₇) = (10, 5, 15, 7, 6, 18, 3)
(w₁, w₂, w₃, w₄, w₅, w₆, w₇) = (2, 3, 5, 7, 1, 4, 1)
Where,
(p₁, w₁) = (10, 2)
(p₂, w₂) = (5, 3)
(p₃, w₃) = (15, 5)
(p₄, w₄) = (7, 7)
(p₅, w₅) = (6, 1)
(p₆, w₆) = (18, 4)
(p₇, w₇) = (3, 1)
Solving this problem involves the following three steps,
1. Addition Operation
   S¹ᵢ = Sⁱ + (pᵢ₊₁, wᵢ₊₁)
2. Merging Operation
   Sⁱ⁺¹ = Sⁱ ∪ S¹ᵢ
3. Purging Rule or Dominance Rule
   Take any two pairs (Pⱼ, Wⱼ) and (Pₖ, Wₖ). The purging rule states that if Pⱼ ≤ Pₖ and Wⱼ ≥ Wₖ, then the dominated pair (Pⱼ, Wⱼ) is deleted. A pair whose weight exceeds the capacity m is also deleted. The purging rule must be applied after performing the merging operation.
S⁰ = {(0, 0)}
S¹₀ = S⁰ + (10, 2) = {(10, 2)}
Now,
S¹ = S⁰ ∪ S¹₀ (merging operation)
   = {(0, 0)} ∪ {(10, 2)}
   = {(0, 0), (10, 2)}
S¹₁ = S¹ + (5, 3) (addition operation)
   = {(5, 3), (15, 5)}
Now,
S² = S¹ ∪ S¹₁
   = {(0, 0), (5, 3), (10, 2), (15, 5)}
By using the purging rule, the pair (5, 3) is deleted from S². This is because, considering (Pⱼ, Wⱼ) = (5, 3) and (Pₖ, Wₖ) = (10, 2), both Pⱼ ≤ Pₖ and Wⱼ ≥ Wₖ are satisfied, i.e., 5 ≤ 10 and 3 ≥ 2.
S² = {(0, 0), (10, 2), (15, 5)}
S¹₂ = S² + (15, 5)
   = {(15, 5), (25, 7), (30, 10)}
S³ = S² ∪ S¹₂
   = {(0, 0), (10, 2), (15, 5), (25, 7), (30, 10)}
S¹₃ = S³ + (7, 7)
   = {(7, 7), (17, 9), (22, 12), (32, 14), (37, 17)}
S⁴ = S³ ∪ S¹₃
   = {(0, 0), (7, 7), (10, 2), (15, 5), (17, 9), (22, 12), (25, 7), (30, 10), (32, 14), (37, 17)}
By the purging rule, (7, 7), (17, 9) and (22, 12) are deleted from S⁴, since each is dominated: (7, 7) by (10, 2), (17, 9) by (25, 7) and (22, 12) by (30, 10). The pair (37, 17) is deleted because its weight exceeds m = 15.
S⁴ = {(0, 0), (10, 2), (15, 5), (25, 7), (30, 10), (32, 14)}
S¹₄ = S⁴ + (6, 1)
   = {(6, 1), (16, 3), (21, 6), (31, 8), (36, 11), (38, 15)}
S⁵ = S⁴ ∪ S¹₄
By the purging rule, (15, 5) is deleted (dominated by (16, 3)), (30, 10) is deleted (dominated by (31, 8)) and (32, 14) is deleted (dominated by (36, 11)).
S⁵ = {(0, 0), (6, 1), (10, 2), (16, 3), (21, 6), (25, 7), (31, 8), (36, 11), (38, 15)}
S¹₅ = S⁵ + (18, 4)
   = {(18, 4), (24, 5), (28, 6), (34, 7), (39, 10), (43, 11), (49, 12), (54, 15)}, where (38, 15) + (18, 4) = (56, 19) is dropped since 19 > m
S⁶ = S⁵ ∪ S¹₅
By the purging rule, (21, 6), (25, 7), (31, 8), (36, 11) and (38, 15) are deleted (dominated by (24, 5), (28, 6), (34, 7), (39, 10) and (54, 15) respectively).
S⁶ = {(0, 0), (6, 1), (10, 2), (16, 3), (18, 4), (24, 5), (28, 6), (34, 7), (39, 10), (43, 11), (49, 12), (54, 15)}
S¹₆ = S⁶ + (3, 1)
   = {(3, 1), (9, 2), (13, 3), (19, 4), (21, 5), (27, 6), (31, 7), (37, 8), (42, 11), (46, 12), (52, 13)}, where (54, 15) + (3, 1) = (57, 16) is dropped since 16 > m
S⁷ = S⁶ ∪ S¹₆
By the purging rule, (3, 1), (9, 2), (13, 3), (18, 4), (21, 5), (27, 6), (31, 7), (42, 11) and (46, 12) are deleted as dominated pairs.
S⁷ = {(0, 0), (6, 1), (10, 2), (16, 3), (19, 4), (24, 5), (28, 6), (34, 7), (37, 8), (39, 10), (43, 11), (49, 12), (52, 13), (54, 15)}
@ scanned with OKEN Scannerr Me ee
Greedy Method, Basic Traversal and Sp;
ooo arch
2 (54.15) — (PosWo)_ [v (PW) = (Pp,
54,15) ~ (18) i
(36,11)
2 (611eS%
(36,11) ¢S*
WW)
Next object
= 36,11) = (PW)
86.11) =)
(30,10)
= GOIN) Est
Next object
(30,10) ~ (P,W3)
(30,10) — (15,5)
= (155)
(15,5) € S?
(15,5) ¢S*
Next object
(15,5) - (Pa.Wa)
(15,5) - 63)
= (10,2)
(10,2) 5" a
= (10,2) ¢5°
met
Hence, the optimal solution is,
(x1, x2, x3, x4,
Mi
ea i
4.1.2.3 Minimum Cost Spanning Trees
Q15. What is tree, spanning tree? Explain with an example.
(OR)
Define tree.
(Refer Only Topic: Tree)                    April-18(R15), Q5
Answer :
Tree
A tree is a non-linear data structure represented in a hierarchical manner. It contains a finite set of elements, called 'nodes', which are connected to each other by directed lines called 'branches'.
Example
Figure: Tree
144 oe
poe ‘eas cal ge
abe ne sealed a spanning tee of GHC AHS the oy
For acu th G. a subgraph Tof G is alle
conditions
all vertices of G without any circuits or loops. inal vrices of G7 msi,
tice T of a graph Gis a subgraph of G sich oe ae the edges are called branches
a Secong tee sip led os a ead = 1 edges. If suppose 7.
en terse then a eating te of sould have Yess SE 1G. A et fat
ning tree of a graph G. then the Ss OG whl se a a a noted by 7. Basic
of G is the complement of T in G. This is called a A gaphs Inadisconected graph ofn vertices its not psa?
“Pinine ues canbe const for Paes eee 2h sping
Coe When these ela ives ate combined together, spanning forest is formed. Every tree js ;
Spanning tree of itself. A connected graph can have more than one spanning
Since. a spanni
Q16. What is minimum cost spanning tree? Define spanning tree and list some applications of it.
Dec.-18(R16), Q8
Answer :
Minimum Cost Spanning Tree
A tree can be defined as an undirected, acyclic and connected graph; in a tree there is only one path connecting each pair of vertices. Given an undirected, connected graph G, a spanning tree can be defined as a subgraph of G which is a tree containing all the vertices of G.
A minimum spanning tree can be defined as a spanning tree of a weighted graph for which the total weight of the tree (the sum of the weights of its edges) is at a minimum.
Algorithm for Minimum Cost Spanning Tree
Prim's algorithm is very much similar to Dijkstra's algorithm for finding shortest paths. The algorithm is described as follows,
Initially, an arbitrary node is selected as the tree root (i.e., the tree root can be any node of an undirected graph) and then the nodes of the graph are added to the tree one at a time. This process continues until all the nodes of the graph are added to the tree.
The graph node which is added at each step is the node, not yet in the tree, that is separated from a tree node by an arc of minimum weight. When all the nodes of the graph are included in the tree, the minimum spanning tree is completed.
Prim's Algorithm for Minimum Spanning Tree
Input
A connected graph G with non-negative values assigned to each edge.
Output
A minimum spanning tree for G.
Method
Step 1
Select a vertex v₁ of G. Let V = {v₁} and E = {}.
Step 2
Select a nearest neighbour vⱼ of V, adjacent to some vᵢ ∈ V, for which the edge (vᵢ, vⱼ) does not form a circuit with members of E. Add vⱼ to V and (vᵢ, vⱼ) to E.
Step 3
Repeat Step 2 until |E| = n − 1. Then V contains all n vertices of G, and E contains the edges of a minimum spanning tree for G.
@ scanned with OKEN Scannered Greedy Method, Basic Traversal and Search 1 i
eal
with an example of Prim's. algorithm,
145
ov Doe,-18(R16), 7(b)
u
Se Algorithm
Gey
rent beg site ak {igorithm which ses the predy method inorder to find the minimum spanning re.
Fi Ee ay Om p a uly selecting a vertex as a tree node, It then conneets the vertex edge with another
1 ee the noes orth cae® NeIBhIs of cach vertex that is connecting it. After this, the nearest vertex is
pt contiines vatll - 21eph are then added to the tree (one at atime) by following the above process
isprooes all the nodes of the graph are addec e obtained tree is considere
Be ranimmom spanning tree, ‘raph are added to the tre. Hence, the obtained
‘The steps to find minimum spanning tre us
Select a vertex v of G. Let V= (v4) and &.
ng Prim’s algorithm are as follows,
sep Ht 1
sep 2 Select @ nearest neighbor v, of V that is adjacent to vj, y EV and for which the edge (vj, vp) does not form
“jasit with members of E. Add y, to Vandy,
sep 3: Repeat step 2, until £1 =n ~ 1. Then Vontains all n vertices of G and £ contains the edges of a minimum
gunning re for : ?
¥) 6 E respectively.
Pseudocode for Prim's Algorithm
Prim's algorithm for graphs that are represented by adjacency lists is given below,
1.  Procedure Prim(Gcon)        // Vmst holds the vertices of the minimum spanning tree
2.      Initialize Vmst to {v₀} // Emst holds the edges of the minimum spanning tree
3.      Initialize Emst to {}
4.      for i ← 1 to n − 1 do
5.          Determine a min-weight edge e* = (v*, u*) from all the edges (v, u), where v is in Vmst and u is in V − Vmst
6.          Vmst ← Vmst ∪ {u*}
7.          Emst ← Emst ∪ {e*}
8.      return Emst
The above pseudocode is used to construct a minimum spanning tree using Prim's method. The working of Prim's algorithm is as follows,
This algorithm takes a weighted connected graph Gcon as its input and returns the set of edges which form the minimum spanning tree of Gcon. Here, Gcon is given as Gcon = (V, E).
Time Complexity
In the above algorithm, the outer for loop executes n − 1 times and the inner edge-selection scan examines up to n vertices each time, so the time complexity of Prim's algorithm is,
T(n) = (n − 1) · n
T(n) = n² − n
∴ T(n) = O(n²)
Example
For a worked example, refer to Q18.
Q18. How many comparisons of edge weights will be done by the minimum spanning tree algorithm, in total, for an undirected graph with n vertices, given a start vertex?
Answer :
Consider the following graph,
Figure (1): Connected Weighted Graph G
Step 1
Start with the first vertex, i.e., v₁. The edge weights at this vertex are compared, i.e., 1 is compared with 3. Since 1 is smaller than 3, the edge (v₁, v₃) is selected.
Figure (2)
Step 2
Now, consider the vertex v₃. The edge weights at this vertex are compared and the edge with the smallest weight is selected, i.e., the edge with weight 2 is selected.
Figure (3)
Step 3
The edge weights at the newly added vertex are compared and, again, the smallest-weight edge that does not form a cycle is selected.
Figure (4)
Step 4
The edge weights at v₄ are compared and the edge (v₄, v₅) with weight 4 is selected, even though the edge (v₄, v₁) has the smallest value, i.e., 1. This is because, if (v₄, v₁) is selected, then the graph forms a cycle, which is not allowed in a spanning tree.
Figure (5)
Step 5
The edge weights at v₅ are compared and the edge (v₅, v₆) with weight 3 is selected.
The resulting graph obtained is the required minimum spanning tree. In this example, with n = 6 vertices, a comparison of edge weights is made for every candidate edge at each of the n − 1 steps; hence, for a graph with n vertices, the minimum spanning tree algorithm performs O(n²) comparisons in total.
Q19. Write Prim's algorithm under the assumption that the graphs are represented by adjacency matrices.
Answer :
Adjacency-Matrix Implementation of Prim's Algorithm
1.  Procedure Prim-mcs(c, k, Tree, mincost)
2.      real c(k, k), mincost;
3.      integer NEAR(k), k, a, b, j, Tree(1 : k − 1, 2);
4.      mincost ← 0
5.      for a ← 2 to k do
6.          NEAR(a) ← 1
7.      repeat
8.      NEAR(1) ← 0
9.      for a ← 1 to k − 1 do
10.         Let b be an index such that NEAR(b) ≠ 0 and c(b, NEAR(b)) is minimum
11.         (Tree(a, 1), Tree(a, 2)) ← (b, NEAR(b))
12.         mincost ← mincost + c(b, NEAR(b))
13.         NEAR(b) ← 0
14.         for j ← 1 to k do
15.             if NEAR(j) ≠ 0 and c(j, NEAR(j)) > c(j, b) then
16.                 NEAR(j) ← b
17.             endif
18.         repeat
19.     repeat
20.     if mincost ≥ ∞ then
21.         print ("no spanning tree")
22.     endif
23.     end Prim-mcs
Time Complexity
In the above algorithm, the outer for loop executes n − 1 times and the inner for loop executes n times, so the time complexity of Prim's algorithm is,
T(n) = (n − 1) · n
T(n) = n² − n
∴ T(n) = O(n²)
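The NEAR-array scheme translates to Python as below. Here `math.inf` marks absent edges, and the 4-vertex cost matrix is a hypothetical example rather than the graph of Q18:

```python
import math

def prim_mcs(cost):
    """Prim's algorithm over an adjacency (cost) matrix. near[j] is the
    tree vertex currently closest to j; -1 marks vertices in the tree."""
    k = len(cost)
    near = [0] * k                 # start the tree at vertex 0
    near[0] = -1
    tree, mincost = [], 0
    for _ in range(k - 1):
        # pick the outside vertex with the cheapest edge into the tree
        b = min((j for j in range(k) if near[j] != -1),
                key=lambda j: cost[j][near[j]])
        tree.append((near[b], b))
        mincost += cost[b][near[b]]
        near[b] = -1
        for j in range(k):         # update near[] for the remaining vertices
            if near[j] != -1 and cost[j][near[j]] > cost[j][b]:
                near[j] = b
    return tree, mincost

INF = math.inf                     # hypothetical symmetric 4-vertex graph
c = [[INF, 1, 4, INF],
     [1, INF, 2, 6],
     [4, 2, INF, 3],
     [INF, 6, 3, INF]]
print(prim_mcs(c))  # ([(0, 1), (1, 2), (2, 3)], 6)
```

The two loops mirror lines 9–18 of the pseudocode, so the O(n²) bound derived above applies directly.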
Q20. Explain the Kruskal's algorithm with an example.
(OR)
Write and explain the Kruskal's algorithm with an illustrative example.
Aug./Sep.-21(R18), Q7(b)
(OR)
Write Kruskal's algorithm.
(Refer Only Topics: First Paragraph, Kruskal's Algorithm)
Dec.-17(R18), Q8(a)