
Bharati Vidyapeeth (Deemed To Be University)

College of Engineering, Pune

DEPARTMENT OF COMPUTER ENGINEERING

DESIGN AND ANALYSIS OF ALGORITHMS

PRACTICAL NO- 6

NAME: Sashank Yadav


PRN NO: 1814110120
ROLL NO: 123
CLASS: B.Tech SEM 6 DIV 2
Aim: Dynamic programming I: Design, implement, and analyze a shortest path
algorithm.

Objective: To find the shortest path between nodes using dynamic programming.

Theory:
Dynamic programming:

Dynamic programming (usually referred to as DP) is a powerful technique for
solving a particular class of problems. It demands an elegant formulation of
the approach; once the idea is clear, the coding itself is easy. The idea is
simple: if you have solved a problem for a given input, save the result for
future reference so that you avoid solving the same problem again. In short,
"remember your past". If the given problem can be broken into smaller
sub-problems, and these sub-problems can in turn be divided into still smaller
ones, and in this process you observe some overlapping sub-problems, that is a
strong hint for DP. In addition, the optimal solutions to the sub-problems must
contribute to the optimal solution of the given problem (referred to as the
Optimal Substructure Property).

There are two ways of doing this.

1.) Top-Down: Start solving the given problem by breaking it down. If a
sub-problem has already been solved, just return the saved answer; if it has
not, solve it and save the answer. This is usually easy to think of and very
intuitive. It is referred to as memoization.

2.) Bottom-Up: Analyze the problem, determine the order in which the
sub-problems should be solved, and start from the trivial sub-problems, working
up towards the given problem. This guarantees that every sub-problem is solved
before it is needed. It is referred to as tabulation, or simply as dynamic
programming.
Types of Shortest Path Algorithms

The shortest path problem asks for a path between two vertices in a graph
such that the total sum of the edge weights is minimum.

There are two main types of shortest path algorithms: single-source and
all-pairs. Both types have algorithms that perform best in their own way.
All-pairs algorithms take longer to run because of the added complexity. All
shortest path algorithms return values that can be used to find the shortest
path, even if those return values vary in type or form from algorithm to
algorithm.

Single-source
Single-source shortest path algorithms operate under the following principle:

Definition:

Given a graph G with vertices V, edges E with weight function w(u,v), and a
single source vertex s, return the shortest paths from s to all other vertices
in V.

If the goal of the algorithm is to find the shortest path between only two given
vertices, s and t, then the algorithm can simply be stopped when that shortest
path is found. Because there is no way to decide which vertices to "finish" first,
all algorithms that solve for the shortest path between two given vertices have
the same worst-case asymptotic complexity as single-source shortest path
algorithms.

This paradigm also works for the single-destination shortest path problem. By
reversing all of the edges in a graph, the single-destination problem can be
reduced to the single-source problem. So, given a destination vertex, t, this
algorithm will find the shortest paths starting at all other vertices and ending at t.

All-pairs
All-pairs shortest path algorithms follow this definition:
Definition:

Given a graph G with vertices V and edges E with weight function w(u,v),
return the shortest path from u to v for every pair of vertices (u,v) in V.

The most common algorithm for the all-pairs problem is the Floyd-Warshall
algorithm. This algorithm returns a matrix of values M, where each cell M[i][j]
is the distance of the shortest path from vertex i to vertex j. Path
reconstruction can recover the actual path that achieves that shortest
distance, but it is not part of the fundamental algorithm.

Bellman-Ford algorithm
The Bellman-Ford algorithm solves the single-source problem in the general
case, where edges can have negative weights and the graph is directed. If the
graph is undirected, it must first be made directed by replacing each edge
with two directed edges, one in each direction.

Bellman-Ford has the property that it can detect negative weight cycles
reachable from the source, in which case no shortest path exists: a path could
loop around such a cycle indefinitely, decreasing its cost toward -∞.

If there is no negative weight cycle, then Bellman-Ford returns the weight of the
shortest path along with the path itself.

Dijkstra's algorithm
Dijkstra's algorithm can be seen as a weighted generalization of breadth-first
search (which by itself is not a shortest path algorithm for weighted graphs):
instead of a plain queue, it repeatedly finalizes the unvisited vertex with the
smallest tentative distance. It does place one constraint on the graph: there
can be no negative weight edges. In exchange for this one constraint, Dijkstra
greatly improves on the runtime of Bellman-Ford.

Dijkstra's algorithm is also sometimes used to solve the all-pairs shortest
path problem, by simply running it from every vertex in V. Again, this requires
all edge weights to be non-negative.
Topological Sort
For graphs that are directed acyclic graphs (DAGs), a very useful tool emerges
for finding shortest paths. By performing a topological sort on the vertices in the
graph, the shortest path problem becomes solvable in linear time.

A topological sort is an ordering of all the vertices such that for each edge
(u,v) in E, u comes before v in the ordering. In a DAG, shortest paths are
always well defined because, even if there are negative weight edges, there can
be no negative weight cycles.

Floyd-Warshall algorithm
The Floyd-Warshall algorithm solves the all-pairs shortest path problem using
a dynamic programming approach. Negative edge weights are allowed, provided
there are no negative weight cycles.

Floyd-Warshall takes advantage of the following observation: the shortest path
from A to C is either the shortest path from A to B plus the shortest path from
B to C, or it is the shortest path from A to C that has already been found.
This may seem trivial, but it is what allows Floyd-Warshall to build shortest
paths from smaller shortest paths, in the classic dynamic programming way.

Johnson's Algorithm
While Floyd-Warshall works well for dense graphs (meaning many edges),
Johnson's algorithm works best for sparse graphs (meaning few edges). In
sparse graphs, Johnson's algorithm has a lower asymptotic running time
compared to Floyd-Warshall.

Johnson's algorithm takes advantage of the concept of reweighting: it first
reweights the edges so that all weights become non-negative, and then runs
Dijkstra's algorithm from many vertices to find the shortest paths.

Dijkstra's algorithm finds a shortest path tree from a single source node by
building up a set of nodes that have minimum distance from the source.
C Implementation of Dijkstra's Algorithm

#include <stdio.h>

#define INFINITY 9999
#define MAX 10

void dijkstra(int G[MAX][MAX], int n, int startnode);

int main()
{
    int G[MAX][MAX], i, j, n, u;

    printf("Enter no. of vertices:");
    scanf("%d", &n);
    printf("\nEnter the adjacency matrix:\n");
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            scanf("%d", &G[i][j]);
    printf("\nEnter the starting node:");
    scanf("%d", &u);
    dijkstra(G, n, u);
    return 0;
}

void dijkstra(int G[MAX][MAX], int n, int startnode)
{
    int cost[MAX][MAX], distance[MAX], pred[MAX];
    int visited[MAX], count, mindistance, nextnode, i, j;

    /* pred[] stores the predecessor of each node;
       count gives the number of nodes finalized so far */

    /* create the cost matrix: 0 in the input means "no edge" */
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            if (G[i][j] == 0)
                cost[i][j] = INFINITY;
            else
                cost[i][j] = G[i][j];

    /* initialize pred[], distance[] and visited[] */
    for (i = 0; i < n; i++) {
        distance[i] = cost[startnode][i];
        pred[i] = startnode;
        visited[i] = 0;
    }
    distance[startnode] = 0;
    visited[startnode] = 1;
    count = 1;

    while (count < n - 1) {
        mindistance = INFINITY;
        nextnode = startnode;
        /* nextnode is the unvisited node at minimum distance */
        for (i = 0; i < n; i++)
            if (distance[i] < mindistance && !visited[i]) {
                mindistance = distance[i];
                nextnode = i;
            }
        if (mindistance == INFINITY)   /* remaining nodes are unreachable */
            break;

        /* check if a better path exists through nextnode */
        visited[nextnode] = 1;
        for (i = 0; i < n; i++)
            if (!visited[i])
                if (mindistance + cost[nextnode][i] < distance[i]) {
                    distance[i] = mindistance + cost[nextnode][i];
                    pred[i] = nextnode;
                }
        count++;
    }

    /* print the path and distance of each node */
    for (i = 0; i < n; i++)
        if (i != startnode) {
            printf("\nDistance of node%d=%d", i, distance[i]);
            printf("\nPath=%d", i);
            j = i;
            do {
                j = pred[j];
                printf("<-%d", j);
            } while (j != startnode);
        }
}

Output:
Conclusion :
In this practical we studied Dijkstra's algorithm and its complexity, and
implemented it in a C program.
