
Introduction to Algorithms

6.046J/18.401J
LECTURE 16
Shortest Paths III
• All-pairs shortest paths
• Matrix-multiplication algorithm
• Floyd-Warshall algorithm
• Johnson’s algorithm

Prof. Charles E. Leiserson


Shortest paths
Single-source shortest paths
• Nonnegative edge weights
 Dijkstra’s algorithm: O(E + V lg V)
• General
 Bellman-Ford: O(VE)
• DAG
 One pass of Bellman-Ford: O(V + E)
All-pairs shortest paths
• Nonnegative edge weights
 Dijkstra’s algorithm |V| times: O(VE + V^2 lg V)
• General
 Three algorithms today.
All-pairs shortest paths
Input: Digraph G = (V, E), where V = {1, 2,
…, n}, with edge-weight function w : E → R.
Output: n × n matrix of shortest-path lengths
δ(i, j) for all i, j ∈ V.
IDEA:
• Run Bellman-Ford once from each vertex.
• Time = O(V^2 E).
• Dense graph (n^2 edges) ⇒ Θ(n^4) time in the worst case.
Good first try!
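For reference, here is a minimal Python sketch of this first idea. The edge-list representation and the function names are illustrative assumptions, not part of the lecture.

INF = float("inf")

def bellman_ford(n, edges, s):
    # edges: list of (u, v, w) triples with vertices numbered 0 .. n-1
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):          # |V| - 1 passes
        for u, v, w in edges:
            if d[u] + w < d[v]:     # relax edge (u, v)
                d[v] = d[u] + w
    return d

def all_pairs_bellman_ford(n, edges):
    # O(V^2 E): one Bellman-Ford computation per source vertex
    return [bellman_ford(n, edges, s) for s in range(n)]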
Dynamic programming
Consider the n × n adjacency matrix A = (aij)
of the digraph, and define
dij(m) = weight of a shortest path from
i to j that uses at most m edges.
Claim: We have
    dij(0) = 0 if i = j,  ∞ if i ≠ j;
and for m = 1, 2, …, n – 1,
    dij(m) = mink {dik(m–1) + akj}.

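The recurrence maps directly onto a triply nested loop. Below is a minimal Python sketch of one step, computing D(m) from D(m–1) and the adjacency matrix A; the list-of-lists representation and the name extend are my own, with the convention aii = 0 and aij = ∞ when (i, j) ∉ E.

INF = float("inf")

def extend(D_prev, A):
    # d_ij(m) = min over k of ( d_ik(m-1) + a_kj )
    n = len(A)
    D = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if D_prev[i][k] + A[k][j] < D[i][j]:
                    D[i][j] = D_prev[i][k] + A[k][j]
    return D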


Proof of claim
dij(m) = mink {dik(m–1) + akj}

[Figure: a shortest path from i to j that uses ≤ m edges consists of a path of ≤ m – 1 edges
from i to some intermediate vertex k, followed by the single edge (k, j); minimize over all k.]

Relaxation!
for k ← 1 to n
    do if dij > dik + akj
        then dij ← dik + akj

Note: No negative-weight cycles implies
δ(i, j) = dij(n–1) = dij(n) = dij(n+1) = ⋯
Matrix multiplication
Compute C = A · B, where C, A, and B are n × n matrices:
    cij = ∑k=1..n aik bkj .
Time = Θ(n^3) using the standard algorithm.
What if we map “+” → “min” and “·” → “+”?
cij = mink {aik + bkj}.
Thus, D(m) = D(m–1) “×” A.
                      ⎡ 0  ∞  ∞  ∞ ⎤
                      ⎢ ∞  0  ∞  ∞ ⎥
Identity matrix = I = ⎢ ∞  ∞  0  ∞ ⎥ = D(0) = (dij(0)).
                      ⎣ ∞  ∞  ∞  0 ⎦
Matrix multiplication
(continued)
The (min, +) multiplication is associative, and
with the real numbers, it forms an algebraic
structure called a closed semiring.
Consequently, we can compute
D(1) = D(0) · A = A^1
D(2) = D(1) · A = A^2
      ⋮
D(n–1) = D(n–2) · A = A^(n–1) ,
yielding D(n–1) = (δ(i, j)).
Time = Θ(n·n^3) = Θ(n^4). No better than running Bellman-Ford once from each vertex.
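Reusing the extend step sketched above, the whole Θ(n^4) computation is a short loop (a sketch under the same assumptions as before):

def slow_apsp(A):
    # D(1) = A, then D(m) = D(m-1) "x" A for m = 2, ..., n-1
    n = len(A)
    D = A
    for _ in range(n - 2):
        D = extend(D, A)
    return D     # D(n-1) = (delta(i, j)) if there are no negative-weight cycles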
Improved matrix multiplication algorithm
Repeated squaring: A^(2k) = A^k × A^k.
Compute A^2, A^4, …, A^(2^⌈lg(n–1)⌉).
O(lg n) squarings.
Note: A^(n–1) = A^n = A^(n+1) = ⋯ .
Time = Θ(n^3 lg n).
To detect negative-weight cycles, check the
diagonal for negative values in O(n) additional
time.
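A minimal Python sketch of the repeated-squaring version, again reusing extend from above. It squares until paths of at least n edges are covered, which is harmless when there is no negative-weight cycle and lets the diagonal check catch cycles of up to n edges; the names are illustrative.

def faster_apsp(A):
    # Theta(n^3 lg n): square the matrix O(lg n) times
    n = len(A)
    D = A
    m = 1
    while m < n:
        D = extend(D, D)     # A^(2m) = A^m "x" A^m
        m *= 2
    # Negative-weight cycle iff some diagonal entry has become negative
    if any(D[i][i] < 0 for i in range(n)):
        raise ValueError("negative-weight cycle detected")
    return D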
Floyd-Warshall algorithm
Also dynamic programming, but faster!

Define cij(k) = weight of a shortest path from i to j
with intermediate vertices belonging to the set {1, 2, …, k}.

[Figure: a path from i to j whose intermediate vertices all lie in {1, 2, …, k}.]

Thus, δ(i, j) = cij(n). Also, cij(0) = aij.



Floyd-Warshall recurrence
cij(k) = min {cij(k–1), cik(k–1) + ckj(k–1)}

[Figure: either the shortest i-to-j path avoids vertex k, with weight cij(k–1), or it passes
through k, splitting into an i-to-k piece of weight cik(k–1) and a k-to-j piece of weight
ckj(k–1); all intermediate vertices lie in {1, 2, …, k}.]


Pseudocode for Floyd-Warshall
for k ← 1 to n
    do for i ← 1 to n
        do for j ← 1 to n
            do if cij > cik + ckj
                then cij ← cik + ckj      ⊳ relaxation

Notes:
• Okay to omit superscripts, since extra relaxations
can’t hurt.
• Runs in Θ(n^3) time.
• Simple to code.
• Efficient in practice.
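The pseudocode above translates almost line for line into Python. A minimal sketch, assuming the same matrix convention as before (cii = 0, cij = ∞ where there is no edge):

def floyd_warshall(A):
    n = len(A)
    C = [row[:] for row in A]          # copy so the input is left untouched
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if C[i][k] + C[k][j] < C[i][j]:
                    C[i][j] = C[i][k] + C[k][j]     # relaxation
    return C     # C[i][j] = delta(i, j) when there are no negative-weight cycles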
Transitive closure of a directed graph
Compute tij = 1 if there exists a path from i to j,
              0 otherwise.
IDEA: Use Floyd-Warshall, but with (∨, ∧) instead
of (min, +):
tij(k) = tij(k–1) ∨ (tik(k–1) ∧ tkj(k–1)).
Time = Θ(n^3).

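The same loop with Boolean values gives the transitive closure; a sketch assuming a Boolean adjacency matrix adj with adj[i][i] = True:

def transitive_closure(adj):
    n = len(adj)
    T = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = T[i][j] or (T[i][k] and T[k][j])
    return T     # T[i][j] is True iff there is a path from i to j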


Graph reweighting
Theorem. Given a function h : V → R, reweight each
edge (u, v) ∈ E by wh(u, v) = w(u, v) + h(u) – h(v).
Then, for any two vertices, all paths between them are
reweighted by the same amount.
Proof. Let p = v1 → v2 → ⋯ → vk be a path in G. We have
    wh(p) = ∑i=1..k–1 wh(vi, vi+1)
          = ∑i=1..k–1 (w(vi, vi+1) + h(vi) – h(vi+1))
          = ∑i=1..k–1 w(vi, vi+1) + h(v1) – h(vk)      ⊳ the h terms telescope
          = w(p) + h(v1) – h(vk).
The correction h(v1) – h(vk) is the same amount for every path from v1 to vk.
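A tiny numeric check of the telescoping argument on a made-up 3-edge path with made-up potentials h (purely illustrative values):

# p = 1 -> 2 -> 3 -> 4
w = {(1, 2): 3, (2, 3): -1, (3, 4): 5}      # original edge weights
h = {1: 0, 2: 7, 3: 2, 4: 4}                # an arbitrary potential function

w_p  = sum(w.values())                                  # w(p)  = 7
wh_p = sum(w[u, v] + h[u] - h[v] for (u, v) in w)       # wh(p) = 3
assert wh_p == w_p + h[1] - h[4]                        # 7 + 0 - 4 = 3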
Shortest paths in reweighted graphs
Corollary. δh(u, v) = δ(u, v) + h(u) – h(v).

IDEA: Find a function h : V → R such that
wh(u, v) ≥ 0 for all (u, v) ∈ E. Then, run
Dijkstra’s algorithm from each vertex on the
reweighted graph.
NOTE: wh(u, v) ≥ 0 iff h(v) – h(u) ≤ w(u, v).



Johnson’s algorithm
1. Find a function h : V → R such that wh(u, v) ≥ 0 for
all (u, v) ∈ E by using Bellman-Ford to solve the
difference constraints h(v) – h(u) ≤ w(u, v), or
determine that a negative-weight cycle exists.
• Time = O(V E).
2. Run Dijkstra’s algorithm using wh from each vertex
u ∈ V to compute δh(u, v) for all v ∈ V.
• Time = O(V E + V^2 lg V).
3. For each (u, v) ∈ V × V, compute
δ(u, v) = δh(u, v) – h(u) + h(v).
• Time = O(V^2).
Total time = O(V E + V^2 lg V).
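Putting the three steps together, here is a compact Python sketch of Johnson's algorithm; the edge-list representation and all names are illustrative. It inlines a Bellman-Ford pass from an added source connected to every vertex by a zero-weight edge, and a binary-heap Dijkstra via heapq.

import heapq

INF = float("inf")

def johnson(n, edges):
    # edges: list of (u, v, w) triples; vertices are 0 .. n-1
    # Step 1: Bellman-Ford from a new source s = n with zero-weight edges to every vertex
    aug = edges + [(n, v, 0) for v in range(n)]
    h = [INF] * (n + 1)
    h[n] = 0
    for _ in range(n):                      # |V| - 1 = n passes on the augmented graph
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    if any(h[u] + w < h[v] for u, v, w in aug):
        raise ValueError("negative-weight cycle detected")

    # Reweight: wh(u, v) = w(u, v) + h(u) - h(v) >= 0
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    # Step 2: Dijkstra from each vertex, using the nonnegative weights wh
    def dijkstra(s):
        dist = [INF] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    # Step 3: undo the reweighting: delta(u, v) = delta_h(u, v) - h(u) + h(v)
    D = [[INF] * n for _ in range(n)]
    for u in range(n):
        dh = dijkstra(u)
        for v in range(n):
            if dh[v] < INF:
                D[u][v] = dh[v] - h[u] + h[v]
    return D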
