In this chapter, we consider the problem of finding shortest paths between all pairs of vertices in a graph. This problem might arise in making a table of distances between all pairs of cities for a road atlas. As in Chapter 25, we are given a weighted, directed graph *G* = (*V*, *E*) with a weight function *w* : *E* → **R** that maps edges to real-valued weights. We wish to find, for every pair of vertices *u*, *v* ∈ *V*, a shortest (least-weight) path from *u* to *v*, where the weight of a path is the sum of the weights of its constituent edges. We typically want the output in tabular form: the entry in *u*'s row and *v*'s column should be the weight of a shortest path from *u* to *v*.

To solve the all-pairs shortest-paths problem on an input adjacency matrix, we need to compute not only the shortest-path weights but also a *predecessor matrix* Π = (π_{*ij*}), where π_{*ij*} is NIL if either *i* = *j* or there is no path from *i* to *j*, and otherwise π_{*ij*} is some predecessor of *j* on a shortest path from *i*. For each vertex *i* ∈ *V*, the *predecessor subgraph* of *G* for *i* is *G*_{π,*i*} = (*V*_{π,*i*}, *E*_{π,*i*}), where

*V*_{π,*i*} = {*j* ∈ *V* : π_{*ij*} ≠ NIL} ∪ {*i*}

and

*E*_{π,*i*} = {(π_{*ij*}, *j*) : *j* ∈ *V*_{π,*i*} and π_{*ij*} ≠ NIL} .

PRINT-ALL-PAIRS-SHORTEST-PATH(Π, *i*, *j*)

1  if *i* = *j*
2     then print *i*
3     else if π_{*ij*} = NIL
4            then print "no path from" *i* "to" *j* "exists"
5            else PRINT-ALL-PAIRS-SHORTEST-PATH(Π, *i*, π_{*ij*})
6                 print *j*
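The recursion above can be sketched in Python. This version is a hypothetical helper, not part of the text: it collects the path's vertices into a list instead of printing them, with `None` standing in for NIL.

```python
def all_pairs_path(pi, i, j):
    """Return the vertices on a shortest path from i to j, using the
    predecessor matrix pi (pi[i][j] is the predecessor of j on a shortest
    path from i, or None for NIL). Returns None if no path exists.
    Mirrors PRINT-ALL-PAIRS-SHORTEST-PATH but builds a list."""
    if i == j:
        return [i]
    if pi[i][j] is None:
        return None                      # "no path from i to j exists"
    prefix = all_pairs_path(pi, i, pi[i][j])
    return None if prefix is None else prefix + [j]
```

For example, with predecessors describing the path 0 → 1 → 2, `all_pairs_path(pi, 0, 2)` yields `[0, 1, 2]`.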

This section presents a dynamic-programming algorithm for the all-pairs shortest-paths problem on a directed graph *G* = (*V*, *E*). Each major loop of the dynamic program will invoke an operation that is very similar to matrix multiplication, so that the algorithm will look like repeated matrix multiplication. We shall start by developing a Θ(*V*^{4})-time algorithm for the all-pairs shortest-paths problem and then improve its running time to Θ(*V*^{3} lg *V*).

1. Characterize the structure of an optimal solution.

2. Recursively define the value of an optimal solution.

3. Compute the value of an optimal solution in a bottom-up fashion.

We start by characterizing the structure of an optimal solution. For the all-pairs shortest-paths problem on a graph *G* = (*V*, *E*), we have proved (Lemma 25.1) that all subpaths of a shortest path are shortest paths. Suppose that the graph is represented by an adjacency matrix *W* = (*w _{ij}*). Consider a shortest path

The latter equality follows since *w _{jj}* = 0 for all *j*.

EXTEND-SHORTEST-PATHS(*D*, *W*)

1  *n* ← rows[*D*]
2  let *D*′ = (*d*′_{*ij*}) be an *n* × *n* matrix
3  for *i* ← 1 to *n*
4       do for *j* ← 1 to *n*
5              do *d*′_{*ij*} ← ∞
6                 for *k* ← 1 to *n*
7                      do *d*′_{*ij*} ← min(*d*′_{*ij*}, *d*_{*ik*} + *w*_{*kj*})
8  return *D*′
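A direct Python transcription of this procedure might look as follows — a sketch assuming each matrix is a list of lists, with `float('inf')` representing ∞ for missing edges:

```python
INF = float("inf")

def extend_shortest_paths(D, W):
    """One step of the dynamic program: given D, the matrix of
    shortest-path weights using at most m - 1 edges, and the edge-weight
    matrix W, return D' with weights using at most m edges."""
    n = len(D)
    Dp = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Relax: extend a path i ~> k by the edge (k, j).
                Dp[i][j] = min(Dp[i][j], D[i][k] + W[k][j])
    return Dp
```

On the 3-vertex graph with edges 0→1 of weight 3 and 1→2 of weight 2, one extension of *W* by itself discovers the two-edge path 0→1→2 of weight 5.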

Observe that if we make the substitutions

*d*^{(m-1)} → *a* ,

*w* → *b* ,

*d*^{(m)} → *c* ,

min → + ,

+ → ·

MATRIX-MULTIPLY(*A*, *B*)

1  *n* ← rows[*A*]
2  let *C* be an *n* × *n* matrix
3  for *i* ← 1 to *n*
4       do for *j* ← 1 to *n*
5              do *c*_{*ij*} ← 0
6                 for *k* ← 1 to *n*
7                      do *c*_{*ij*} ← *c*_{*ij*} + *a*_{*ik*} · *b*_{*kj*}
8  return *C*

*D*^{(1)} = *D*^{(0)} · *W* = *W* ,

*D*^{(2)} = *D*^{(1)} · *W* = *W*^{2} ,

*D*^{(3)} = *D*^{(2)} · *W* = *W*^{3} ,

⋮

*D*^{(n-1)} = *D*^{(n-2)} · *W* = *W*^{n-1} .

SLOW-ALL-PAIRS-SHORTEST-PATHS(*W*)

1  *n* ← rows[*W*]
2  *D*^{(1)} ← *W*
3  for *m* ← 2 to *n* - 1
4       do *D*^{(m)} ← EXTEND-SHORTEST-PATHS(*D*^{(m-1)}, *W*)
5  return *D*^{(n-1)}
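This procedure can be sketched in Python (hypothetical helper names; matrices as lists of lists with `float('inf')` for ∞, and a local `extend` standing in for EXTEND-SHORTEST-PATHS):

```python
INF = float("inf")

def extend(D, W):
    """Min-plus 'product' of D and W, as in EXTEND-SHORTEST-PATHS."""
    n = len(D)
    return [[min(D[i][k] + W[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def slow_apsp(W):
    """SLOW-ALL-PAIRS-SHORTEST-PATHS: n - 2 successive extensions,
    Theta(n^4) time overall."""
    n = len(W)
    D = W
    for _ in range(2, n):          # m = 2, 3, ..., n - 1
        D = extend(D, W)
    return D
```

Each pass lengthens the paths considered by one edge, so after the loop all paths of up to *n* - 1 edges have been accounted for.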

*D*^{(1)} = *W* ,

*D*^{(2)} = *W*^{2} = *W* · *W* ,

*D*^{(4)} = *W*^{4} = *W*^{2} · *W*^{2} ,

*D*^{(8)} = *W*^{8} = *W*^{4} · *W*^{4} ,

⋮

*D*^{(2^{⌈lg(n-1)⌉})} = *W*^{2^{⌈lg(n-1)⌉}} = *W*^{2^{⌈lg(n-1)⌉-1}} · *W*^{2^{⌈lg(n-1)⌉-1}} .

Since 2^{⌈lg(n-1)⌉} ≥ *n* - 1, the final product *D*^{(2^{⌈lg(n-1)⌉})} is equal to *D*^{(n-1)}.

The following procedure computes the above sequence of matrices by using this technique of * repeated squaring*.

FASTER-ALL-PAIRS-SHORTEST-PATHS(*W*)

1  *n* ← rows[*W*]
2  *D*^{(1)} ← *W*
3  *m* ← 1
4  while *n* - 1 > *m*
5       do *D*^{(2m)} ← EXTEND-SHORTEST-PATHS(*D*^{(m)}, *D*^{(m)})
6          *m* ← 2*m*
7  return *D*^{(m)}
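The repeated-squaring version can be sketched the same way (hypothetical names; a local `extend` plays the role of EXTEND-SHORTEST-PATHS, and `float('inf')` represents ∞):

```python
INF = float("inf")

def extend(D, E):
    """Min-plus 'product' of two n x n matrices."""
    n = len(D)
    return [[min(D[i][k] + E[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faster_apsp(W):
    """FASTER-ALL-PAIRS-SHORTEST-PATHS: square D until the path-length
    bound m reaches n - 1, so only ceil(lg(n-1)) products are needed."""
    n = len(W)
    D = W
    m = 1
    while n - 1 > m:
        D = extend(D, D)       # D^(2m) from D^(m)
        m *= 2
    return D
```

On a 4-vertex chain 0→1→2→3 with unit weights, two squarings suffice to find the three-edge path of weight 3.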

Why do we require that *w _{ii}* = 0 for all 1 ≤ *i* ≤ *n*?

What does the matrix *D*^{(0)}, with 0's on the diagonal and ∞ everywhere else, used in the shortest-paths algorithms correspond to in regular matrix multiplication?

The FASTER-ALL-PAIRS-SHORTEST-PATHS procedure, as written, requires us to store ⌈lg(*n* - 1)⌉ matrices, each with *n*^{2} elements, for a total space requirement of Θ(*n*^{2} lg *n*). Modify the procedure to require only Θ(*n*^{2}) space by using only two *n* × *n* matrices.

Modify FASTER-ALL-PAIRS-SHORTEST-PATHS to detect the presence of a negative-weight cycle.

In this section, we shall use a different dynamic-programming formulation to solve the all-pairs shortest-paths problem on a directed graph *G* = (*V*, *E*). The resulting algorithm, known as the *Floyd-Warshall algorithm*, runs in Θ(*V*^{3}) time.

In the Floyd-Warshall algorithm, we use a different characterization of the structure of a shortest path than we used in the matrix-multiplication-based all-pairs algorithms. The algorithm considers the "intermediate" vertices of a shortest path, where an *intermediate* vertex of a simple path *p* = ⟨*v*_{1}, *v*_{2}, . . . , *v _{l}*⟩ is any vertex of *p* other than *v*_{1} or *v _{l}*, that is, any vertex in the set {*v*_{2}, *v*_{3}, . . . , *v*_{*l*-1}}.

Based on recurrence (26.5), the following bottom-up procedure can be used to compute the values in order of increasing values of *k*. Its input is an *n* × *n* matrix *W* defined as in equation (26.1). The procedure returns the matrix *D*^{(n)} of shortest-path weights.
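The bottom-up computation can be sketched in Python. This sketch keeps a single matrix and updates it in place rather than storing the full sequence *D*^{(0)}, . . . , *D*^{(n)}; it assumes the input is a list of lists with `float('inf')` for ∞:

```python
INF = float("inf")

def floyd_warshall(W):
    """After iteration k of the outer loop, D[i][j] is the weight of a
    shortest path from i to j whose intermediate vertices are all
    drawn from {0, ..., k} -- the recurrence computed bottom-up."""
    n = len(W)
    D = [row[:] for row in W]          # D^(0) = W
    for k in range(n):                 # admit vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

The triple loop performs Θ(*n*^{3}) constant-time comparisons, matching the stated running time.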

Given a directed graph *G* = (*V*, *E*) with vertex set *V* = {1, 2, . . . , *n*}, we may wish to find out whether there is a path in *G* from *i* to *j* for all vertex pairs *i*, *j* ∈ *V*. The *transitive closure* of *G* is defined as the graph *G** = (*V*, *E**), where

*E** = {(*i*, *j*) : there is a path from vertex *i* to vertex *j* in *G*} .

As in the Floyd-Warshall algorithm, we compute the matrices in order of increasing *k*.

TRANSITIVE-CLOSURE(G)
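A Python sketch of this computation follows. The `transitive_closure` helper and its interface (a vertex count and an edge list, vertices numbered from 0) are assumptions for illustration; it substitutes logical OR and AND for the min and + of the Floyd-Warshall loops:

```python
def transitive_closure(n, edges):
    """Return the Boolean matrix T with t[i][j] True iff there is a
    path from i to j in the graph (vertices 0 .. n-1)."""
    edge_set = set(edges)
    # T^(0): a path with no intermediate vertices is a single edge,
    # or the trivial path when i = j.
    t = [[i == j or (i, j) in edge_set for j in range(n)]
         for i in range(n)]
    for k in range(n):                 # admit vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t
```

Using Booleans instead of weights saves space and replaces arithmetic with cheap bit operations, which is the practical advantage of computing the closure this way.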

Run the Floyd-Warshall algorithm on the weighted, directed graph of Figure 26.2. Show the matrix *D*^{(k)} that results for each iteration of the outer loop.

FLOYD-WARSHALL′(*W*)

1  *n* ← rows[*W*]
2  *D* ← *W*
3  for *k* ← 1 to *n*
4       do for *i* ← 1 to *n*
5              do for *j* ← 1 to *n*
6                     do *d*_{*ij*} ← min(*d*_{*ij*}, *d*_{*ik*} + *d*_{*kj*})
7  return *D*

Suppose that we modify the way in which equality is handled in equation (26.7):

Is this alternative definition of the predecessor matrix correct?

How can the output of the Floyd-Warshall algorithm be used to detect the presence of a negative-weight cycle?

Give an *O*(*V E*)-time algorithm for computing the transitive closure of a directed graph *G* = (*V*, *E*).

Johnson's algorithm finds shortest paths between all pairs in *O*(*V*^{2} lg *V* + *V E*) time; it is thus asymptotically better than either repeated squaring of matrices or the Floyd-Warshall algorithm for sparse graphs. The algorithm either returns a matrix of shortest-path weights for all pairs or reports that the input graph contains a negative-weight cycle. Johnson's algorithm uses as subroutines both Dijkstra's algorithm and the Bellman-Ford algorithm, which are described in Chapter 25.

Johnson's algorithm uses the technique of *reweighting*, which works as follows. If all edge weights *w* in a graph *G* = (*V*, *E*) are nonnegative, we can find shortest paths between all pairs of vertices by running Dijkstra's algorithm once from each vertex.

2. For all edges (*u*, *v*), the new weight *ŵ*(*u*, *v*) is nonnegative.

*Proof* We start by showing that

The third equality follows from the telescoping sum on the second line.

and thus *c* has negative weight using *w* if and only if it has negative weight using *ŵ*.
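As a small numerical check of this telescoping argument, the following sketch reweights a hypothetical 3-cycle with an arbitrary function *h* and confirms that the cycle's total weight is unchanged (the edge weights and *h* values here are made up for illustration):

```python
def reweight(edges, w, h):
    """Johnson-style reweighting: w_hat(u, v) = w(u, v) + h(u) - h(v).
    Around any cycle the h terms telescope to zero, so every cycle
    keeps its original weight -- including any negative-weight cycle."""
    return {(u, v): w[(u, v)] + h[u] - h[v] for (u, v) in edges}

# Hypothetical cycle 0 -> 1 -> 2 -> 0 and an arbitrary potential h.
edges = [(0, 1), (1, 2), (2, 0)]
w = {(0, 1): 4, (1, 2): -2, (2, 0): 1}
h = {0: 0, 1: 7, 2: 3}

w_hat = reweight(edges, w, h)
# The individual edge weights change, but the cycle weight does not.
assert sum(w_hat[e] for e in edges) == sum(w[e] for e in edges) == 3
```

This is exactly why reweighting is safe: shortest paths and negative-weight cycles are preserved even though every individual edge weight may change.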