The Matrix Tree Theorem
(a) G is a tree.
(d) There is a unique path (= walk with no repeated vertices) between any
two vertices.
A spanning subgraph of a graph G is a graph H with the same vertex set
as G, and such that every edge of H is an edge of G. If G has q edges, then
the number of spanning subgraphs of G is equal to 2^q, since we can choose
any subset of the edges of G to be the set of edges of H. (Note that multiple
edges between the same two vertices are regarded as distinguishable.) A
spanning subgraph which is a tree is called a spanning tree. Clearly G has a
spanning tree if and only if it is connected [why?]. An important invariant
of a graph G is its number of spanning trees, called the complexity of G and
denoted κ(G).
[Figure: a graph G on four vertices: edges a (left), b (top), c (right), d (bottom) form a square, with a diagonal edge e.]
Then G has eight spanning trees, namely, abc, abd, acd, bcd, abe, ace, bde, and
cde (where, e.g., abc denotes the spanning subgraph with edge set {a, b, c}).
det(A) det(B) (where det denotes determinant).¹ We want to extend this
formula to the case where A and B are rectangular matrices whose product
is a square matrix (so that det(AB) is defined). In other words, A will be an
m × n matrix and B an n × m matrix, for some m, n ≥ 1.
If A is an m × n matrix, with m ≤ n, and S is an m-element subset of {1, 2, . . . , n}, then A[S] denotes the m × m submatrix of A obtained by taking the columns indexed by S; similarly, if B is an n × m matrix, then B[S] denotes the m × m submatrix of B obtained by taking the rows indexed by S. For example, if
$$A = \begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 6 & 7 & 8 & 9 & 10 \\ 11 & 12 & 13 & 14 & 15 \end{bmatrix}$$
and S = {2, 3, 5}, then
$$A[S] = \begin{bmatrix} 2 & 3 & 5 \\ 7 & 8 & 10 \\ 12 & 13 & 15 \end{bmatrix}.$$
¹In the “additional material” handout (Theorem 2.4) there is a more general determinantal formula without the use of the Binet-Cauchy theorem. However, the use of the Binet-Cauchy theorem does afford some additional algebraic insight.
$$\det(AB) = \sum_S \det(A[S]) \det(B[S]),$$
where S ranges over all m-element subsets of {1, 2, . . . , n}.
Before proceeding to the proof, let us give an example. We write |aij | for
the determinant of the matrix (aij ). Suppose
$$A = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix}, \qquad B = \begin{bmatrix} c_1 & d_1 \\ c_2 & d_2 \\ c_3 & d_3 \end{bmatrix}.$$
Then
$$\det(AB) = \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} \cdot \begin{vmatrix} c_1 & d_1 \\ c_2 & d_2 \end{vmatrix} + \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} \cdot \begin{vmatrix} c_1 & d_1 \\ c_3 & d_3 \end{vmatrix} + \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} \cdot \begin{vmatrix} c_2 & d_2 \\ c_3 & d_3 \end{vmatrix}.$$
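The identity is easy to test numerically. The following sketch checks it for a 2 × 3 matrix A and a 3 × 2 matrix B; the integer entries are arbitrary illustration values, not from the text.

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the tiny matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A is m x n, B is n x m, with m = 2 and n = 3.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 10],
     [8, 11],
     [9, 12]]
m, n = 2, 3

lhs = det(matmul(A, B))
# Sum over all m-element subsets S of column indices of A (= row indices of B).
rhs = sum(det([[A[i][j] for j in S] for i in range(m)]) *
          det([[B[j][k] for k in range(m)] for j in S])
          for S in combinations(range(n), m))
print(lhs, rhs)   # both sides equal 54 for these matrices
```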
Take the determinant of both sides of (2). The first matrix on the left-hand
side is an upper triangular matrix with 1’s on the main diagonal. Hence its
determinant is one. Since the determinant of a product of square matrices is
the product of the determinants of the factors, we get
$$\det \begin{bmatrix} A & O_{mm} \\ -I_n & B \end{bmatrix} = \det \begin{bmatrix} O_{mn} & AB \\ -I_n & B \end{bmatrix}. \qquad (3)$$
It is straightforward but somewhat tedious to verify that all the signs are +;
we omit the details. This completes the proof. □
A is a real symmetric matrix (and hence has real eigenvalues) whose trace
is the number of loops in G.
We now define two matrices related to A(G). Assume for simplicity that
G has no loops. (This assumption is harmless since loops have no effect on
κ(G).)
(b) The laplacian matrix L(G) of G is the p × p matrix whose (i, j)-entry L_{ij} is given by
$$L_{ij} = \begin{cases} -m_{ij}, & \text{if } i \neq j \text{ and there are } m_{ij} \text{ edges between } v_i \text{ and } v_j, \\ \deg(v_i), & \text{if } i = j, \end{cases}$$
where deg(vi ) is the number of edges incident to vi . (Thus L(G) is symmetric
and does not depend on the orientation o.)
Note that every column of M(G) contains one 1, one −1, and p − 2 0's; and hence the sum of the entries in each column is 0. Thus all the rows sum to the 0 vector, a linear dependence relation which shows that rank(M(G)) < p. Two further properties of M(G) and L(G) are given by the following lemma.
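Both the column-sum property and the standard relation L(G) = M(G)M(G)^t can be checked on the earlier example graph. In the sketch below the edge endpoints and the orientation (each edge (u, v) directed u → v) are assumptions made for illustration.

```python
# Incidence matrix of the example graph under one chosen orientation.
edges = [(1, 4), (1, 2), (2, 3), (3, 4), (1, 3)]   # edges a, b, c, d, e
p, q = 4, len(edges)

M = [[0] * q for _ in range(p)]
for j, (u, v) in enumerate(edges):
    M[u - 1][j] = 1       # +1 at the tail of the arrow
    M[v - 1][j] = -1      # -1 at the head

# Every column has one 1, one -1, and p - 2 zeros, hence sums to 0.
assert all(sum(M[i][j] for i in range(p)) == 0 for j in range(q))

# M M^t reproduces the laplacian: degrees on the diagonal, -m_ij off it.
L = [[sum(M[i][k] * M[j][k] for k in range(q)) for j in range(p)]
     for i in range(p)]
print(L)
```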
Hence if G (or A(G)) has eigenvalues λ1 , . . . , λp , then L(G) has eigenvalues
d − λ1 , . . . , d − λp .
(b) Clear by (a), since the diagonal elements of MM^t are all equal to d. □
Now assume that G is connected, and let M_0(G) be M(G) with its last row removed. Thus M_0(G) has p − 1 rows and q columns. Note that the number of rows is equal to the number of edges in a spanning tree of G. We call M_0(G) the reduced incidence matrix of G. The next result tells us the determinants (up to sign) of all (p − 1) × (p − 1) submatrices N of M_0. Such submatrices are obtained by choosing a set S of p − 1 edges of G, and taking all columns of M_0 indexed by the edges in S. Thus this submatrix is just M_0[S].
Proof. If S is not the set of edges of a spanning tree, then some subset R of S forms the edges of a cycle C in G. Consider the submatrix M_0[R] of M_0[S] obtained by taking the columns indexed by edges in R. Suppose that the cycle C defined by R has edges f_1, . . . , f_s in that order. Multiply the column of M_0[R] indexed by f_j by 1 if in going around C we traverse f_j in the direction of its arrow; otherwise multiply the column by −1. These column multiplications will multiply the determinant of M_0[R] by ±1. It is easy to see (check a few small examples to convince yourself) that every row of this modified M_0[R] has the sum of its elements equal to 0. Hence the sum of all the columns is 0. Thus in M_0[S] we have a set of columns for which a linear combination with coefficients ±1 is 0 (the column vector of all 0's). Hence the columns of M_0[S] are linearly dependent, so det M_0[S] = 0, as claimed.
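The full dichotomy (determinant ±1 when S is a spanning tree, 0 otherwise) can be verified exhaustively on the earlier example graph; as before, the endpoints and orientation are assumptions read off the figure, and any consistent choice gives the same conclusion.

```python
from itertools import combinations

def det(M):
    """Cofactor expansion; fine for 3 x 3 matrices."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

names = ['a', 'b', 'c', 'd', 'e']
endpoints = {'a': (1, 4), 'b': (1, 2), 'c': (2, 3), 'd': (3, 4), 'e': (1, 3)}
p = 4

# Reduced incidence matrix M_0: drop the row of the last vertex.
M0 = [[0] * len(names) for _ in range(p - 1)]
for j, name in enumerate(names):
    u, v = endpoints[name]
    if u != p:
        M0[u - 1][j] = 1
    if v != p:
        M0[v - 1][j] = -1

spanning_trees = {'abc', 'abd', 'abe', 'acd', 'ace', 'bcd', 'bde', 'cde'}
for S in combinations(range(len(names)), p - 1):
    label = ''.join(names[j] for j in S)
    d = det([[M0[i][j] for j in S] for i in range(p - 1)])
    if label in spanning_trees:
        assert abs(d) == 1      # spanning tree: determinant is +-1
    else:
        assert d == 0           # S contains a cycle: determinant vanishes
print("all 10 submatrices behave as claimed")
```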
We have now assembled all the ingredients for the main result of this
section (due originally to Borchardt). Recall that κ(G) denotes the number
of spanning trees of G.
Since $A^t[S] = A[S]^t$, equation (4) becomes
$$\det(L_0) = \sum_S (\det M_0[S])^2. \qquad (5)$$
The operation of removing a row and column from L(G) may seem some-
what contrived. We would prefer a description of κ(G) directly in terms of
L(G). Such a description will follow from the next lemma.
1.9 Lemma. Let M be a p × p matrix with real entries such that the sum of the entries in every row and column is 0. Let M_0 be the matrix obtained from M by removing the last row and last column (or, more generally, any row and any column). Then the coefficient of x in the characteristic polynomial det(M − xI) of M is equal to −p · det(M_0). (Moreover, the constant term of det(M − xI) is 0.)
For simplicity we prove the rest of the lemma only for removing the last row and column, though the proof works just as well for any row and column. Add all the rows of M − xI except the last row to the last row. This doesn't affect the determinant, and will change the entries of the last row all to −x (since the rows of M sum to 0). Factor out −x from the last row, yielding a matrix N(x) satisfying det(M − xI) = −x det(N(x)). Hence the coefficient of x in det(M − xI) is given by −det(N(0)). Now add all the columns of N(0) except the last column to the last column. This does not affect det(N(0)). Because the columns of M sum to 0, the last column of N(0) becomes the column vector [0, 0, . . . , 0, p]^t. Expanding the determinant by the last column shows that det(N(0)) = p · det(M_0), and the proof follows. □
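The lemma is easy to test on a concrete matrix. The sketch below takes the laplacian of the earlier example graph (whose rows and columns sum to 0) and computes the characteristic polynomial exactly with the Faddeev–LeVerrier recurrence; that algorithm is an implementation choice, not taken from the text.

```python
from fractions import Fraction

def charpoly(A):
    """Faddeev-LeVerrier: return [c_0, ..., c_p] with
    det(xI - A) = c_0 x^p + c_1 x^(p-1) + ... + c_p and c_0 = 1."""
    p = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    N = [[Fraction(0)] * p for _ in range(p)]
    coeffs = [Fraction(1)]
    for k in range(1, p + 1):
        for i in range(p):
            N[i][i] += coeffs[-1]                        # N + c_{k-1} I
        N = [[sum(A[i][m] * N[m][j] for m in range(p))   # A (N + c_{k-1} I)
              for j in range(p)] for i in range(p)]
        coeffs.append(-sum(N[i][i] for i in range(p)) / k)
    return coeffs

# Laplacian of the example graph: every row and column sums to 0.
M = [[ 3, -1, -1, -1],
     [-1,  2, -1,  0],
     [-1, -1,  3, -1],
     [-1,  0, -1,  2]]
p = 4
c = charpoly(M)

# M_0: remove the last row and column, then take a 3 x 3 determinant.
M0 = [row[:-1] for row in M[:-1]]
det_M0 = (M0[0][0] * (M0[1][1] * M0[2][2] - M0[1][2] * M0[2][1])
          - M0[0][1] * (M0[1][0] * M0[2][2] - M0[1][2] * M0[2][0])
          + M0[0][2] * (M0[1][0] * M0[2][1] - M0[1][1] * M0[2][0]))

# det(M - xI) = (-1)^p det(xI - M); p = 4 is even, so the coefficient of x
# is c[p - 1], and the lemma predicts it equals -p * det(M_0).
# The constant term c[p] should be 0, since det(M) = 0.
print(c[p - 1], -p * det_M0, c[p])
```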
0. Then
$$\kappa(G) = \frac{1}{p}\, \mu_1 \mu_2 \cdots \mu_{p-1}.$$
(b) Suppose that G is also regular of degree d, and that the eigenvalues of A(G) are λ_1, . . . , λ_{p−1}, λ_p, with λ_p = d. Then
$$\kappa(G) = \frac{1}{p}(d - \lambda_1)(d - \lambda_2) \cdots (d - \lambda_{p-1}).$$
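Part (b) can be illustrated on K_4, whose adjacency spectrum 3, −1, −1, −1 is well known; the same answer also falls out of the determinant of the reduced laplacian, as a cross-check.

```python
def det(M):
    """Cofactor expansion along the first row (tiny matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

p, d = 4, 3                    # K_4 is regular of degree 3
adj_eigs = [3, -1, -1, -1]     # well-known spectrum of A(K_4)

# (b): kappa = (1/p) * product of d - lambda_i over the p - 1 eigenvalues != d.
kappa_eig = 1
for lam in adj_eigs:
    if lam != d:
        kappa_eig *= d - lam
kappa_eig //= p

# Cross-check: determinant of the reduced laplacian L_0 of K_4.
L0 = [[3 if i == j else -1 for j in range(p - 1)] for i in range(p - 1)]
kappa_det = det(L0)
print(kappa_eig, kappa_det)    # both give 16 = 4^(4-2)
```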
Proof. (a) We have
$$A(K_p) + I = J,$$
if asked to do so. There are many other proofs known, including elegant
combinatorial arguments due to Prüfer, Joyal, Pitman, and others.
1.12 Example. The n-cube C_n is the graph with vertex set Z_2^n (the set of all n-tuples of 0's and 1's), and two vertices u and v are connected by an edge if they differ in exactly one component. Now C_n is regular of degree n, and it can be shown that its eigenvalues are n − 2i with multiplicity $\binom{n}{i}$ for 0 ≤ i ≤ n. (See the solution to Exercise 10.18(d) of the text.) Hence from Corollary 1.10(b) there follows the amazing result
$$\kappa(C_n) = \frac{1}{2^n} \prod_{i=1}^{n} (2i)^{\binom{n}{i}} = 2^{2^n - n - 1} \prod_{i=1}^{n} i^{\binom{n}{i}}.$$
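The formula can be confirmed for small n by computing the determinant of the reduced laplacian of C_n directly; the sketch below does exactly that, with exact rational arithmetic so no rounding is involved.

```python
from fractions import Fraction
from math import comb

def det(M):
    """Exact determinant via Fraction-based Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def cube_kappa(n):
    """kappa(C_n) as det of the reduced laplacian of the n-cube."""
    p = 2 ** n
    L = [[0] * p for _ in range(p)]
    for u in range(p):
        for i in range(n):
            v = u ^ (1 << i)       # flipping one coordinate = one edge
            L[u][u] += 1
            L[u][v] -= 1
    L0 = [row[:-1] for row in L[:-1]]
    return det(L0)

for n in [1, 2, 3]:
    formula = 2 ** (2 ** n - n - 1)
    for i in range(1, n + 1):
        formula *= i ** comb(n, i)
    print(n, cube_kappa(n), formula)   # n = 3 gives 384 both ways
```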
18.314 (Fall 2011)
Problems on the Matrix-Tree Theorem
[Figure: a planar graph G.]
Let G′ be the dual graph G∗ with the “outside” vertex deleted. (The
vertices of G∗ are the regions of G. For each edge e of G, say with
regions R and R′ on the two sides of e, there is an edge of G∗ between
R and R′ .) For the above example, G′ is given by
[Figure: the graph G′ obtained from the dual G∗ of the above graph by deleting the “outside” vertex.]
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.