A STUDY ON SPECTRAL PROPERTIES OF GRAPHS
Dissertation submitted to the University of Kerala in partial fulfillment of the requirements for the award of the degree of
MASTER OF SCIENCE
IN
MATHEMATICS
Submitted by
ANAND S S
REG NO: 83721612006
DR. SREEKUMAR K G
Department of Mathematics
University of Kerala
2021-2023
CERTIFICATE
DR. SREEKUMAR K G
Postdoctoral Fellow
Department of Mathematics
University of Kerala
Countersigned by
DECLARATION
I hereby declare that the work which is being presented in the project entitled A STUDY
ON SPECTRAL PROPERTIES OF GRAPHS in partial fulfillment of the requirement for
the award of the degree of Master of Science in Mathematics, submitted in the Department of
Mathematics, University of Kerala, is a review work carried out by me under the supervision of
DR. SREEKUMAR K G. The matter embodied in this project has not been submitted by me for
the award of any other degree.
Kariavattom
August 2023
ANAND S S
Reg No: 83721612006
Department of Mathematics
University of Kerala
ACKNOWLEDGEMENT
Kariavattom
ANAND S S
Reg No: 83721612006
Department of Mathematics
University of Kerala
Contents
1 Introduction 6
2 Preliminaries 7
2.1 Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
6 CONCLUSION 52
Chapter 1
Introduction
In Mathematics, spectral graph theory is the study of a graph in relationship to the characteristic polynomial, eigenvalues and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. The most commonly studied matrix in spectral graph theory is the adjacency matrix. Spectral graph theory emerged in the 1950s and 1960s. Besides graph theoretic research on the relationship between structural and spectral properties of graphs, another major source was research in quantum chemistry, but the connections between these two lines of work were not discovered until much later. “Spectra of Graphs”, written by Cvetković, Doob, and Sachs in 1980, summarised nearly all research to date in the area. Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph. For this dissertation we mainly use the definitions and results from the books “Graphs and Matrices” by Ravindra B. Bapat and “A Textbook of Graph Theory” by R. Balakrishnan and K. Ranganathan.
In the second chapter, we discuss the basic concepts in graph theory and linear algebra that are useful in spectral graph theory. In the third chapter, we discuss the adjacency matrix, the eigenvalues of some fundamental graphs, their corresponding spectra, and the determinant of the adjacency matrix. The fourth chapter consists of applications of the spectrum of the adjacency matrix, such as bounds on eigenvalues, the chromatic number, the energy of a graph and the eigenvalues of regular graphs. The fifth chapter deals with Laplacian matrices and their eigenvalues, and with applications of graph spectra in various sciences.
Chapter 2
Preliminaries
In this chapter we collect the basic definitions which are needed for subsequent chapters.
2.1 Graph Theory
Definition 2.1.1
A graph is a pair G = (V, E), where V is a set whose elements are called vertices and E is a set of paired vertices, whose elements are called edges. Each edge has a set of one or two vertices associated to it, which are called its endpoints.
Definition 2.1.2
A self-loop is an edge that joins a single endpoint to itself. A multi-edge is a collection of two
or more edges having identical endpoints. A simple graph has neither self-loops nor multi-edges.
Definition 2.1.3
The number of edges incident on a vertex vi, with self-loops counted twice, is called the degree
d(vi ) of vertex vi . A set of edges in a graph G is independent if no two edges in the set are
adjacent.
Definition 2.1.4
A walk in a graph is a sequence of alternating vertices and edges that starts and ends at a
vertex. A walk of length n is a walk with n edges. Consecutive vertices in the sequence must
be connected by an edge in the graph.
Definition 2.1.5
A closed walk is a sequence of alternating vertices and edges that starts and ends at the same
vertex. A cycle is a closed walk which contains any edge at most once.
Definition 2.1.6
Definition 2.1.7
A complete graph Kn is a connected graph on n vertices where all vertices are of degree n-1.
In other words, there is an edge between a vertex and every other vertex. A complete graph has n(n−1)/2 edges.
Definition 2.1.8
A cycle graph is a connected graph on n vertices where all vertices are of degree 2. A cycle
graph can be created from a path graph by connecting the two pendant vertices in the path by
an edge. A cycle has an equal number of vertices and edges.
Definition 2.1.9
A bipartite graph is a graph on n vertices where the vertices are partitioned into two
independent sets, V1 and V2 such that there are no edges between vertices in the same set.
Definition 2.1.10
A complete bipartite graph Kp,q is a bipartite graph in which there is an edge between every vertex in V1 and every vertex in V2.
Definition 2.1.11
A graph in which all vertices are of equal degree is called a regular graph. A graph G is
r-regular if deg v=r for all v ∈ V.
Definition 2.1.12
A subgraph H of a graph G is a graph whose vertex set and edge set are subsets of the vertex set and edge set of G.
Definition 2.1.13
Let G=(V,E) be a graph. Let S be any subset of vertices of G. Then the induced subgraph G[S]
is the graph whose vertex set is S and whose edge set consists of all of the edges in E that have
both endpoints in S.
2.2 Linear Algebra
Definition 2.2.1
An m × n matrix consists of mn real numbers arranged in m rows and n columns. The entry in row i and column j of the matrix A is denoted by aij.
Definition 2.2.2
A diagonal matrix is a square matrix A such that aij = 0 for i ≠ j. We denote the diagonal matrix with diagonal entries λ1, ..., λn by diag(λ1, ..., λn). When λi = 1 for all i, this matrix reduces to the identity matrix of order n, which we denote by In. The matrix A is upper triangular if aij = 0 for i > j. The transpose of an upper triangular matrix is lower triangular.
Definition 2.2.3
Let A be a square matrix of order n. The entries a11, ..., ann are said to constitute the (main) diagonal of A. The trace of A is defined as
trace A = a11 + a22 + ... + ann.
It follows from this definition that if A, B are matrices such that both AB and BA are defined, then
trace AB = trace BA.
Definition 2.2.4
The determinant of a square matrix A of order n is defined as
det A = Σπ sgn(π) a1π(1) a2π(2) ... anπ(n),
where the summation is over all permutations π(1), ..., π(n) of 1, ..., n, and sgn(π) is 1 or −1 according as π is even or odd.
Definition 2.2.5
Let A be an m ×n matrix. The subspace of Rm spanned by the column vectors of A is called the
column space or the column span of A. Similarly the subspace Rn spanned by the row vectors of
A is called the row space of A.
According to the fundamental theorem of linear algebra, the dimension of the column space of a
matrix equals the dimension of the row space, and the common value is called the rank of the
matrix. We denote the rank of the matrix by rank A.
For any matrix A, rank A = rank A′. If A and B are matrices of the same order, then rank(A + B) ≤ rank A + rank B. If A and B are matrices such that AB is defined, then rank AB ≤ min{rank A, rank B}.
Let A be an m × n matrix. The set of all vectors x∈ Rn such that Ax=0 is easily seen to be a
subspace of Rn. This subspace is called the null space of A, and we denote it by N(A). The dimension
of N (A) is called the nullity of A. Let A be an m × n matrix. Then the nullity of A equals
n-rank A.
Definition 2.2.6
Vectors x, y in Rn are said to be orthogonal, or perpendicular, if x′y = 0. A set of vectors {x1, ..., xm} in Rn is said to form an orthonormal basis for the vector space S if the set is a basis for S and, furthermore, x′i xj is 0 if i ≠ j, and 1 if i = j. The n × n matrix P is said to be orthogonal if P′P = PP′ = I.
Definition 2.2.7
A principal minor of a square matrix is one where the indices of the deleted rows are the same as the indices of the deleted columns.
Remark 2.2.8
Let G be a connected graph with vertices {v1, v2, ..., vn} and let A be the adjacency matrix of G. Then,
• A is symmetric.
• The sum of the 3 × 3 principal minors of A equals twice the number of triangles in the graph.
Definition 2.2.9
Let A be an n × n matrix. A scalar λ is called an eigenvalue of A if there exists a nonzero vector x such that
Ax = λx, x ≠ 0.
Such a vector x is called an eigenvector of A corresponding to the eigenvalue λ.
Chapter 3
In this chapter, we discuss properties of graphs that can be read off from their eigenvalues. The set of eigenvalues of a graph G is known as the spectrum of G and is denoted by Sp(G). We compute the spectra of some well-known families of graphs: the family of complete graphs, the family of cycles, etc.
3.1 Adjacency Matrix
Definition 3.1.1
Let G be a graph of order n with vertex set V = {v1, v2, ..., vn}. The adjacency matrix of G (with respect to this labeling of V) is the n × n matrix A = (aij), where
aij = 1 if vi is adjacent to vj in G,
      0 otherwise.
Example 3.1.2
Consider the graph G with vertex set {1, 2, 3, 4, 5} and edges {1,2}, {2,3}, {3,4}, {1,4}, {3,5}, {4,5}. Its adjacency matrix is
A(G) =
[ 0 1 0 1 0 ]
[ 1 0 1 0 0 ]
[ 0 1 0 1 1 ]
[ 1 0 1 0 1 ]
[ 0 0 1 1 0 ]
Clearly A is a symmetric matrix with zeros on the diagonal. For i ≠ j, the principal submatrix of A formed by the rows and the columns i, j is the zero matrix if i ≁ j, and otherwise it equals
[ 0 1 ]
[ 1 0 ].
The determinant of this matrix is −1.
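These computations are easy to check numerically. The following is a minimal sketch, assuming NumPy is available; the matrix is the one displayed above and the variable names are chosen only for illustration.

```python
import numpy as np

# Adjacency matrix of the example graph on vertices 1, ..., 5
A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]])

# A is real symmetric, so eigvalsh returns real eigenvalues in ascending order
eigenvalues = np.linalg.eigvalsh(A)
print("eigenvalues:", np.round(eigenvalues, 4))
print("sum of eigenvalues (trace of A):", round(float(eigenvalues.sum()), 8))  # 0
```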
Definition 3.1.3
Let G be a connected graph with vertices {1, 2, ..., n}. The distance d(i,j) between the vertices i
and j is defined as the minimum length of an (ij)-path. We set d(i,i)=0. The maximum value of
d(i,j) is the diameter of G.
Note 3.1.4
We have Ak{i,j} represents the number of walks of length k from vertex vi to vertex vj . This
means, given vertices vi and vj with d(vi , vj )=m in G, a graph with adjacency matrix A, we have
Aki,j =0 for 0 ≤ k < m and Aki,j ̸= 0 otherwise.
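This walk-counting property is easy to observe numerically; a small sketch, assuming NumPy and reusing the graph of Example 3.1.2 purely for illustration:

```python
import numpy as np

A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]])

# The (i, j)-entry of A^k counts walks of length k between the corresponding vertices.
# Here d(v1, v5) = 2, so the (1,5)-entry of A is 0 while that of A^2 is nonzero.
A2 = np.linalg.matrix_power(A, 2)
print("(A^1)[v1, v5] =", A[0, 4])   # 0: no walk of length 1
print("(A^2)[v1, v5] =", A2[0, 4])  # 1: the walk v1-v4-v5
```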
Lemma 3.1.5
Let G be a connected graph with vertices {1,2,...,n} and let A be the adjacency matrix of G. If i,j
are vertices of G with d(i,j)=m, then the matrices I,A,...,Am are linearly independent.
Proof. We may assume i ≠ j. There is no (ij)-path of length less than m. Thus, the (i,j)-element of each of I, A, ..., A^{m−1} is zero, whereas the (i,j)-element of A^m is nonzero. Hence, the result follows. □
Definition 3.1.6
The minimal polynomial of a graph G is the monic polynomial q of smallest degree such that q(A) = 0, where A is the adjacency matrix of G. Since A is symmetric, q(λ) = Π(λ − λk), where the λk are the distinct eigenvalues of G. For example, if the characteristic polynomial is f(x) = x³(x − 5)²(x + 4)⁴, then the minimal polynomial is x(x − 5)(x + 4).
Lemma 3.1.7
Let G be a connected graph with k distinct eigenvalues and let d be the diameter of G. Then k > d.
Proof. Let A be the adjacency matrix of G. By Lemma 3.1.5, the matrices I, A, ..., A^d are linearly independent. Thus the degree of the minimal polynomial of A, which equals k since A is symmetric with k distinct eigenvalues, must exceed d. □
3.2 Characteristic Polynomial
Definition 3.2.1
The characteristic polynomial of a graph G, denoted ϕλ(G), is the characteristic polynomial det(λI − A) of its adjacency matrix A. Since det(λI − P′AP) = det(λI − A) for any permutation matrix P, ϕλ(G) is independent of the labeling of V(G). The general form of any characteristic polynomial is λ^n + c1λ^{n−1} + c2λ^{n−2} + ... + cn.
Theorem 3.2.2
The coefficients of the characteristic polynomial of the adjacency matrix A of a graph G have the following properties:
(i) c1 = 0;
(ii) −c2 is the number of edges of G;
(iii) −c3 is twice the number of triangles in G.
Proof. For each i ∈ {1, 2, ..., n}, the number (−1)^i ci is the sum of those principal minors of A which have i rows and columns. So we can argue as follows:
(i) Since the diagonal elements of A are all zero, c1 = 0.
(ii) A principal minor with two rows and columns which has a non-zero entry must be of the form
[ 0 1 ]
[ 1 0 ].
There is one such minor for each pair of adjacent vertices of G, and each has value −1. Hence (−1)²c2 = −|E(G)|.
(iii) There are essentially three possibilities for non-trivial principal minors with three rows and columns:
[ 0 1 0 ]   [ 0 1 1 ]   [ 0 1 1 ]
[ 1 0 0 ],  [ 1 0 0 ],  [ 1 0 1 ].
[ 0 0 0 ]   [ 1 0 0 ]   [ 1 1 0 ]
Of these, the only one with non-zero determinant is the last (whose value is 2). This principal minor corresponds to three mutually adjacent vertices in G, and so we have the required description of c3. □
Example 3.2.3
Consider the complete graph K3 on vertices {1, 2, 3}. The adjacency matrix is
A =
[ 0 1 1 ]
[ 1 0 1 ]
[ 1 1 0 ].
Then the characteristic polynomial is
ϕλ = det(λI − A) = det
[  λ −1 −1 ]
[ −1  λ −1 ]
[ −1 −1  λ ]
= λ³ − 3λ − 2.
Here
(i) there is no λ² term, so c1 = 0;
(ii) −c2 = 3, and the number of edges of G is 3;
(iii) −c3 = 2, which is twice the number of triangles in G (G has exactly one triangle).
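The coefficients in this example can be recovered directly from the adjacency matrix; a short sketch assuming NumPy (np.poly returns the coefficients of the characteristic polynomial of a square matrix):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])                    # adjacency matrix of K3

coeffs = np.poly(A)                           # [1, c1, c2, c3] of det(lambda*I - A)
c1, c2, c3 = np.round(coeffs[1:]).astype(int)
print("c1 =", c1)                             # 0
print("-c2 =", -c2, "(number of edges)")      # 3
print("-c3 =", -c3, "(twice the number of triangles)")  # 2
```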
Theorem 3.2.4
For any positive integer n, the eigenvalues of Kn are n − 1 with multiplicity 1 and -1 with
multiplicity n − 1.
Proof. First consider Jn, the n × n matrix of all ones. It is a symmetric rank 1 matrix, and hence it
has only one nonzero eigenvalue, which must equal the trace. Thus, the eigenvalues of Jn are n
with multiplicity 1 and 0 with multiplicity n-1. Since A(Kn )= Jn − In , the eigenvalues of A(Kn )
must be n − 1 with multiplicity 1 and -1 with multiplicity n − 1. □
Example 3.2.5
For K3 , eigenvalues are 2 with multiplicity 1 and -1 with multiplicity 2. i.e., eigenvalues are 2, -1,
-1.
For K4 , Eigenvalues are 3 with multiplicity 1 and -1 with multiplicity 3. i.e., 3, -1, -1, -1.
Theorem 3.2.6
For any positive integers p, q, the eigenvalues of Kp,q are √pq and −√pq, each with multiplicity 1, and 0 with multiplicity p + q − 2.
Proof. With the vertices of Kp,q labeled so that the two parts come first and second, the adjacency matrix has the block form
A(Kp,q) =
[ 0    Jpq ]
[ Jqp  0   ],
where Jpq and Jqp are matrices of all ones of the appropriate size. Now rank A(Kp,q) = 2, and hence A(Kp,q) must have precisely two nonzero eigenvalues. These must be of the form λ and −λ, since the trace of A(Kp,q) is zero. As noted earlier, the sum of the 2 × 2 principal minors of A(Kp,q) is the negative of the number of edges, that is, −pq. This sum also equals the sum of the products of the eigenvalues taken two at a time, which is −λ². Thus λ² = pq and the eigenvalues must be √pq, −√pq and 0 with multiplicity p + q − 2. □
Definition 3.2.7
For a positive integer n ≥ 2, let Qn be the full cycle permutation matrix of order n. Thus, the (i, i+1)-element of Qn is 1 for i = 1, 2, ..., n−1, the (n, 1)-element of Qn is 1, and the remaining elements of Qn are zero.
Lemma 3.2.8
For n ≥ 2, the eigenvalues of Qn are 1, ω, ω², ..., ω^{n−1}, where ω = e^{2πi/n} is a primitive nth root of unity.
Proof. The characteristic polynomial of Qn is λ^n − 1. Clearly, the roots of this characteristic polynomial are the nth roots of unity. □
For a positive integer n, Cn and Pn will denote the cycle and the path on n vertices, respectively.
Theorem 3.2.9
For n ≥ 3, the eigenvalues of Cn are 2cos(2πk/n), k = 1, ..., n.
Proof. Label the vertices of Cn as 1, 2, ..., n so that i is adjacent to i + 1 for i = 1, ..., n−1 and n is adjacent to 1. Hence,
A =
[ 0 1 0 0 ... 0 1 ]
[ 1 0 1 0 ... 0 0 ]
[ . . . . ... . . ]
[ 1 0 0 0 ... 1 0 ].
Note that A(Cn) = Qn + Q′n = Qn + Qn^{−1} is a polynomial in Qn. Thus the eigenvalues of A(Cn) are obtained by evaluating the same polynomial at each of the eigenvalues of Qn. Thus, by Lemma 3.2.8, the eigenvalues of A(Cn) are ω^k + ω^{n−k}, k = 1, ..., n. Also
ω^k + ω^{n−k} = ω^k + ω^n ω^{−k} = e^{2πik/n} + e^{−2πik/n} = 2cos(2πk/n),
k = 1, ..., n, and hence the proof. □
Theorem 3.2.10
For n ≥ 1, the eigenvalues of Pn are 2cos(πk/(n+1)), k = 1, ..., n.
Proof. Let λ be an eigenvalue of Pn with eigenvector x = (x1, ..., xn)′, so that A(Pn)x = λx. Writing out the tridiagonal system, this says
A(Pn)x = ( x2, x1 + x3, x2 + x4, ..., xn−2 + xn, xn−1 )′ = λ( x1, x2, ..., xn−1, xn )′.
Now, for y = ( −xn, −xn−1, ..., −x2, −x1 )′,
A(Pn)y = ( −xn−1, −xn − xn−2, −xn−1 − xn−3, ..., −x3 − x1, −x2 )′ = λ( −xn, −xn−1, ..., −x1 )′.
Therefore y is an eigenvector of A(Pn) for the same eigenvalue λ.
It may be verified that
(x1 , ..., xn , 0, −xn , ..., −x1 , 0)
and
(0, x1 , ..., xn , 0, −xn , ..., −x1 )
are two linearly independent eigenvectors of A(C2n+2 ) for the same eigenvalue. We illustrate this
by an example. Suppose x = (x1, x2, x3)′ is an eigenvector of A(P3) for the eigenvalue λ. Then
[ 0 1 0 ] [ x1 ]     [ x1 ]
[ 1 0 1 ] [ x2 ] = λ [ x2 ].
[ 0 1 0 ] [ x3 ]     [ x3 ]
We obtain an eigenvector of A(C8) for the same eigenvalue, since, with the vertices of C8 labelled consecutively, it may be verified that
A(C8) ( x1, x2, x3, 0, −x3, −x2, −x1, 0 )′ = λ ( x1, x2, x3, 0, −x3, −x2, −x1, 0 )′.
Continuing with the proof, we have established that each eigenvalue of Pn must be an eigenvalue of C2n+2 of multiplicity 2. By Theorem 3.2.9, the eigenvalues of C2n+2 are 2cos(2πk/(2n+2)) = 2cos(πk/(n+1)), k = 1, ..., 2n+2. Of these, the eigenvalues that appear twice, in view of the periodicity of the cosine, are 2cos(πk/(n+1)), k = 1, ..., n, which must be the eigenvalues of Pn. □
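The eigenvalue formula for Pn is easily confirmed numerically, for instance for P4; a minimal sketch assuming NumPy:

```python
import numpy as np

n = 4
A = np.zeros((n, n))
for i in range(n - 1):                       # path 1 - 2 - ... - n
    A[i, i + 1] = A[i + 1, i] = 1

computed = np.sort(np.linalg.eigvalsh(A))
predicted = np.sort([2 * np.cos(np.pi * k / (n + 1)) for k in range(1, n + 1)])
print(np.round(computed, 4))
print(np.round(predicted, 4))                # the two lists agree
```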
3.3 Spectrum of a Graph
Let G be a graph of order n and let A be its adjacency matrix. Since A is a real symmetric matrix, its eigenvalues are real. The multiset of eigenvalues of A is called the spectrum of G and is denoted by Sp(G).
Remark 3.3.2
Example 3.3.3
Example 3.3.4
Example 3.3.5
Example 3.3.6
For a complete bipartite graph Kp,q, the spectrum of Kp,q is given by
Sp(Kp,q) = ( 0       √pq   −√pq )
           ( p+q−2   1      1   ),
where the first row lists the eigenvalues and the second row their multiplicities. For K3,2, the spectrum is given by
Sp(K3,2) = ( 0   √6   −√6 )
           ( 3   1     1  ).
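Theorem 3.2.6 and this example can be verified numerically; a sketch assuming NumPy, with the block structure of A(K_{p,q}) written out explicitly:

```python
import numpy as np

p, q = 3, 2
A = np.block([[np.zeros((p, p)), np.ones((p, q))],
              [np.ones((q, p)), np.zeros((q, q))]])   # adjacency matrix of K_{3,2}

print(np.round(np.linalg.eigvalsh(A), 4))
# approximately [-2.4495, 0, 0, 0, 2.4495], i.e. -sqrt(6), 0 (three times), sqrt(6)
```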
3.4 Determinant
Definition 3.4.1
A linear subgraph (also called an elementary subgraph) of a graph G is a subgraph of G whose components are single edges or cycles.
Theorem 3.4.2 (Harary)
Let A be the adjacency matrix of a simple graph G with vertex set V = {v1, ..., vn}. Then
det A = ΣH (−1)^{n − c1(H) − c(H)} 2^{c(H)},
where the summation is over all spanning elementary subgraphs H of G, and c1(H) and c(H) denote, respectively, the number of edge components and the number of cycles in H.
Proof. Let G be of order n with V = {v1, ..., vn} and A = (aij). A typical term in the expansion of det A is
sgn(π) a1π(1) a2π(2) ... anπ(n).
Such a term is nonzero only when every factor aiπ(i) is nonzero, and in that case the cycles of the permutation π of length two or more correspond to the single edges and cycles of a spanning elementary subgraph H of G. Each cycle v1, v2, ..., vt, v1 can also be traversed as v1, vt, vt−1, ..., v2, v1, so each cycle of H can be associated with a cyclic permutation in two ways, and each spanning elementary subgraph H gives rise to 2^{c(H)} terms in the summation. This proves the result. □
Remark 3.4.3
where the summation is over all spanning linear subgraphs H of G, and c(H) and e(H) denote the number of components of H which are cycles and the number of even components, respectively.
Example 3.4.4
Let G be the 4-cycle with vertices 1, 2, 3, 4 and edges {1,2}, {2,3}, {3,4}, {4,1}. Then the spanning linear subgraphs of G are H1, H2, H3, where H1 is the 4-cycle itself and H2, H3 are the two perfect matchings {12, 34} and {23, 41}.
In H1, c(H1) = 1, c1(H1) = 0; in H2 we get c(H2) = 0, c1(H2) = 2; and for H3, c(H3) = 0, c1(H3) = 2. Hence, by Theorem 3.4.2,
det A = (−1)^{4−0−1} 2¹ + (−1)^{4−2−0} 2⁰ + (−1)^{4−2−0} 2⁰
      = −2 + 1 + 1
      = 0.
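The value obtained from the subgraph expansion can be cross-checked by computing the determinant directly; a sketch assuming NumPy, with the 4-cycle labelled as in the example:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])                 # adjacency matrix of the 4-cycle

print(np.round(np.linalg.det(A), 6))          # 0, agreeing with the expansion above
print(np.round(np.linalg.eigvalsh(A), 4))     # eigenvalues -2, 0, 0, 2
```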
Theorem 3.4.5
Let G be a graph with vertices {1, 2, ..., n} and let A be the adjacency matrix of G. Let λ^n + c1λ^{n−1} + ... + cn be the characteristic polynomial of A. Then, for k = 1, ..., n,
ck = ΣH (−1)^{c1(H)+c(H)} 2^{c(H)},
where the summation is over all elementary subgraphs H of G with k vertices.
Proof. Observe that ck is (−1)^k times the sum of the principal minors of A of order k, k = 1, 2, ..., n. By Theorem 3.4.2,
ck = (−1)^k ΣH (−1)^{k − c1(H) − c(H)} 2^{c(H)},
where the summation is over all the elementary subgraphs H of G with k vertices. Hence,
ck = ΣH (−1)^{c1(H)+c(H)} 2^{c(H)}. □
Corollary 3.4.6
Let G be a graph with vertices 1, 2, ..., n and let A be the adjacency matrix of G. Let λ^n + c1λ^{n−1} + ... + cn be the characteristic polynomial of A. If c3 = c5 = ... = c2k−1 = 0, then c2k+1 equals (−2) times the number of (2k+1)-cycles in G.
Proof. Since c3 = 0, there are no triangles in G. Thus, any elementary subgraph of G with 5 vertices must consist of a single 5-cycle. It follows by Theorem 3.4.5 that if c5 = 0, then there are no 5-cycles in G. Continuing this way, we find that if c3 = c5 = ... = c2k−1 = 0, then any elementary subgraph of G with 2k+1 vertices must be a (2k+1)-cycle. Furthermore, by Theorem 3.4.5,
c2k+1 = ΣH (−1)^{c1(H)+c(H)} 2^{c(H)},
where the summation is over all (2k+1)-cycles H in G. For any (2k+1)-cycle H, c1(H) = 0 and c(H) = 1. Therefore, c2k+1 is (−2) times the number of (2k+1)-cycles in G, which completes the proof. □
Corollary 3.4.7
Using the notation of Corollary 3.4.6, if c2k+1 =0, k=0,1,..., then G is bipartite.
Proof. If c2k+1 = 0 for k = 0, 1, 2, ..., then by Corollary 3.4.6, G has no odd cycles and hence G must be bipartite. □
Lemma 3.4.8
Let G be a bipartite graph and let A be the adjacency matrix of G. If λ is an eigenvalue of A with multiplicity k, then −λ is also an eigenvalue of A with multiplicity k.
Theorem 3.4.9
Let G be a graph with vertices 1,2,...,n and let A be the adjacency matrix of G. Then the
following conditions are equivalent.
(i) G is bipartite;
(ii) if ϕλ (A) = λn + c1 λn−1 + ... + cn is the characteristic polynomial of A, then c2k+1 =0, k=0,1,...
(iii) the eigenvalues of A are symmetric with respect to the origin, i.e, if λ is an eigenvalue of A
with multiplicity k, then -λ is also an eigenvalue of A with multiplicity k.
Proof. (i) ⇒ (iii)
Assume that G is bipartite, then by lemma 3.4.8, if λ is an eigenvalue of A with multiplicity k,
then −λ is also an eigenvalue of A with multiplicity k.Therefore, the eigenvalues of A are
symmetric with respect to the origin.
(iii) ⇒ (ii)
Let λ1, ..., λk, −λ1, ..., −λk be the nonzero eigenvalues of A. Here λ1, ..., λk are not necessarily
distinct. Then 0 is an eigenvalue of A with multiplicity n-2k. The characteristic polynomial of A
equals λn−2k (λ2 − λ21 )...(λ2 − λ2k ). It follows that c2k+1 =0, k=0,1,..., and hence (ii) holds.
(ii) ⇒ (i)
Suppose that (ii) holds. Then by Corollary 3.4.7, G is bipartite. □
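The symmetry criterion of Theorem 3.4.9 can be observed by comparing an odd cycle with an even (hence bipartite) cycle. A minimal sketch assuming NumPy; the helper function is introduced only for this illustration:

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of the cycle C_n."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

for n in (5, 6):
    eigs = np.sort(np.linalg.eigvalsh(cycle_adjacency(n)))
    symmetric = np.allclose(eigs, -eigs[::-1])
    print(f"C_{n}: eigenvalues {np.round(eigs, 3)}, symmetric about 0: {symmetric}")
# C_5 (odd cycle, not bipartite): not symmetric; C_6 (bipartite): symmetric
```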
Chapter 4
In this chapter, we discuss applications of the eigenvalues of graphs. We begin with an easy bound for the largest eigenvalue of a graph, and later discuss the energy of a graph, a concept borrowed from chemistry.
4.1 Bounds
Theorem 4.1.1
Let G be a graph with n vertices, m edges and let λ1 ≥ λ2 ≥ ... ≥ λn be the eigenvalues of G. Then λ1 ≤ (2m(n−1)/n)^{1/2}.
Proof. We have Σᵢ λi = 0 and Σᵢ λi² = 2m. Therefore,
λ1 = −Σ_{i=2}^{n} λi,
and hence λ1 ≤ Σ_{i=2}^{n} |λi|. By the Cauchy–Schwarz inequality,
2m − λ1² = Σ_{i=2}^{n} λi² ≥ (1/(n−1)) (Σ_{i=2}^{n} |λi|)² ≥ λ1²/(n−1).
Hence,
2m ≥ λ1² (1 + 1/(n−1)) = λ1² (n/(n−1)),
and therefore λ1² ≤ 2m(n−1)/n. Then λ1 ≤ (2m(n−1)/n)^{1/2}. □
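As a quick numerical illustration of this bound (NumPy assumed; the graph of Example 3.1.2, with n = 5 and m = 6, is reused only for illustration):

```python
import numpy as np

A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]])
n = A.shape[0]
m = int(A.sum()) // 2

lam1 = np.linalg.eigvalsh(A)[-1]              # largest eigenvalue
bound = np.sqrt(2 * m * (n - 1) / n)
print(f"lambda_1 = {lam1:.4f} <= {bound:.4f} = sqrt(2m(n-1)/n)")
```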
Remark 4.1.2
The following interlacing result holds. Let A be a symmetric n × n matrix and let B be a principal submatrix of A of order p. If λ1 ≥ λ2 ≥ ... ≥ λn and µ1 ≥ µ2 ≥ ... ≥ µp are the eigenvalues of A and B, respectively, then λi ≥ µi ≥ λn−p+i for i = 1, ..., p; this form is used in Lemma 4.1.4 below. A related interlacing result is as follows. Let A and B be symmetric n × n matrices and let A = B + xx′ for some vector x. If λ1 ≥ λ2 ≥ ... ≥ λn and µ1 ≥ µ2 ≥ ... ≥ µn are the eigenvalues of A and B respectively, then
λ1 ≥ µ1 ≥ λ2 ≥ µ2 ≥ ... ≥ λn ≥ µn.
Let A be a symmetric n × n matrix with eigenvalues λ1(A) ≥ λ2(A) ≥ ... ≥ λn(A), arranged in nonincreasing order, and let ∥x∥ denote the usual Euclidean norm (Σᵢ xi²)^{1/2}. The following extremal representations hold:
λ1(A) = max_{∥x∥=1} {x′Ax},
λn(A) = min_{∥x∥=1} {x′Ax}.
Note 4.1.3
Let G be a graph with n vertices and with eigenvalues λ1 ≥ λ2 ≥ ... ≥ λn . We denote λ1 and λn
by λ1(G) and λn(G), respectively. Similarly, λ1(B) and λn(B) will denote the largest and smallest
eigenvalues of the symmetric matrix B.
Lemma 4.1.4
Let G be a graph with n vertices and let H be an induced subgraph of G with p vertices. Then λ1(G) ≥ λ1(H) and λn(G) ≤ λp(H).
Proof. Let λ1(G) ≥ λ2(G) ≥ ... ≥ λn(G) be the eigenvalues of G and λ1(H) ≥ λ2(H) ≥ ... ≥ λp(H) be the eigenvalues of H. Note that A(H) is a principal submatrix of A(G). By the interlacing result of Remark 4.1.2, λ1(G) ≥ λ1(H) and λp(H) ≥ λn(G), which is the required conclusion. □
Note 4.1.5
For a graph G, we denote by ∆(G) and δ(G), the maximum and the minimum of the vertex
degrees of G,respectively.
Theorem 4.1.6
For any graph G with n vertices and m edges, δ(G) ≤ λ1(G) ≤ ∆(G); moreover λ1(G) ≥ 2m/n.
Proof. By the extremal representation in Remark 4.1.2 with y = 1 (suitably normalized), λ1(G) ≥ (1′A1)/(1′1) = 2m/n, where m is the number of edges in G. If d1, d2, ..., dn are the vertex degrees of G, then 2m = d1 + ... + dn ≥ nδ(G) and
λ1(G) ≥ 2m/n ≥ nδ(G)/n = δ(G).
For the upper bound, let x be an eigenvector for λ1(G) and let xi be its largest component, which may be taken positive. Then λ1(G)xi = Σ_{j∼i} xj ≤ d(i)xi ≤ ∆(G)xi, so λ1(G) ≤ ∆(G). Hence δ(G) ≤ λ1(G) ≤ ∆(G). □
4.2 Chromatic Number
Definition 4.2.1
The chromatic number χ(G) of a graph G is the minimum number of colours required to colour the vertices so that adjacent vertices get distinct colours.
Theorem 4.2.2
For any graph G, χ(G) ≤ 1 + λ1(G).
Proof. The result is true for χ(G) = 1. Let χ(G) = k ≥ 2. Let H be an induced subgraph of G such that χ(H) = k and, furthermore, suppose that H is minimal with respect to the number of vertices. That is to say, χ(H \ {i}) < k for any vertex i of H.
We claim that δ(H) ≥ k − 1. Indeed, suppose i is a vertex of H with degree less than k − 1. Since χ(H \ {i}) < k, we may properly colour the vertices of H \ {i} with k − 1 colours. Since the degree of i is less than k − 1, we may extend the colouring to a proper (k−1)-colouring of H, a contradiction. Hence the degree of each vertex of H is at least k − 1 and therefore δ(H) ≥ k − 1. By Lemma 4.1.4 and Theorem 4.1.6 we have
λ1(G) ≥ λ1(H) ≥ δ(H) ≥ k − 1,
and hence χ(G) = k ≤ 1 + λ1(G). □
Theorem 4.2.3
Let B and C be symmetric n × n matrices. Then λ1(B + C) ≤ λ1(B) + λ1(C).
Proof. Using the extremal representation of Remark 4.1.2,
λ1(B + C) = max_{∥x∥=1} {x′(B + C)x}
          ≤ max_{∥x∥=1} {x′Bx} + max_{∥x∥=1} {x′Cx}
          = λ1(B) + λ1(C). □
Theorem 4.2.4
Let B be an n × n positive semidefinite matrix, partitioned as
B = [ B11  B12 ]
    [ B21  B22 ],
where B11 and B22 are square. Then λ1(B) ≤ λ1(B11) + λ1(B22).
Proof. Since B is positive semidefinite, we may write B = CC′ for some matrix C. Partition C into row blocks C1, C2 conformally with the partition of B, so that B11 = C1C1′ and B22 = C2C2′. Now,
λ1(B) = λ1(CC′)
      = λ1(C′C)
      = λ1(C1′C1 + C2′C2)
      ≤ λ1(C1′C1) + λ1(C2′C2)
      = λ1(C1C1′) + λ1(C2C2′)
      = λ1(B11) + λ1(B22). □
Theorem 4.2.5
Let B be a symmetric n × n matrix and suppose B is partitioned as
B = [ B11  B12 ]
    [ B21  B22 ],
where B11 is p × p and B22 is (n−p) × (n−p). Then λ1(B) + λn(B) ≤ λ1(B11) + λ1(B22).
Proof. We have
B − λn(B)In = [ B11 − λn(B)Ip    B12              ]
              [ B21              B22 − λn(B)In−p  ].
Since B − λn(B)In is positive semidefinite, by Theorem 4.2.4 we get
λ1(B) − λn(B) ≤ λ1(B11 − λn(B)Ip) + λ1(B22 − λn(B)In−p) = λ1(B11) + λ1(B22) − 2λn(B),
and hence
λ1(B) + λn(B) ≤ λ1(B11) + λ1(B22). □
Theorem 4.2.6
Let B be a symmetric n × n matrix, partitioned into k² blocks so that the diagonal blocks B11, ..., Bkk are square zero matrices. Then λ1(B) + (k − 1)λn(B) ≤ 0.
Proof. We prove the result by induction on k. When k = 2 the result follows by Theorem 4.2.5, so assume the result to be true for k − 1. Let C be the principal submatrix of B obtained by deleting the last row and column of blocks. If λmin(C) denotes the minimum eigenvalue of C, then by the induction assumption,
λ1(C) + (k − 2)λmin(C) ≤ 0.
By Theorem 4.2.5,
λ1(B) + λn(B) ≤ λ1(C) + λ1(Bkk) = λ1(C).
Since the minimum eigenvalue of a symmetric matrix does not exceed that of a principal submatrix,
λn(B) ≤ λmin(C).
Adding (k − 2) times this inequality to the previous one gives
λ1(B) + (k − 1)λn(B) ≤ λ1(C) + (k − 2)λmin(C) ≤ 0. □
Theorem 4.2.7
Let G be a graph with n vertices and with at least one edge. Then
χ(G) ≥ 1 − λ1(G)/λn(G).
Proof. Let A be the adjacency matrix of G. If χ(G) = k, then after a relabeling of the vertices of G, we may write
A = [ 0    A12  ···  A1k ]
    [ A21  0    ···  A2k ]
    [ ...            ... ]
    [ Ak1  Ak2  ···  0   ].
By Theorem 4.2.6,
λ1(A) + (k − 1)λn(A) ≤ 0.
If G has at least one edge, then the eigenvalues of G are not all equal to zero, and λn(A) < 0. Thus
χ(G) = k ≥ 1 − λ1(A)/λn(A) = 1 − λ1(G)/λn(G).
This completes the proof. □
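The two spectral bounds on the chromatic number obtained above, χ(G) ≤ 1 + λ1(G) and χ(G) ≥ 1 − λ1(G)/λn(G), can be checked on a small example such as the 5-cycle, whose chromatic number is 3. A minimal sketch assuming NumPy:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):                            # adjacency matrix of C5
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

eigs = np.linalg.eigvalsh(A)
lam1, lam_n = eigs[-1], eigs[0]
lower = 1 - lam1 / lam_n                      # lower bound: chi >= 1 - lambda_1/lambda_n
upper = 1 + lam1                              # upper bound: chi <= 1 + lambda_1
print(f"{lower:.3f} <= chi(C5) = 3 <= {upper:.3f}")
```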
4.3 Energy of a Graph
Definition 4.3.1
The energy of a graph G is the sum of the absolute values of its eigenvalues. Hence, if λ1, λ2, ..., λn are the eigenvalues of a graph G of order n, the energy E(G) of G is given by
E(G) = |λ1| + |λ2| + ... + |λn|.
Example 4.3.2
E(Kn ) = (n − 1) + (n − 1)| − 1|
= 2(n − 1)
For the complete graph K3 the eigenvalues of the adjacency matrix are -1,-1 and 2. So energy of
K3 = | − 1| + | − 1| + |2|=4.
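Energy values are easy to compute numerically; a sketch assuming NumPy, checking E(Kn) = 2(n − 1) for a few values of n:

```python
import numpy as np

def graph_energy(A):
    """Energy of a graph: sum of the absolute values of its adjacency eigenvalues."""
    return float(np.abs(np.linalg.eigvalsh(A)).sum())

for n in (3, 4, 5):
    A = np.ones((n, n)) - np.eye(n)           # adjacency matrix of K_n
    print(f"E(K_{n}) = {graph_energy(A):.4f},  2(n-1) = {2 * (n - 1)}")
```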
Definition 4.3.3
Let A and B be matrices of order m × n and p × q, respectively. The Kronecker product of A and B, denoted by A ⊗ B, is the mp × nq block matrix [aij B]. It follows from the definition that (A ⊗ B)(C ⊗ D) = AC ⊗ BD, whenever the products are defined.
Theorem 4.3.4
Let A and B be symmetric matrices of order m and n, with eigenvalues λ1, ..., λm and µ1, ..., µn, respectively. Then the eigenvalues of A ⊗ In + Im ⊗ B are λi + µj, i = 1, ..., m; j = 1, ..., n.
Proof. Let P and Q be orthogonal matrices such that
P′AP = diag(λ1, ..., λm),  Q′BQ = diag(µ1, ..., µn).
Now consider (P ⊗ Q)′(A ⊗ In + Im ⊗ B)(P ⊗ Q)
= (P′ ⊗ Q′)(A ⊗ In)(P ⊗ Q) + (P′ ⊗ Q′)(Im ⊗ B)(P ⊗ Q)
= P′AP ⊗ Q′Q + P′P ⊗ Q′BQ
= diag(λ1, ..., λm) ⊗ In + Im ⊗ diag(µ1, ..., µn).
The proof is complete in view of the fact that diag(λ1, ..., λm) ⊗ In + Im ⊗ diag(µ1, ..., µn) is a diagonal matrix with λi + µj, i = 1, ..., m; j = 1, ..., n, on the diagonal. □
Result 4.3.5
Theorem 4.3.6
Let G be a graph with n vertices. If the energy E(G) of G is a rational number, then it must be an even integer.
Proof. Let λ1, λ2, ..., λk be the positive eigenvalues of G. The trace of the adjacency matrix is zero, and hence the sum of the positive eigenvalues of G equals the sum of the absolute values of the negative eigenvalues of G. It follows from the definition of energy that E(G) = 2(λ1 + λ2 + ... + λk). Note that, by Theorem 4.3.4, λ1 + λ2 + ... + λk is an eigenvalue of G × G × ... × G, taken k times. The characteristic polynomial of the adjacency matrix is a monic polynomial with integer coefficients, and a rational root of such a polynomial must be an integer. Thus, if λ1 + λ2 + ... + λk is rational, then it must be an integer, and hence E(G) = 2(λ1 + ... + λk) must be an even integer. □
Definition 4.3.7
Graphs G of order n for which E(G) > 2(n − 1) are called hyperenergetic graphs.
Example 4.3.8
• Kn is nonhyperenergetic.
• If n ≥ 5 , L(Kn ) is hyperenergetic.
• If n ≥ 4, L(Kn,n) is hyperenergetic.
Lemma 4.4.1
Let G be a connected graph with n vertices, and let A be the adjacency matrix of G. Then
(I + A)n−1 > 0
Proof. Clearly, (I + A)n−1 ≥ I + A + A2 + ... + An−1 . Since G is connected, for any i ̸= j, there is
an (ij)-path, and the length of the path can be at most n−1. Thus, the (i,j)-element of
I + A + A2 + ... + An−1 is positive. If i=j, then clearly, the (i,j)-element of I + A + A2 + ... + An−1
is positive. Therefore, (I + A)n−1 > 0 and the proof is complete. □
Lemma 4.4.2
Let G be a connected graph with n vertices, and let A be the adjacency matrix of G. If x ≥ 0 is
an eigenvector of A, then x > 0.
Proof. If Ax = µx, then clearly,µ > 0. We have (I + A)n−1 x = (1 + µ)n−1 x. By Lemma 4.4.1,
(I + A)n−1 > 0 and it follows that x > 0. □
Theorem 4.4.3
Let G be a connected graph with n ≥ 2 vertices, and let A be the adjacency matrix of G. Then
the following assertions hold:
(i) A has an eigenvalue λ > 0 and an associated eigenvector x > 0.
(ii) For any eigenvalue µ ̸= λ of A, −λ ≤ µ < λ. Furthermore, −λ is an eigenvalue of A if and
only if G is bipartite.
(iii) If u is an eigenvector of A for the eigenvalue λ, then u = αx for some α.
Proof. (i) Let
P^n = { y ∈ R^n : yi ≥ 0, i = 1, 2, ..., n; Σᵢ yi = 1 }.
We define f : P^n → P^n by
f(y) = (1 / Σᵢ (Ay)ᵢ) Ay,  y ∈ P^n.
Since G is connected, A has no zero column and hence, for any y ∈ P^n, Ay has at least one positive coordinate. Hence f is well defined. Clearly, P^n is a compact, convex set, and f is a continuous function from P^n to itself. By the well-known Brouwer fixed point theorem, there exists x ∈ P^n such that f(x) = x. If we let λ = Σᵢ (Ax)ᵢ, then it follows that Ax = λx. Now (1 + λ)^{n−1} x = (I + A)^{n−1} x > 0 by Lemma 4.4.1. Hence, (1 + λ)^{n−1} x > 0 and therefore x > 0. This proves (i).
(ii) Let µ ≠ λ be an eigenvalue of A and let z be an associated eigenvector, so that Az = µz. Then
|µ| · |zi| ≤ Σ_{j=1}^{n} aij |zj|,  i = 1, 2, ..., n.
Thus |z| = (|z1|, ..., |zn|)′ is an eigenvector of A for λ and, as seen in the proof of (i), |zi| > 0, i = 1, 2, ..., n. Also, Az = −λz gives
−λzi = Σ_{j∼i} zj,  i = 1, ..., n,
i.e.,
λ|zi| = | Σ_{j∼i} zj | ≤ Σ_{j∼i} |zj| ≤ λ|zi|.
Definition 4.4.4
The eigenvalue λ of G as in (i) of Theorem 4.4.3, is called the Perron eigenvalue of G, and the
associated eigenvector x is called a Perron eigenvector.
Theorem 4.4.5
Let G be a graph with n vertices, and let A be the adjacency matrix of G. Here ρ(G) denotes the spectral radius of A, that is, the maximum of the absolute values of the eigenvalues of A. Then ρ(G) is an eigenvalue of G and there is an associated nonnegative eigenvector.
Proof. Let G1 , G2 , ..., Gp be the connected components of G, and let A1 , A2 , ..., Ap be the
corresponding adjacency matrices. We assume, without loss of generality, that
ρ(G1 ) = maxi ρ(Gi ). Then by Theorem 4.4.3, there is a vector x > 0 such that A1 x = ρ(G1 )x.
The vector obtained by augmenting x by zeros is easily seen to be an eigenvector of A
corresponding to the eigenvalue ρ(G) = ρ(G1 ). □
Lemma 4.4.6
Let G be a connected graph with n vertices, and let H ̸= G be a spanning, connected subgraph
of G. Then ρ(G) > ρ(H).
Proof. Let A and B be the adjacency matrices of G and H, respectively. By Theorem 4.4.3, there
exist vectors x > 0, y > 0, such that Ax = ρ(G)x, By = ρ(H)y. Since 0 ̸= A − B ≥ 0 and since
x > 0, y > 0, then y ′ Ax > y ′ Bx. But y ′ Ax = y ′ (ρ(G)x) = ρ(G)y ′ x and
y ′ Bx = y ′ (ρ(H)x) = ρ(H)y ′ x. Therefore, ρ(G) > ρ(H). □
Lemma 4.4.7
Let G be a connected graph and let A be the adjacency matrix of G. Let µ > 0, x ≥ 0 be such that Ax ≥ µx, Ax ≠ µx. Then µ < ρ(G).
Proof. By Theorem 4.4.3, there exists y > 0 such that Ay = ρ(G)y. We have
0 < y′(Ax − µx) = y′Ax − µy′x = (Ay)′x − µy′x = (ρ(G) − µ) y′x.
Since y > 0 and x ≥ 0, x ≠ 0, we have y′x > 0, and therefore µ < ρ(G). □
Lemma 4.4.8
Let G be a connected graph with n vertices and let H ̸= G be a vertex induced subgraph of G.
Then ρ(G) > ρ(H).
Proof. We may assume, after relabeling the vertices of G, that
A = A(G) = [ B   C ]
           [ C′  E ],
where B = A(H). By Theorem 4.4.5, there exists a nonnegative vector z ≠ 0 such that Bz = ρ(H)z. Let x = (z′, 0)′. Then
Ax = [ B   C ] [ z ]   [ Bz  ]   [ ρ(H)z ]
     [ C′  E ] [ 0 ] = [ C′z ] = [ C′z   ] ≥ ρ(H)x.
If Ax = ρ(H)x, then by Lemma 4.4.2, x > 0, which is a contradiction. Thus Ax − ρ(H)x ≥ 0, Ax − ρ(H)x ≠ 0. It follows from Lemma 4.4.7 that ρ(G) > ρ(H). □
Theorem 4.4.9
Let G be a connected graph and let H ̸= G be a subgraph of G. Then ρ(G) > ρ(H).
Proof. Note that H must have a connected component H1 such that ρ(H) = ρ(H1 ), and H1 is a
spanning subgraph of a vertex-induced, connected subgraph H2 of G. If H2 =G then by Lemma
4.4.7, ρ(H1 ) < ρ(H2 ). If H2 ̸= G, then by Lemma 4.4.8, ρ(H2 ) < ρ(G). Also by Lemma 4.4.7,
ρ(H1) ≤ ρ(H2) (equality holds if H1 = H2) and hence ρ(H1) < ρ(G). This completes the proof.
□
Remark 4.4.10
Theorem 4.4.11
Let G be a connected graph with n vertices. Then ρ(G) is an eigenvalue of G with algebraic
multiplicity 1.
Theorem 4.4.12
Let G be a k-regular graph. Then ρ(G) equals k, and it is an eigenvalue of G. It has algebraic
multiplicity 1 if G is connected.
Proof. Let A be the adjacency matrix of G. By Theorem 4.4.5, there exists 0 ≠ x ≥ 0 such that Ax = ρ(G)x. Since G is k-regular, A1 = k1. Hence, 1′Ax = k(1′x) and 1′Ax = ρ(G)(1′x). Since 1′x > 0, it follows that ρ(G) = k. If G is connected, then by Theorem 4.4.11, k has algebraic multiplicity 1. □
Theorem 4.4.13
Let G be a connected graph with n vertices, and let A be the adjacency matrix of G. Then for any y, z ∈ R^n, y ≠ 0, z > 0,
y′Ay / y′y ≤ ρ(G) ≤ max_i { (Az)_i / z_i }.
Equality holds in the first inequality if and only if y is an eigenvector of A corresponding to ρ(G). Similarly, equality holds in the second inequality if and only if z is an eigenvector of A corresponding to ρ(G).
Proof. The first inequality follows from the extremal representation for the largest eigenvalue of a symmetric matrix. The assertion about equality also follows from the general result about symmetric matrices.
To prove the second inequality, suppose that for some z > 0 we had ρ(G) > max_i { (Az)_i / z_i }, i = 1, 2, ..., n. Then Az < ρ(G)z. Let x > 0 be the Perron vector of A, so that Ax = ρ(G)x. It follows that
ρ(G)z′x = z′Ax = x′Az < ρ(G)x′z,
which is a contradiction. The assertion about equality is easily proved. □
Corollary 4.4.14
Let G be a connected graph with n vertices and m edges. Let d1 ≥ ... ≥ dn be the vertex degrees.
Then the following assertions hold:
(i) 2m/n ≤ ρ(G) ≤ d1;
(ii) (1/m) Σ_{i<j, j∼i} √(di dj) ≤ ρ(G) ≤ max_i { (1/di) Σ_{j∼i} √(di dj) }.
Furthermore, equality holds in any of the above inequalities if and only if G is regular.
Proof. (i) Set y = z = 1 in Theorem 4.4.13. Then
1′A1 / 1′1 ≤ ρ(G) ≤ max_i (A1)_i,
that is,
2m/n ≤ ρ(G) ≤ d1.
(ii) Setting y = z = (√d1, ..., √dn)′ in Theorem 4.4.13, we get the desired result. □
Chapter 5
5.1 Introduction
Definition 5.1.1
Let G be an undirected graph without loops. If D is the diagonal matrix, indexed by the vertex set of G, such that Dxx is the degree of x, then the Laplacian matrix of G is defined as L = D − A, where A is the adjacency matrix.
Example 5.1.2
Consider the graph G with vertices v1, v2, v3, v4, v5 and edges v1v2, v1v4, v2v3, v3v4, v3v5, v4v5. Then
D = diag(2, 2, 3, 3, 2),
A = [ 0 1 0 1 0 ]
    [ 1 0 1 0 0 ]
    [ 0 1 0 1 1 ]
    [ 1 0 1 0 1 ]
    [ 0 0 1 1 0 ].
Then, we have
L = D − A = [  2 −1  0 −1  0 ]
            [ −1  2 −1  0  0 ]
            [  0 −1  3 −1 −1 ]
            [ −1  0 −1  3 −1 ]
            [  0  0 −1 −1  2 ].
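The matrices in this example are easy to reproduce; a minimal sketch assuming NumPy, with the vertices v1, ..., v5 relabelled 0, ..., 4 and the edge list taken from the example:

```python
import numpy as np

edges = [(0, 1), (0, 3), (1, 2), (2, 3), (2, 4), (3, 4)]
n = 5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

D = np.diag(A.sum(axis=1))                    # degree matrix
L = D - A                                     # Laplacian matrix
print(L)
print(np.round(np.linalg.eigvalsh(L), 4))     # the smallest Laplacian eigenvalue is 0
```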
Definition 5.1.3
The signless Laplacian matrix of G is defined as Q = D + A, where D and A are the degree matrix and the adjacency matrix of G, respectively.
Remark 5.1.4
An important property relating the Laplacian matrix L and the signless Laplacian matrix Q is that Q = MM^T and L = NN^T, where M is the incidence matrix of G and N is the incidence matrix of the directed graph obtained by orienting the edges of G in an arbitrary way.
Given a simple graph G with n vertices v1, v2, ..., vn, its Laplacian matrix Ln×n is defined element-wise as:
Li,j(G) = deg(vi)  if i = j,
          −1       if i ≠ j and vi is adjacent to vj in G,
          0        otherwise.
Alternatively, suppose G = (V, E) is a graph with V = {1, 2, ..., n}. For an edge (u, v) ∈ E(G), we define an n × n matrix LG(u,v) by
LG(u,v)(i, j) = 1   if i = j and i ∈ {u, v},
                −1  if i = u and j = v, or vice versa,
                0   otherwise.
The Laplacian matrix for the graph G is then also given by
LG = Σ_{(u,v)∈E(G)} LG(u,v).
Lemma 5.2.2
Let X be an n × n matrix with zero row and column sums. Then the cofactors of any two
elements of X are equal.
Proof. Let X(i|j) denote the matrix obtained by deleting row i and column j of X. In X(1|1) add
all the columns to the first column. Then the first column of X(1|1) becomes the negative of
′
[x21 , ..., xn1 ] , in view of the fact that the row sums of X are zero. Thus, we conclude that det
X(1 | 1) = −det X(1 | 2). In other words, the cofactors of x11 and x12 are equal. A similar
argument shows that the cofactor of xij equals that of xik , for any i, j, k. Now using the fact that
the column sums of X are zero, we conclude that the cofactor of xij equals that of xkj , for any i,
j, k. It follows that the cofactors of any two elements of X are equal. □
Theorem 5.2.3
The cofactors of any two elements of LG are equal.
Proof. The row and the column sums of LG are zero. Hence, by Lemma 5.2.2, the cofactors of any two elements of LG are equal. □
Definition 5.2.4
The Laplacian matrix of a graph is symmetric and consists of real entries. Thus LG = L*G, where L*G is the conjugate transpose of LG. Therefore LG is self-adjoint.
Theorem 5.2.5
The eigenvalues of the Laplacian matrix LG are real.
Proof. Suppose λ is an eigenvalue of the self-adjoint matrix L and v is a nonzero eigenvector of λ.
Then
λ∥v∥2 = λ⟨v, v⟩
= ⟨λv, v⟩
= ⟨Lv, v⟩
= ⟨v, Lv⟩
= ⟨v, λv⟩
= λ̄⟨v, v⟩
= λ̄∥v∥2
since v ̸= 0 , ∥v∥ =
̸ 0. Hence λ = λ̄ and so λ is real. □
Theorem 5.2.6
Let G = (V, E) be a graph with Laplacian matrix LG. Then, for any x ∈ R^n,
x^T LG x = Σ_{(u,v)∈E(G)} (xu − xv)².
In particular, LG is positive semidefinite.
Proof. We have
x^T LG x = x^T (D − A) x
         = x^T D x − x^T A x
         = Σ_{u=1}^{n} deg(u) xu² − Σ_{(u,v)∈E(G)} 2 xu xv
         = Σ_{(u,v)∈E(G)} (xu² + xv² − 2 xu xv)
         = Σ_{(u,v)∈E(G)} (xu − xv)². □
Definition 5.2.7
Theorem 5.2.8
Lemma 5.2.9
For any graph G, λ1 = 0 is an eigenvalue of LG, where 0 ≤ λ1 ≤ λ2 ≤ ... ≤ λn denote the eigenvalues of LG. If G = (V, E) is connected, where V = {1, 2, ..., n}, then λ2 > 0.
Proof. Let x̄ = (1, 1, ..., 1)′ ∈ R^n. Then the ith entry of the vector LG x̄ is Σ_{k=1}^{n} lik, which is 0 since the entries of each row of LG add up to zero. So LG x̄ = 0. Therefore, 0 is an eigenvalue of LG. Since 0 ≤ λ1 ≤ λ2 ≤ ... ≤ λn, it follows that λ1 = 0.
Now we have to show that λ2 > 0 for a connected graph. Since 0 is an eigenvalue of LG, let ȳ be a non-zero eigenvector for the eigenvalue 0. Then ȳ^T LG ȳ = ȳ^T 0 = 0. Therefore, by Theorem 5.2.6, yu = yv for every edge (u, v) ∈ E(G). Since G is connected, this means yi = yj for all i, j ∈ V. Therefore
ȳ = α (1, 1, ..., 1)′,
where α is some nonzero real number. So Uλ1 = span{(1, 1, ..., 1)′}, where Uλ1 is the eigenspace of λ1. Therefore, the multiplicity of the eigenvalue 0 is 1. It follows that λ2 ≠ 0, so λ2 > 0. □
Theorem 5.2.10
Let G=(V,E) be a graph. Then the multiplicity of 0 as an eigenvalue of LG equals the number of
connected components of G.
Proof. Suppose that G1 = (V1, E1), G2 = (V2, E2), ..., Gk = (Vk, Ek) are the connected components of G. Define vectors z̄1, ..., z̄k by
(z̄i)j = 1 if j ∈ Vi, and 0 otherwise.
Then it follows, as in the proof of Lemma 5.2.9, that if x̄ ∈ R^n is a non-zero eigenvector for the eigenvalue 0, then xi = xj whenever i and j lie in the same connected component. So Uλ1 = span{z̄1, z̄2, ..., z̄k}, and it is clear that z̄1, z̄2, ..., z̄k are linearly independent. Therefore, the multiplicity of 0 as an eigenvalue of LG is the number of connected components of G. □
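Theorem 5.2.10 can be illustrated numerically; a sketch assuming NumPy, using the disjoint union of a triangle and a single edge (two connected components):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (3, 4)]      # triangle on {0,1,2} plus the edge {3,4}
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

eigs = np.linalg.eigvalsh(L)
print("Laplacian eigenvalues:", np.round(eigs, 4))
print("multiplicity of 0:", int(np.sum(np.isclose(eigs, 0.0))))   # 2 components
```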
Theorem 5.3.1
The eigenvalues of the n × n matrix aI+bJ are a with multiplicity n − 1, and a+nb with
multiplicity 1.
Proof. As observed in the proof of Theorem 3.2.4, the eigenvalues of J are 0 with multiplicity n-1
and n with multiplicity 1. It follows that the eigenvalues of bJ are 0 with multiplicity n-1 and nb
with multiplicity 1. Then the eigenvalues of aI+bJ must be a with multiplicity n-1, and a+nb
with multiplicity 1.
Remark 5.3.2
It follows from Theorem 5.3.1 that L(Kn ) = nI − J has eigenvalues n with multiplicity n-1 and 0
with multiplicity 1.
Theorem 5.3.3
Let G be a graph with V (G) = {1, 2, ..., n}. Let the eigenvalues of L(G) be
λ1 ≥ λ2 ≥ ... ≥ λn−1 ≥ λn = 0. Then the eigenvalues of L+aJ are λ1 ≥ λ2 ≥ ... ≥ λn−1 and na.
Proof. There exists an orthogonal matrix P whose columns form eigenvectors of L(G). We assume that the last column of P is the vector with each component equal to 1/√n, this being an eigenvector for the eigenvalue 0. Then P′L(G)P = diag(λ1, ..., λn). Note that any column of P other than the last column is orthogonal to the last column, and hence
JP = [ 0 ··· 0 √n ]
     [ 0 ··· 0 √n ]
     [ ...        ]
     [ 0 ··· 0 √n ].
It follows that P′JP = diag(0, ..., 0, n). Therefore,
P′(L(G) + aJ)P = diag(λ1, ..., λn−1, na). □
Theorem 5.3.4
Let G be the graph obtained by removing p disjoint edges from Kn , n ≥ 2p. Then the eigenvalues
of L(G) are n-2 with multiplicity p, n with multiplicity n-p-1, and 0 with multiplicity 1.
Proof. Suppose the p disjoint edges {1,2}, {3,4}, ..., {2p−1, 2p} are removed from Kn to obtain G. Then L(G) + J is a block diagonal matrix, in which the 2 × 2 block
[ n−1  1   ]
[ 1    n−1 ]
appears p times and nIn−2p appears once. Therefore, the eigenvalues of L(G) + J are n − 2 with multiplicity p, and n with multiplicity n − p. It follows by Theorem 5.3.3 that the eigenvalues of L(G) are n − 2 with multiplicity p, n with multiplicity n − p − 1, and 0 with multiplicity 1. □
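Theorem 5.3.4 is easy to check for small parameters, for example n = 6 and p = 2; a sketch assuming NumPy:

```python
import numpy as np

n, p = 6, 2
A = np.ones((n, n)) - np.eye(n)               # start from K_n
for k in range(p):                            # remove the p disjoint edges {1,2},{3,4},...
    i, j = 2 * k, 2 * k + 1
    A[i, j] = A[j, i] = 0

L = np.diag(A.sum(axis=1)) - A
print(np.round(np.linalg.eigvalsh(L), 4))
# expected: 0 once, n-2 = 4 with multiplicity p = 2, n = 6 with multiplicity n-p-1 = 3
```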
5.4 Applications of Graph Spectra
5.4.1 Chemistry
One of the main applications of graph spectra to chemistry is in the theory of unsaturated conjugated hydrocarbons known as Hückel molecular orbital theory.
Within the framework of the Hückel method, the Hamiltonian matrix H = [hij] is a square matrix of order n, where n is the number of carbon atoms in the molecule. Let these carbon atoms be labelled by 1, 2, ..., n. Then the matrix elements hrs are given by
hrs = α  if r = s = 1, 2, ..., n,
      β  if r ≠ s and the atoms r and s are chemically bonded,
      0  if r ≠ s and no chemical bond between the atoms r and s exists.
The parameters α and β are called the Coulomb and the resonance integral; in Hückel theory
these are assumed to be constants. Then the Hückel Hamiltonian matrix can be represented as
H = αIn + βA
where A is a symmetric matrix whose diagonal elements equal 0 and whose off-diagonal elements
equal 1 or 0, depending on whether the corresponding atoms are connected or not. In fact A is
just the adjacency matrix of the Hückel graph.
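For example, for benzene the Hückel graph is the 6-cycle, so the orbital energies are α + λβ with λ ranging over the eigenvalues of C6. A minimal sketch assuming NumPy (α and β are kept symbolic in the printout):

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                            # Hueckel graph of benzene: the cycle C6
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

for lam in np.round(np.linalg.eigvalsh(A), 4):
    print(f"orbital energy: alpha + ({lam}) * beta")   # lam in {-2, -1, -1, 1, 1, 2}
```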
5.4.2 Physics
Treating the membrane vibration problem by approximately solving the corresponding partial
differential equation leads to consideration of eigenvalues of a graph which is a discrete model of
the membrane.
The spectra of graphs, or the spectra of certain matrices which are closely related to adjacency
matrices appear in a number of problems in statistical physics. We shall mention the so-called
dimer problem.The dimer problem is related to the investigation of the thermodynamic
properties of a system of diatomic molecules adsorbed on the surface of a crystal.
5.4.3 Computer Science
There are several important applications in computer science. Graph spectra appear in internet
technologies, pattern recognition, computer vision and in many other areas. One of the
applications of graph eigenvalues in Computer Science is related to graphs called expanders. The
largest eigenvalue λ1 plays an important role in modelling virus propagation in computer
networks. The smaller the largest eigenvalue, the larger the robustness of a network against the
spread of viruses.
5.4.4 Biology
Networks appearing in biology have been analyzed via the spectra of the normalized graph Laplacian.
Research and development networks are studied by the largest eigenvalue of the adjacency
matrix.
Chapter 6
CONCLUSION
Spectral graph theory is the study of the eigenvalues and eigenvectors of matrices associated with a graph; that is, it is an application of both graph theory and linear algebra. These spectral properties have vast applications in various fields other than mathematics, some of which are chemistry, physics, biology and geography. Here we discussed the properties of the adjacency matrix and the Laplacian matrix, although there are other associated matrices, such as the incidence matrix. We only discussed the eigenvalues of some fundamental graphs, but there are other families of graphs, such as strongly regular graphs, line graphs, Cayley graphs, complement graphs, Ramanujan graphs and so on. Spectral graph theory provides useful results and algorithms for finding the eigenvalues of matrices related to a graph.
REFERENCES
1. John Clark, Derek Allan Holton, “A First Look at Graph Theory”, World Scientific Publishing.
2. R. Balakrishnan, K. Ranganathan, “A Textbook of Graph Theory”, second edition, Springer.
3. Ravindra B. Bapat, “Graphs and Matrices”, second edition, Springer.
4. Andries E. Brouwer, Willem H. Haemers, “Spectra of Graphs”, Springer.
5. Fan R. K. Chung, “Lectures on Spectral Graph Theory”, AMS Publications, 1995.