
Application of Random Matrix Theory in Lattice Quantum

Chromodynamics

Submitted By
Nehal Khosla
School of Physical Sciences
National Institute of Science, Education and Research (NISER), Bhubaneswar

Under the Guidance of


Dr. Subhasish Basak
School of Physical Sciences
National Institute of Science, Education and Research (NISER), Bhubaneswar
Acknowledgement

I would like to express my deep and sincere gratitude to my supervisor Prof. Subhasish Basak
for our insightful and extensive discussions.

Nehal Khosla
5th Year Integrated M.Sc.
School of Physical Sciences
NISER, Bhubaneswar

Abstract

This report explores the application of Random Matrix Theory (RMT) to Lattice Quantum
Chromodynamics (LQCD), focusing on the Wilson fermion problem. Beginning with an
introduction to random matrices and Gaussian ensembles, we analyze their spectral
properties, emphasizing eigenvalue distribution, level spacing, and the Vandermonde
determinant. We propose a random matrix model to represent the Wilson Dirac operator and
study the spectral behavior using computational methods. Comparisons with established
results reveal discrepancies, highlighting our current lack of understanding of the RMT model.

Contents

Acknowledgement
Abstract
1 Introduction
  1.1 Random Matrices and Gaussian Ensembles
2 Random Matrix Spectra
  2.1 i.i.d. Spacing
  2.2 Level Repulsion
  2.3 JPDF of Eigenvalues
3 Vandermonde Determinant
  3.1 Definition
  3.2 A Unique Property
4 Random Matrix Model for the Wilson Fermion Problem
  4.1 The Wilson Action
  4.2 Matrix Representation of the Wilson Dirac Operator
  4.3 The Interaction Term
  4.4 Computational Method
  4.5 Results
5 Final Remarks

Appendix
  1 JPDF of GOE Matrix entries
  2 i.i.d. Spacing
  3 Eigenvalue spacing
  4 Average Spectral Density of eigenvalues of an N × N Gaussian matrix
  5 Coulomb Gas Technique
  6 Matrix Representation of Wilson Fermion
    6.1 The Mass Term
1 Introduction
A random matrix is a mathematical construct where the elements are random variables.
While this concept might initially seem straightforward, its significance lies in its
wide-ranging utility across various domains. To appreciate its value, one can draw an analogy
with random variables in probability theory. For instance, if one tosses a thousand fair coins,
statistical reasoning confidently predicts that the number of tails will cluster around 500.
While such a result might seem simplistic, it highlights a broader principle: systems governed
by randomness often exhibit predictable statistical properties at large scales. This principle
underpins the field of Random Matrix Theory (RMT), which extends this probabilistic
approach from numbers to matrices.
Rather than analyzing complex deterministic matrices directly, RMT replaces them with
random matrices and focuses on their average behavior and statistical properties. This
approach proves invaluable when studying systems that are otherwise intractable.
Just as statistical mechanics replaces deterministic laws with probabilistic distributions to
describe macroscopic systems, RMT employs randomness to unlock insights into the behavior
of complex matrices.

1.1 Random Matrices and Gaussian Ensembles


Random matrices consist of i.i.d. (independent and identically distributed) random variables: every entry is drawn from the same probability distribution, and each entry is sampled independently of the others. We restrict ourselves to random matrices with real eigenvalues, which are relevant for our physical applications. In this case, we concern ourselves with the Gaussian Orthogonal Ensemble (GOE) and the Gaussian Unitary Ensemble (GUE).
Consider the following construction:

• Consider an N × N random matrix with its elements sampled from the Gaussian distribution N(0, Σ) (mean µ = 0 and standard deviation σ = Σ). Call this matrix H.

• To obtain a matrix with real eigenvalues, we generate H_s by symmetrizing H, as follows:

    H_s = \frac{H + H^T}{2}

  where H^T is the transpose of H. This generates a real, N × N, symmetric matrix H_s whose eigenvalues are always real.

The above construction generates an instance of the Gaussian Orthogonal Ensemble (GOE). It is important to note that the matrices of this ensemble are not themselves orthogonal.
If we instead give H complex entries, with the real and imaginary parts each sampled from the same Gaussian distribution, and then hermitize it as H_s = (H + H^†)/2, we generate an instance of the Gaussian Unitary Ensemble (GUE). GUE matrices have complex entries and are hermitian. [6]
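As a quick check of this construction, one can generate an instance of each ensemble numerically. The sketch below uses Python/NumPy; the language choice is ours, since the report does not specify a tool.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# GOE: real Gaussian entries, symmetrized as Hs = (H + H^T)/2
H = rng.normal(0.0, 1.0, (N, N))
Hs_goe = (H + H.T) / 2

# GUE: complex Gaussian entries, hermitized as Hs = (H + H^dagger)/2
H = rng.normal(0.0, 1.0, (N, N)) + 1j * rng.normal(0.0, 1.0, (N, N))
Hs_gue = (H + H.conj().T) / 2

# Both constructions guarantee a real spectrum
assert np.allclose(Hs_goe, Hs_goe.T)          # real symmetric
assert np.allclose(Hs_gue, Hs_gue.conj().T)   # hermitian
print(np.linalg.eigvalsh(Hs_goe))             # N real eigenvalues
print(np.linalg.eigvalsh(Hs_gue))             # N real eigenvalues
```

`eigvalsh` is used because it exploits (and requires) the symmetry/hermiticity that the construction guarantees.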

Properties of Gaussian Random Matrices

• Joint Probability Density Function (JPDF) for entries of a GOE Matrix:

Consider a real, symmetric matrix H_s. It has N(N + 1)/2 independent elements. Then, the JPDF for all N(N + 1)/2 independent elements of H_s is given by (see Appendix):

    p[H_s] = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(H_s)_{ii}^2}{2}\right) \prod_{i=1}^{N}\prod_{j>i} \frac{1}{\sqrt{\pi}} \exp\left(-(H_s)_{ij}^2\right).    (1.1)

This can be further simplified to

    p[H_s] = \frac{1}{(2\pi)^{N/2}\, \pi^{N(N-1)/4}} \exp\left(-\frac{1}{2}\sum_{i=1}^{N} (H_s)_{ii}^2 - \sum_{i=1}^{N}\sum_{j>i} (H_s)_{ij}^2\right).    (1.2)

• Rotational Invariance:
  The nomenclature of GOE comes from the property that its instances can be diagonalized by an orthogonal matrix as

    H_s = O X O^T

  where X = diag(x_1, x_2, ..., x_N) and O is an orthogonal matrix characterized by the property O O^T = I.
  Similarly to the GOE, GUE matrices can be diagonalized by a unitary matrix as

    H_s = U \Lambda U^\dagger

  where \Lambda = diag(x_1, x_2, ..., x_N) and U is a unitary matrix characterized by U U^\dagger = I.

  This is an important feature of these ensembles for modeling physical systems.

• Gaussian Ensembles are the only ones that lie in the intersection of matrix families with
independent entries and matrix families with rotational invariance. [5]

2 Random Matrix Spectra

2.1 i.i.d. Spacing


Consider N i.i.d. real random variables X_1, X_2, . . . , X_N, each drawn from the same parent probability density function (pdf) p_X(x), defined on a domain σ. The cumulative distribution function (cdf) for these variables is F(x), which represents the probability that a random variable is less than or equal to x:

    F(x) = \int_{-\infty}^{x} p_X(t)\, dt.    (2.1)

We aim to determine the conditional probability density p_N(s | X_j = x), which describes the likelihood of observing a gap of size s between two variables, one of which is located near x, with no other variables lying in between. Here N is the number of i.i.d. random variables.
In the large-N limit, the spacing distribution is given by (see Appendix):

    p_N\left(s = \frac{s'}{N\, p_X(x)}\right) \to e^{-s'}.    (2.2)

To see this for ourselves, we generate a large number of random variables and verify the above expression:

Fig. 1: A plot of the probability distribution function of i.i.d. spacing

As is evident from the above plot, the probability distribution of the i.i.d. random variable spacing is maximum as s → 0. Thus, these random variables attract each other. We will see contrasting results when we determine the probability distribution for eigenvalue spacing in the case of Gaussian random matrices.
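The plot above can be reproduced with a short simulation. The sketch below assumes Python/NumPy and uniform parent variables, for which p_X(x) = 1 on [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 500, 2000

spacings = []
for _ in range(T):
    x = np.sort(rng.uniform(0.0, 1.0, N))  # p_X(x) = 1 on [0, 1]
    spacings.append(N * np.diff(x))        # rescale gaps by the local density N p_X(x)
spacings = np.concatenate(spacings)

# The rescaled spacing should follow p(s) = e^{-s}: monotonically
# decreasing, with its maximum as s -> 0 and mean spacing <s> = 1
hist, edges = np.histogram(spacings, bins=50, range=(0, 5), density=True)
print(spacings.mean())     # close to 1
print(hist[0] > hist[10])  # the density is largest near s = 0
```

The histogram `hist` is what Fig. 1 plots: an exponentially decaying density peaked at zero spacing.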

2.2 Level Repulsion


We wish to understand eigenvalue spacing and how it contrasts with i.i.d. spacing. To do so, we consider a 2 × 2 GOE matrix with eigenvalues λ_1 and λ_2. We define the spacing as s = λ_2 − λ_1, where λ_1 < λ_2. Then, the spacing distribution is given by the following expression (see Appendix):

    p(s) = \frac{s}{2}\, e^{-s^2/4}.    (2.3)

This result shows that the spacing s has a small probability of being near zero, reflecting the phenomenon of level repulsion, where eigenvalues avoid clustering.
When normalized by the mean spacing ⟨s⟩, the distribution takes the form:

    p(s) = \frac{\pi}{2}\, s\, e^{-\pi s^2/4}.    (2.4)
Again, we generate a large number of 2 × 2 GOE matrices, determine their eigenvalues and
calculate the eigenvalue spacing to achieve the following plot:

Fig. 2: A plot of probability distribution function of eigenvalue spacing for GOE

We can see from the above figure that the probability distribution attains its maximum at a non-zero spacing. This means the two eigenvalues repel each other, and the probability of them coming close together is small. We also see that the distribution is strongly suppressed at large spacings, s → ∞, so the eigenvalues are effectively pulled back together at large distances. Both features are evident in the expression for the JPDF of the eigenvalues.

2.3 JPDF of Eigenvalues


The joint probability density function (jpdf) for the eigenvalues of an N × N Gaussian matrix
is given by:

    \rho(x_1, \ldots, x_N) = \frac{1}{Z_{N,\beta}} \exp\left(-\frac{1}{2}\sum_{i=1}^{N} x_i^2\right) \prod_{j<k} |x_j - x_k|^\beta    (2.5)

where Z_{N,β} is the normalization constant:

    Z_{N,\beta} = (2\pi)^{N/2} \prod_{j=1}^{N} \frac{\Gamma\left(1 + \frac{j\beta}{2}\right)}{\Gamma\left(1 + \frac{\beta}{2}\right)}    (2.6)

which ensures the total probability is normalized, such that:

    \int_{\mathbb{R}^N} dx_1\, dx_2 \ldots dx_N\, \rho(x_1, \ldots, x_N) = 1.    (2.7)

The parameter β, which takes values 1 for GOE and 2 for GUE, is known as the Dyson index.
It is indicative of the number of random variables required for specifying a single entry in an
instance of the random matrix ensemble. From now on, dx refers to the product of the
differentials of all eigenvalues. It is important to note that the eigenvalues are considered
unordered in this formulation.
The factor \exp\left(-\frac{1}{2}\sum_{i=1}^{N} x_i^2\right) suppresses configurations where eigenvalues are far from the origin, thus favoring eigenvalues that are closer to zero. On the other hand, the product \prod_{j<k} |x_j - x_k|^\beta discourages configurations where any two eigenvalues are too close to one another. This leads to the so-called "repulsion" between eigenvalues.
The presence of this repulsion factor also results in a strong dependence between the
eigenvalues. As a consequence, the jpdf ρ(x1 , . . . , xN ) cannot be factored into independent
parts, making it impossible to treat the eigenvalues as independent random variables. This
strong inter-dependence invalidates the standard methods typically used for independent
random variables.

Now, to determine the shape of the histogram of the N × T eigenvalues (N eigenvalues each
for T instances), for large T , we focus on the marginal distribution of eigenvalues. Specifically,
we compute:

    \rho(x) = \int \cdots \int dx_2 \ldots dx_N\, \rho(x, x_2, \ldots, x_N),    (2.8)

which provides the desired histogram profile for any finite N. The function ρ(x) is normalized to 1, consistent with the histogram's normalization. We see that ρ(x) is given by (see Appendix):

    \rho(x) = \langle n(x) \rangle = \frac{1}{N} \left\langle \sum_{i=1}^{N} \delta(x - x_i) \right\rangle.    (2.9)

The function ⟨n(x)⟩ = ρ(x) is often referred to as the average spectral density. It provides a
measure of how the eigenvalues are distributed, on average, along the real line. To illustrate,
consider a scenario where samples of N = 8 eigenvalues are taken for a large T. While the
individual eigenvalues appear as discrete spikes at specific locations, their ensemble-averaged
behavior gives rise to the smooth, continuous profile ρ(x) = ⟨n(x)⟩, capturing the overall
distribution. This is quite evident especially for GUE:

Fig. 3: Plot of eigenvalue distribution for N=8 for GOE and GUE

In the limit as N → ∞, the spectral density satisfies the scaling relation:

    \lim_{N \to \infty} \sqrt{\beta N}\, \rho\!\left(\sqrt{\beta N}\, x\right) = \rho_{SC}(x),    (2.10)

where ρ_SC(x) is given by:

    \rho_{SC}(x) = \frac{1}{\pi} \sqrt{2 - x^2}, \qquad |x| \le \sqrt{2}.    (2.11)

This result is known as Wigner's semicircle law (see Appendix for a proof using the Coulomb gas technique). Despite the name, the density has a semi-elliptical shape, with support confined to |x| ≤ √2. The law characterizes the limiting distribution of eigenvalues of large random matrices in the Gaussian Orthogonal, Unitary, and Symplectic Ensembles, depending on the Dyson index β. We verify the law by generating random matrix ensembles with N = 200 and T = 500:

Fig. 4: Verification of Wigner's semicircle law for (a) GOE and (b) GUE
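The verification in Fig. 4 amounts to pooling eigenvalues from T instances, rescaling them by √(βN), and comparing the histogram with ρ_SC. A Python/NumPy sketch for the GOE case (β = 1), using the same N = 200 and T = 500:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, beta = 200, 500, 1   # beta = 1 for the GOE

eigs = []
for _ in range(T):
    H = rng.normal(0.0, 1.0, (N, N))
    Hs = (H + H.T) / 2                                   # GOE instance
    eigs.append(np.linalg.eigvalsh(Hs) / np.sqrt(beta * N))  # rescale by sqrt(beta N)
x = np.concatenate(eigs)

# Wigner's law: density (1/pi) sqrt(2 - x^2), supported on |x| <= sqrt(2)
print(np.abs(x).max())            # close to sqrt(2) ~ 1.41
print((np.abs(x) < 1.0).mean())   # close to 1/2 + 1/pi ~ 0.82
```

The fraction inside |x| < 1 follows from integrating ρ_SC, which gives 1/2 + 1/π; a histogram of `x` reproduces the semicircular profile of Fig. 4(a).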

3 Vandermonde Determinant
The "repulsive" interaction between eigenvalues in invariant random matrix models can be expressed using a determinant called the Vandermonde determinant. [6]

3.1 Definition
The Vandermonde determinant for N variables x_1, x_2, . . . , x_N is:

    \Delta_N(x) = \prod_{i<j} (x_j - x_i) = \det \begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \\ x_1^2 & x_2^2 & \cdots & x_N^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{N-1} & x_2^{N-1} & \cdots & x_N^{N-1} \end{pmatrix}.    (3.1)

This determinant is completely antisymmetric with respect to swapping any two variables. For example, for N = 3, the determinant expands as:

    \Delta_3(x) = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2).    (3.2)

Swapping any two xj ’s introduces a negative sign, reflecting its antisymmetric nature.

3.2 A Unique Property


The determinant has flexibility in its construction. For instance, in a 2 × 2 case:

    \det \begin{pmatrix} 1 & 1 \\ x_1 & x_2 \end{pmatrix} = x_2 - x_1.    (3.3)

If the second row is replaced with a polynomial ax + b evaluated at each variable, the determinant remains proportional to x_2 − x_1, unaffected by the constant term in the polynomial.
So, rows of the Vandermonde matrix can be replaced with polynomials of the corresponding variables, preserving the overall determinant structure. If these polynomials are written as π_k(x), they can have arbitrary lower-order terms; the determinant changes only by a constant scaling factor.
The general property becomes:

    \Delta_N(x) = \frac{1}{a_0 a_1 \cdots a_{N-1}} \det \begin{pmatrix} \pi_0(x_1) & \pi_0(x_2) & \cdots & \pi_0(x_N) \\ \pi_1(x_1) & \pi_1(x_2) & \cdots & \pi_1(x_N) \\ \vdots & \vdots & \ddots & \vdots \\ \pi_{N-1}(x_1) & \pi_{N-1}(x_2) & \cdots & \pi_{N-1}(x_N) \end{pmatrix},    (3.4)

where a_k is the leading coefficient of π_k(x).

This principle is particularly useful when the polynomials π_k(x) are chosen as orthogonal polynomials, such as the Hermite polynomials. For instance, it is easy to see that:

    \Delta_3(x) = \det \begin{pmatrix} H_0(x_1) & H_0(x_2) & H_0(x_3) \\ H_1(x_1) & H_1(x_2) & H_1(x_3) \\ H_2(x_1) & H_2(x_2) & H_2(x_3) \end{pmatrix},    (3.5)

where H_k(x) are Hermite polynomials.
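Both the product form (3.1) and the polynomial-row replacement (3.4) are easy to confirm numerically. The Python/NumPy sketch below uses the monic (probabilists') Hermite polynomials He_k, for which all leading coefficients a_k equal 1, so the determinant equals Δ_N(x) exactly:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' (monic) Hermite He_k

rng = np.random.default_rng(4)
x = rng.normal(size=4)
N = len(x)

# Product form: prod_{i<j} (x_j - x_i)
delta = np.prod([x[j] - x[i] for i in range(N) for j in range(i + 1, N)])

# Determinant form: row k holds the powers x_1^k ... x_N^k
powers = np.vander(x, increasing=True).T
assert np.isclose(np.linalg.det(powers), delta)

# Replacing row k with He_k(x_j) leaves the determinant unchanged (a_k = 1)
herm = np.array([hermeval(x, np.eye(N)[k]) for k in range(N)])
assert np.isclose(np.linalg.det(herm), delta)
print(delta)
```

Row reduction explains the second identity: the lower-order terms of each He_k are linear combinations of earlier rows and do not affect the determinant.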

4 Random Matrix Model for the Wilson Fermion Problem
We propose to apply the theory of Random Matrices in Quantum Chromodynamics (QCD),
which describes the strong interactions between quarks and gluons. The free Lagrangian for
quarks with a color degree of freedom in QCD is given by:

    \mathcal{L}_{QCD} = \bar{\psi}\,(i\slashed{D} - m)\,\psi,    (4.1)

where the covariant derivative is:

Dµ = ∂µ − igAµ (x). (4.2)

Here:

• ψ is the quark field,

• A_µ(x) represents the gauge field associated with the SU(3) gauge group.

The introduction of SU(3) gauge symmetry significantly complicates analytical work with the
QCD Lagrangian. Unlike Quantum Electrodynamics (QED), where the coupling constant
α ∼ 1/137 ≪ 1, the QCD coupling constant g is of O(1). As a result, perturbative methods
effective in QED do not work in QCD.
One way to address this problem is by discretizing spacetime and then taking the continuum
limit. However, this approach faces the challenge that multiple discretization schemes can lead
to the same continuum limit. One such scheme is the Wilson discretization, where fermion
fields are placed on lattice sites. Fermions defined in this way are called Wilson fermions.

We follow the paper by Hehl and Schäfer [3]. In random matrix theory, we replace the Dirac operator, which incorporates gauge fields, with random matrices from a specific ensemble to model the strong fluctuations of the Dirac operator observed in lattice gauge theory calculations. The symmetry properties of the random matrix depend on the underlying gauge group and the fermion representation. In Quantum Chromodynamics (QCD), fermions are typically in the fundamental representation of the SU(3) gauge group, which means the Gaussian Unitary Ensemble (GUE) is generally used. To compare their findings with those of Kalkreuter [4], who studied the operator \slashed{D} + m for massive Wilson fermions in an SU(2) gauge field background, Hehl and Schäfer use matrices from the Gaussian Orthogonal Ensemble (GOE), a prescription established by Verbaarschot in [9].

4.1 The Wilson Action


The Wilson Action is given by:

    S_{Wilson} = \frac{1}{2\kappa} \sum_n \psi^\dagger(n)\psi(n)
               - \frac{1}{2} \sum_{n,\mu} \left[ \psi^\dagger(n)(r - \gamma_\mu) U_\mu(n) \psi(n+\mu) + \psi^\dagger(n+\mu)(r + \gamma_\mu) U_\mu^\dagger(n) \psi(n) \right]
               + \frac{4}{g^2} \sum_P \left[ 1 - \frac{1}{4}\, \mathrm{Tr}\!\left( U_P(n) + U_P^\dagger(n) \right) \right]    (4.3)

where n labels the lattice sites and n + µ denotes the neighbouring site of n in direction µ. ψ(n) is the fermion field at site n. κ is the hopping parameter. U_µ(n) is a gauge link variable. γ_µ are the Dirac gamma matrices. r is the Wilson parameter, which we set to 1. g is the gauge coupling constant. U_P(n) is the plaquette, a gauge-invariant loop around a square on the lattice. It is given by the product of gauge link variables:

UP (n) = Uµ (n)Uν (n + µ)Uµ† (n + ν)Uν† (n)

where µ and ν are two lattice directions.


The Wilson Action is designed to maintain gauge invariance on the lattice, which is critical for preserving the physical properties of gauge theories like QCD. It mitigates fermion doubling by including the Wilson term proportional to the parameter r.
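The plaquette's key properties — it is itself an SU(2) matrix, so the gauge-action summand 1 − ¼Tr(U_P + U_P†) is real and bounded — can be illustrated with four random links. This is an illustrative Python/NumPy sketch using independent Haar-random SU(2) links, not an actual lattice configuration:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])   # Pauli matrices

def random_su2():
    """U = a0*I + i*(a . sigma); (a0, a) uniform on the 3-sphere gives Haar-random SU(2)."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * sum(a[k + 1] * sigma[k] for k in range(3))

# U_P = U_mu(n) U_nu(n+mu) U_mu(n+nu)^dagger U_nu(n)^dagger with toy links
U = [random_su2() for _ in range(4)]
UP = U[0] @ U[1] @ U[2].conj().T @ U[3].conj().T

assert np.allclose(UP @ UP.conj().T, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(UP), 1.0)          # det = 1: still an SU(2) matrix

s_P = 1 - 0.25 * np.trace(UP + UP.conj().T).real   # gauge-action summand, in [0, 2]
print(s_P)
```

For SU(2), Tr U_P = 2 cos θ is real, so the summand equals 1 − cos θ and vanishes when the plaquette is the identity (a "flat" gauge field).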

4.2 Matrix Representation of the Wilson Dirac Operator
The Mass Term

We begin with the first term in the Wilson action:

    S_{mass} = \frac{1}{2\kappa} \sum_n \psi^\dagger(n)\psi(n)    (4.4)

We decompose the spinor ψ(n) into its left- and right-handed components using the projection operators:

    P_L = \frac{1}{2}(1 - \gamma_5), \qquad P_R = \frac{1}{2}(1 + \gamma_5)

Expressing the sum over lattice sites as a sum over even and odd sites, inserting I = P_R + P_L in the expression, and using the properties of the projection operators, the mass term simplifies to (see Appendix):
" #
1 X
(PL ψ † (n))(PL ψ(n)) + (PR ψ † (n))(PR ψ(n))

Smass = (4.5)
2κ even

Using ψ e and ψ o for the even and odd site components respectively, the mass term becomes:
1 h e† e i
Smass = ψL ψL + ψRe† ψRe + ψLo† ψLo + ψRo† ψRo (4.6)

The mass term in the (ψ_R^e, ψ_L^e, ψ_R^o, ψ_L^o) basis is written in block diagonal form:

    \begin{pmatrix} \frac{1}{2\kappa} & 0 & 0 & 0 \\ 0 & \frac{1}{2\kappa} & 0 & 0 \\ 0 & 0 & \frac{1}{2\kappa} & 0 \\ 0 & 0 & 0 & \frac{1}{2\kappa} \end{pmatrix}    (4.7)

4.3 The Interaction Term


The interaction term is given by:

    S_{int} = -\frac{1}{2} \sum_{n,\mu} \left[ \psi^\dagger(n)(1 - \gamma_\mu) U_\mu(n) \psi(n+\mu) + \psi^\dagger(n+\mu)(1 + \gamma_\mu) U_\mu^\dagger(n) \psi(n) \right]    (4.8)
2 n,µ

Similarly to the mass term, we act on Sint with the projection operators and separate even
and odd lattice sites to get the following expression (see Appendix):

    S_{int} = \slashed{D} = -\frac{1}{2}\left( \psi_R^{e\dagger}\psi_R^o + \psi_L^{e\dagger}\psi_L^o + \psi_R^{o\dagger}\psi_R^e + \psi_L^{o\dagger}\psi_L^e \right)
            - \frac{1}{2}\left( -\psi_L^{e\dagger}\gamma_{\dot\mu}\psi_R^o - \psi_R^{e\dagger}\gamma_{\dot\mu}\psi_L^o + \psi_L^{o\dagger}\gamma_{\dot\mu}\psi_R^e + \psi_R^{o\dagger}\gamma_{\dot\mu}\psi_L^e \right)
            - \frac{iga}{2}\left( \psi_R^{e\dagger}A_{\dot\mu}\psi_R^o + \psi_L^{e\dagger}A_{\dot\mu}\psi_L^o - \psi_R^{o\dagger}A_{\dot\mu}^\dagger\psi_R^e - \psi_L^{o\dagger}A_{\dot\mu}^\dagger\psi_L^e \right)
            - \frac{iga}{2}\left( -\psi_L^{e\dagger}\slashed{A}\psi_R^o - \psi_R^{e\dagger}\slashed{A}\psi_L^o - \psi_L^{o\dagger}\slashed{A}^\dagger\psi_R^e - \psi_R^{o\dagger}\slashed{A}^\dagger\psi_L^e \right)    (4.9)
2

Thus, in the (ψ_R^e, ψ_L^e, ψ_R^o, ψ_L^o) basis, the interaction term can be written in the following block matrix form:

    \begin{pmatrix} 0 & 0 & I - iga A_{\dot\mu}^\dagger & \gamma_{\dot\mu} - iga\slashed{A} \\ 0 & 0 & \gamma_{\dot\mu} - iga\slashed{A}^\dagger & I - iga A_{\dot\mu}^\dagger \\ I - iga A_{\dot\mu} & -\gamma_{\dot\mu} - iga\slashed{A} & 0 & 0 \\ -\gamma_{\dot\mu} - iga\slashed{A} & I - iga A_{\dot\mu} & 0 & 0 \end{pmatrix}    (4.10)

Rescaling the fields by an overall constant and combining the interaction and mass terms in the (ψ_R^e, ψ_L^e, ψ_R^o, ψ_L^o) basis, we can write:

    \slashed{D} + m = \begin{pmatrix} \frac{1}{2\kappa} & 0 & I - iga A_{\dot\mu}^\dagger & \gamma_{\dot\mu} - iga\slashed{A} \\ 0 & \frac{1}{2\kappa} & \gamma_{\dot\mu} - iga\slashed{A}^\dagger & I - iga A_{\dot\mu}^\dagger \\ I - iga A_{\dot\mu} & -\gamma_{\dot\mu} - iga\slashed{A} & \frac{1}{2\kappa} & 0 \\ -\gamma_{\dot\mu} - iga\slashed{A} & I - iga A_{\dot\mu} & 0 & \frac{1}{2\kappa} \end{pmatrix}    (4.11)

To study the spectral properties, we must diagonalize the matrix and determine the
eigenvalues. We substitute the blocks of the above determined matrix with appropriate
random matrices.

4.4 Computational Method


The SU(2) gauge fields A_µ are substituted by a 2 × 2 random matrix A (multiplied by a factor of i/ga) which belongs to the Gaussian Orthogonal Ensemble (GOE). \slashed{A} is the direct product of the 4 × 4 gamma matrices and A_µ, so the corresponding blocks are replaced by an 8 × 8 random matrix B, also belonging to the GOE. We take the direct product of A with I_4 and obtain \slashed{D} + m as a 32 × 32 matrix. The scaling factors (standard deviations) for the Gaussian distributions of both random matrices were determined by Hehl et al. to be Σ_A = 2/25 and Σ_B = 8/25.
Thus, using our random matrix prescriptions, we can re-write equation 6.36 as:
 
    \slashed{D} + m = \begin{pmatrix} \frac{1}{2\kappa} & 0 & I - A & B \\ 0 & \frac{1}{2\kappa} & B & I - A \\ I - A^\dagger & -B^\dagger & \frac{1}{2\kappa} & 0 \\ -B^\dagger & I - A^\dagger & 0 & \frac{1}{2\kappa} \end{pmatrix}    (4.12)

To compare our results with that of Kalkreuter, Hehl et al. multiply the operator with γ5 and
move to the (ψRe , −ψLe , ψRo , ψLo ) basis:

    \slashed{D} + m = \begin{pmatrix} \frac{1}{2\kappa} & 0 & I - A & B \\ 0 & -\frac{1}{2\kappa} & B & I - A \\ I - A^\dagger & B^\dagger & \frac{1}{2\kappa} & 0 \\ B^\dagger & I - A^\dagger & 0 & -\frac{1}{2\kappa} \end{pmatrix}    (4.13)
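A sketch of this prescription in Python/NumPy follows. It is our reading of equation (4.13): the block sizes and the Σ values follow the text, but the exact normalization conventions are an assumption, so this is illustrative rather than a reproduction of Hehl and Schäfer's code:

```python
import numpy as np

rng = np.random.default_rng(6)
kappa = 0.25

def goe(n, sigma):
    """Symmetrized Gaussian matrix with entries of scale sigma."""
    H = rng.normal(0.0, sigma, (n, n))
    return (H + H.T) / 2

Z = np.zeros((8, 8))
I = np.eye(8)
A = np.kron(goe(2, 2 / 25), np.eye(4))   # 2x2 GOE (Sigma_A = 2/25), direct product with I_4
B = goe(8, 8 / 25)                        # 8x8 GOE (Sigma_B = 8/25)
m = I / (2 * kappa)

# Eq. (4.13): a real symmetric 32x32 block matrix, so the spectrum is real
M = np.block([
    [m,       Z,       I - A, B    ],
    [Z,       -m,      B,     I - A],
    [I - A.T, B.T,     m,     Z    ],
    [B.T,     I - A.T, Z,     -m   ],
])

eigs = np.linalg.eigvalsh(M)
print(M.shape, eigs[:4])   # one instance; repeat over many instances and histogram for Fig. 5
```

Because A and B are real symmetric, the daggers in (4.13) reduce to transposes and M is symmetric, which is what makes `eigvalsh` applicable.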

4.5 Results
For a large number of instances, we diagonalize the matrix in equation 4.13 to obtain the
eigenvalues and plot a histogram to determine the probability density of the eigenvalues for
varying values of κ. The results are compared to those of Hehl and Schäfer.

Fig. 5: Probability density of the eigenvalues is plotted against the eigenvalues themselves. On
the LHS are our results and on the RHS are Hehl and Schäfer’s results for κ = 1/2, 1/4, 1/5, and
1/8.

As is evident from figure 5, the histograms are qualitatively very different. This highlights the inadequacy of our current understanding in applying random matrix theory to derive the spectra of Wilson fermions, and underscores the need for further revision and investigation into the specific methodology the authors used to obtain the reported spectra.

5 Final Remarks
In this report, we applied Random Matrix Theory (RMT) to model the Wilson fermion
problem in Lattice Quantum Chromodynamics, analyzing spectral properties. While
discrepancies were observed when comparing our results to established findings, these
differences likely arise from computational or methodological errors, rather than limitations
inherent to RMT itself. Moving forward, we will look to refine these computations.
Additionally, this work paves the way for extending RMT applications beyond Wilson
fermions to Staggered Fermions and Minimally Doubled Fermions. These frameworks present
unique challenges and opportunities for understanding the spectral behavior of lattice
fermions. By leveraging RMT’s versatility, we aim to explore its potential in capturing the
dynamics of these alternative lattice formulations.

Appendix

1 JPDF of GOE Matrix entries


Consider the matrix H with elements H_ij, where the entry at the i-th row and j-th column is a random variable drawn from the standard normal distribution: H_ij ∼ N(0, 1). The probability density function (PDF) for each matrix element is given by

    p(H_{ij}) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{H_{ij}^2}{2}\right)    (1.1)

The joint probability density function (JPDF) for all elements of H, assuming independence,
is expressed as

    p[H] = \prod_{i=1}^{N} \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{H_{ij}^2}{2}\right),    (1.2)

which simplifies to

    p[H] = \frac{1}{(2\pi)^{N^2/2}} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{N} H_{ij}^2\right).    (1.3)

When the matrix H is symmetrized to form Hs , the diagonal elements remain unchanged,
meaning

(Hs )ii = Hii , (1.4)

and their PDF is also Hii ∼ N (0, 1). The off-diagonal elements, however, are symmetrized as

    (H_s)_{ij} = \frac{H_{ij} + H_{ji}}{2}.    (1.5)

Since each H_ij is independent and follows N(0, 1), the off-diagonal elements of H_s become a linear combination of two independent standard normal variables, resulting in

    (H_s)_{ij} \sim N\!\left(0, \tfrac{1}{2}\right).    (1.6)

For the N (N − 1)/2 independent off-diagonal elements in the upper triangle of Hs , the JPDF
is

    p[(H_s)_{ij}] = \prod_{i=1}^{N} \prod_{j>i} \frac{1}{\sqrt{\pi}} \exp\left(-(H_s)_{ij}^2\right).    (1.7)

Combining the PDFs for the diagonal and off-diagonal elements, the JPDF for all
N (N + 1)/2 independent elements of Hs is

    p[H_s] = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(H_s)_{ii}^2}{2}\right) \prod_{i=1}^{N}\prod_{j>i} \frac{1}{\sqrt{\pi}} \exp\left(-(H_s)_{ij}^2\right).    (1.8)
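The variance claim in (1.6) is easy to check by direct sampling (a Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 100_000

# (Hs)_ij = (H_ij + H_ji)/2 with independent H_ij, H_ji ~ N(0, 1)
hij = rng.normal(0.0, 1.0, T)
hji = rng.normal(0.0, 1.0, T)
hs = (hij + hji) / 2

print(hs.var())   # close to 1/2, i.e. (Hs)_ij ~ N(0, 1/2)
```

This halved variance is exactly why the off-diagonal factors in (1.7) carry 1/√π rather than 1/√(2π).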

2 i.i.d. Spacing
Consider N i.i.d. real random variables X1 , X2 , . . . , XN , each drawn from the same parent
probability density function (pdf) pX (x), defined on a domain σ. The cumulative distribution
function (cdf) for these variables is F (x), which represents the probability that a random
variable is less than or equal to x:

    F(x) = \int_{-\infty}^{x} p_X(t)\, dt.    (2.1)

We aim to determine the conditional probability density pN (s | Xj = x), which describes the
likelihood of observing a gap of size s between two variables, one of which is located near x,
with no other variables lying in between.
Suppose one of the variables, X_j, is fixed around x. The probability that another variable, X_k (where k ≠ j), is situated at x + s, while all other N − 2 variables are either:
1. To the left of x: this occurs with probability F(x), or
2. To the right of x + s: this occurs with probability 1 − F(x + s).
Thus, the conditional probability density is given by:

    p_N(s \mid X_j = x) = p_X(x + s)\, [1 + F(x) - F(x + s)]^{N-2}.    (2.2)

Here, p_X(x + s) gives the probability density of X_k at x + s, and [1 + F(x) − F(x + s)]^{N−2} ensures that the remaining N − 2 variables are confined to the regions outside the gap.
Since all variables X1 , X2 , . . . , XN are i.i.d., the above expression for a gap can apply to any of
the N variables being fixed around x. To generalize this, we consider the likelihood of a gap s
without conditioning on a specific variable. This is done by summing over all N variables:

    p_N(s \mid \text{any } X = x) = N\, p_N(s \mid X_j = x)\, p_X(x),    (2.3)

where p_X(x) is the probability density of the fixed variable at x.


Finally, to obtain the total probability of a gap s, irrespective of the position of the fixed
variable, we integrate over all possible positions x within the domain σ:

    p_N(s) = \int_\sigma dx\, p_N(s \mid \text{any } X = x).    (2.4)

Substituting p_N(s \mid \text{any } X = x) into this equation gives:

    p_N(s) = N \int_\sigma dx\, p_X(x+s)\, [1 + F(x) - F(x+s)]^{N-2}\, p_X(x).    (2.5)

The spacing distribution pN (s) must satisfy the normalization condition:

    \int_0^\infty p_N(s)\, ds = 1.    (2.6)

Let’s verify this:

    \int_0^\infty p_N(s)\, ds = N \int_0^\infty ds \int_\sigma dx\, p_X(x+s)\, [1 + F(x) - F(x+s)]^{N-2}\, p_X(x).    (2.7)

We use the substitution u = F(x + s) in the s-integral, so du = p_X(x + s) ds. The limits of integration change to F(x) (for s = 0) and 1 (as s → ∞):

    \int_0^\infty p_N(s)\, ds = N \int_\sigma dx\, p_X(x) \int_{F(x)}^{1} du\, [1 + F(x) - u]^{N-2}.    (2.8)

Now, perform the u-integral:

    \int_{F(x)}^{1} [1 + F(x) - u]^{N-2}\, du = \frac{1 - F(x)^{N-1}}{N-1}.    (2.9)

Substituting this result back, and using \int_\sigma dx\, p_X(x)\, F(x)^{N-1} = 1/N, gives:

    \int_0^\infty p_N(s)\, ds = \frac{N}{N-1} \int_\sigma dx\, p_X(x) \left[1 - F(x)^{N-1}\right] = \frac{N}{N-1}\left(1 - \frac{1}{N}\right) = 1,    (2.10)

which proves the normalization.
When N is large, the spacing s shrinks due to the increasing density of variables. To account for this, scale s as s = \frac{s'}{N p_X(x)}, where s' ∼ O(1). This scaling reflects the typical spacing between variables being inversely proportional to their local density N p_X(x).
Substituting this scaling into p_N(s \mid X_j = x):

    p_N\left(s = \frac{s'}{N p_X(x)} \,\Big|\, X_j = x\right) \approx p_X(x)\, e^{-s'}.    (2.11)

Finally, integrating over all x gives the limiting spacing distribution:

    p_N\left(s = \frac{s'}{N p_X(x)}\right) \to e^{-s'}.    (2.12)

3 Eigenvalue spacing
To understand eigenvalue spacings in random matrices, consider a 2 × 2 symmetric GOE
matrix:

    H_s = \begin{pmatrix} x_1 & x_3 \\ x_3 & x_2 \end{pmatrix},    (3.1)

where x_1, x_2 ∼ N(0, 1) (normal distribution with mean 0 and variance 1) and x_3 ∼ N(0, 1/2).
The eigenvalues λ1 and λ2 , ordered such that λ2 > λ1 , are random variables determined by
solving the characteristic equation:

λ2 − Tr(Hs )λ + det(Hs ) = 0, (3.2)

where the trace Tr(H_s) = x_1 + x_2 and the determinant det(H_s) = x_1 x_2 - x_3^2. The eigenvalues are:

    \lambda_{1,2} = \frac{x_1 + x_2}{2} \pm \sqrt{\left(\frac{x_1 - x_2}{2}\right)^2 + x_3^2}.    (3.3)

The spacing s = λ_2 − λ_1 is given by:

    s = \sqrt{(x_1 - x_2)^2 + 4x_3^2}.    (3.4)

The probability density function (pdf) of s, denoted p(s), is determined by integrating over
the joint pdf of x1 , x2 , x3 :

    p(s) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dx_1\, dx_2\, dx_3\, \frac{e^{-x_1^2/2}}{\sqrt{2\pi}}\, \frac{e^{-x_2^2/2}}{\sqrt{2\pi}}\, \frac{e^{-x_3^2}}{\sqrt{\pi}}\, \delta\!\left(s - \sqrt{(x_1 - x_2)^2 + 4x_3^2}\right).    (3.5)

A change of variables simplifies the integral:

    x_1 - x_2 = r\cos\theta, \qquad 2x_3 = r\sin\theta, \qquad x_1 + x_2 = \psi.    (3.6)

The Jacobian determinant for this transformation is J = −r/4, and the integral becomes:

    p(s) = \frac{s}{2}\, e^{-s^2/4}.    (3.7)

4 Average Spectral Density of eigenvalues of an N × N Gaussian matrix
Consider a fixed, deterministic matrix H with real eigenvalues (i.e., without randomness). Define a *counting function* n(x), whose integral gives the fraction of eigenvalues in an interval [a, b]:

    \int_a^b n(x')\, dx'.    (4.1)

The function n(x) can be expressed as the normalized sum of delta functions, each centered at an eigenvalue x_i:

    n(x) = \frac{1}{N} \sum_{i=1}^{N} \delta(x - x_i).    (4.2)

Each delta function δ(x − xi ) indicates the presence of an eigenvalue xi at position x.


The behavior of the delta function can be understood through its sifting property:

    \int_I dx\, \delta(x - x_0)\, f(x) = \begin{cases} f(x_0), & \text{if } x_0 \in I, \\ 0, & \text{otherwise.} \end{cases}    (4.3)

This property ensures that the delta function "picks out" the value of the function f(x) at x_0, provided x_0 lies in the interval I.
If H is now a random matrix, the function n(x), which describes the distribution of eigenvalues, becomes a random measure on the real line. In this context, n(x) depends on the specific realization of H and varies from one instance to another. However, averaging n(x) over the ensemble of random eigenvalues {x_1, x_2, . . . , x_N} yields a more meaningful quantity that captures the overall behavior:

    \langle n(x) \rangle = \int \cdots \int dx_1 \ldots dx_N\, \rho(x_1, x_2, \ldots, x_N)\, n(x),    (4.4)

where the averaging is performed with respect to the joint probability density function (jpdf) ρ(x_1, . . . , x_N). Substituting n(x) = \frac{1}{N}\sum_{i=1}^{N}\delta(x - x_i), we find:

    \langle n(x) \rangle = \frac{1}{N} \sum_{i=1}^{N} \int \cdots \int dx_1 \ldots dx_N\, \rho(x_1, \ldots, x_N)\, \delta(x - x_i).    (4.5)

This simplifies to:

⟨n(x)⟩ = ρ(x), (4.6)

where ρ(x) is the marginal density of the jpdf:

    \rho(x) = \int \cdots \int dx_2 \ldots dx_N\, \rho(x, x_2, \ldots, x_N).    (4.7)

The equality ⟨n(x)⟩ = ρ(x) relies on two fundamental properties. First, the delta function
ensures that the integration isolates the contribution from the specific eigenvalue xi in the
jpdf ρ(x1 , . . . , xN ). Second, the symmetry of ρ(x1 , . . . , xN ) under the exchange of eigenvalues
xi ↔ xj guarantees that all eigenvalues contribute equivalently. These properties hold true for
the Gaussian jpdf and remain valid for most random matrix ensembles.
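As a numerical sanity check that the ensemble-averaged ⟨n(x)⟩ behaves as a proper density, one can sample many small Hermitian (GUE-like) matrices, histogram all of their eigenvalues, and verify the normalization (the matrix size and sample count below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 8, 2000
eigs = []
for _ in range(samples):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2           # Hermitian matrix: real eigenvalues
    eigs.extend(np.linalg.eigvalsh(H))

# Ensemble-averaged empirical density <n(x)>, normalized to unit area
hist, edges = np.histogram(eigs, bins=60, density=True)
total = np.sum(hist * np.diff(edges))
print(total)   # integrates to 1, as a density must
```

The histogram over many realizations is the Monte Carlo estimate of ⟨n(x)⟩, which by the argument above equals the one-point marginal ρ(x).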

5 Coulomb Gas Technique


We use a statistical mechanics framework called the Coulomb gas technique to derive
Wigner’s semicircle law. The goal is to compute the equilibrium eigenvalue density for
Gaussian ensembles by considering the eigenvalues as particles in a fluid interacting through
long-range forces.

Starting Point: The Joint Probability Distribution of Eigenvalues


The joint probability density function (jpdf) for the eigenvalues of a Gaussian random matrix is given by:

\rho(x_1, \ldots, x_N) = \frac{1}{Z_{N,\beta}}\, e^{-\frac{1}{2}\sum_{i=1}^{N} x_i^2} \prod_{j<k} |x_j - x_k|^{\beta},

where:
• The quadratic term e^{-\frac{1}{2}\sum_{i=1}^{N} x_i^2} confines the eigenvalues near the origin.

• The Vandermonde factor \prod_{j<k} |x_j - x_k|^{\beta} introduces a repulsive interaction between eigenvalues.

• β = 1, 2, 4 corresponds to the Gaussian Orthogonal, Unitary, and Symplectic Ensembles, respectively.

The partition function Z_{N,\beta}, which normalizes the jpdf, is:

Z_{N,\beta} = \int_{\mathbb{R}^N} \prod_{j=1}^{N} dx_j \; e^{-\frac{1}{2}\sum_{i=1}^{N} x_i^2} \prod_{j<k} |x_j - x_k|^{\beta}.

Rescaling the Eigenvalues for Continuum Analysis

To analyze the eigenvalue distribution as N → ∞, the eigenvalues are rescaled as x_i \to x_i \sqrt{\beta N}. After rescaling, the partition function becomes:

Z_{N,\beta} = C_{N,\beta} \int_{\mathbb{R}^N} \prod_{j=1}^{N} dx_j \; e^{-\beta N^2 V[x]},

where the energy functional V[x] is given by:

V[x] = \frac{1}{2N}\sum_{i=1}^{N} x_i^2 - \frac{1}{2N^2}\sum_{i \neq j} \ln |x_i - x_j|.

Here:
1. The term \frac{1}{2N}\sum_{i=1}^{N} x_i^2 corresponds to a quadratic confining potential that prevents the eigenvalues from drifting too far from the origin.

2. The term -\frac{1}{2N^2}\sum_{i \neq j} \ln |x_i - x_j| introduces a logarithmic repulsion between eigenvalues.

The factor βN² in the exponent indicates a thermodynamic limit: as N → ∞, the system resembles a zero-temperature fluid in which the particles (eigenvalues) reach equilibrium.

Interpreting the Problem as a Coulomb Gas


The eigenvalues {xi } can be viewed as particles in a Coulomb gas confined to a
one-dimensional line. The quadratic term acts as a confining well, while the logarithmic term
represents the repulsion between particles. In this analogy:

• The system’s free energy is F = -\frac{1}{\beta} \ln Z_{N,\beta}.

• The equilibrium eigenvalue density minimizes the total energy functional V [x].
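This minimization can be illustrated with a small numerical relaxation: treat the (rescaled) eigenvalues as particles in the quadratic well with pairwise logarithmic repulsion, and descend the energy V[x] by gradient steps. At equilibrium the particles spread over a finite interval, consistent with the semicircle support obtained below (the particle number, step size, and iteration count here are ad hoc choices):

```python
import numpy as np

N = 100
x = np.linspace(-1.0, 1.0, N)          # initial particle positions (ad hoc)

for _ in range(4000):
    diff = x[:, None] - x[None, :]     # pairwise separations x_i - x_j
    np.fill_diagonal(diff, np.inf)     # exclude the i = j self-term
    # N times the gradient of V[x]: confining force minus log repulsion
    force = x - np.sum(1.0 / diff, axis=1) / N
    x = x - 0.005 * force              # small gradient-descent step

x.sort()
print(x[0], x[-1])   # support edges approach -sqrt(2) and +sqrt(2)
```

The stationarity condition of this descent, x_k = \frac{1}{N}\sum_{j \neq k} (x_k - x_j)^{-1}, is exactly the force balance between confinement and Coulomb repulsion.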

Reformulating in the Continuum Limit


As N → ∞, the eigenvalue distribution is described by a continuum density function n(x), defined as:

n(x) = \frac{1}{N}\sum_{i=1}^{N} \delta(x - x_i),

which satisfies:

\int_{\mathbb{R}} n(x)\, dx = 1 \qquad \text{and} \qquad n(x) \geq 0.

Using n(x), the sums in the energy functional V[x] are replaced by integrals. For example:

\frac{1}{2N}\sum_{i=1}^{N} x_i^2 \;\to\; \frac{1}{2}\int_{\mathbb{R}} x^2\, n(x)\, dx,

\frac{1}{2N^2}\sum_{i \neq j} \ln |x_i - x_j| \;\to\; \frac{1}{2}\int_{\mathbb{R}}\int_{\mathbb{R}} n(x)\, n(x')\, \ln |x - x'|\, dx\, dx'.

Thus, the energy functional in terms of n(x) becomes:


V[n(x)] = \frac{1}{2}\int_{\mathbb{R}} x^2\, n(x)\, dx - \frac{1}{2}\int_{\mathbb{R}}\int_{\mathbb{R}} n(x)\, n(x')\, \ln |x - x'|\, dx\, dx'.

The partition function is now rewritten as a functional integral over densities:

Z_{N,\beta} = C_{N,\beta} \int \mathcal{D}[n(x)]\, e^{-\beta N^2 V[n(x)]}.

Minimizing the Energy Functional


To find the equilibrium density n(x), the energy functional V[n(x)] must be minimized subject to the normalization constraint \int_{\mathbb{R}} n(x)\, dx = 1. Introducing a Lagrange multiplier λ, the constrained functional becomes:

F[n(x)] = V[n(x)] + \lambda \left( \int_{\mathbb{R}} n(x)\, dx - 1 \right).

The equilibrium condition is obtained by setting the functional derivative of F[n(x)] with respect to n(x) to zero:

\frac{\delta F}{\delta n(x)} = 0 \;\Longrightarrow\; \frac{1}{2}x^2 - \int_{\mathbb{R}} n(x')\, \ln |x - x'|\, dx' + \lambda = 0.

Rearranging, the equilibrium density satisfies:


\int_{\mathbb{R}} n(x')\, \ln |x - x'|\, dx' = \frac{1}{2}x^2 + \lambda.

Solving for n(x): Wigner’s Semicircle Law


Differentiating this relation with respect to x turns it into a singular integral equation for n(x); solving it yields the well-known Wigner semicircle law:

n(x) = \frac{1}{\pi}\sqrt{2 - x^2} \qquad \text{for } |x| \leq \sqrt{2},

and n(x) = 0 otherwise. This result shows that the eigenvalues are confined to the interval [-\sqrt{2}, \sqrt{2}], with the density maximal near the origin and vanishing at the edges.
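The semicircle law is easy to verify by direct sampling: generate GOE matrices (β = 1), rescale the eigenvalues by \sqrt{\beta N}, and compare the histogram with n(x) = \frac{1}{\pi}\sqrt{2 - x^2}. A minimal sketch (matrix size, sample count, and binning are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples, beta = 200, 50, 1
eigs = []
for _ in range(samples):
    A = rng.normal(size=(N, N))
    H = (A + A.T) / 2                   # GOE: real symmetric matrix
    eigs.extend(np.linalg.eigvalsh(H) / np.sqrt(beta * N))   # rescaled spectrum

hist, edges = np.histogram(eigs, bins=40, range=(-1.5, 1.5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(2 - centers**2, 0.0, None)) / np.pi
max_dev = np.max(np.abs(hist - semicircle))
print(max_dev)   # small compared with the peak density sqrt(2)/pi ~ 0.45
```

Even at these modest sizes the agreement is already good in the bulk; the largest deviations sit near the spectral edges, where finite-N effects smooth the sharp square-root vanishing.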

6 Matrix Representation of Wilson Fermion

6.1 The Mass Term


We begin with the first term in the Wilson action:

S_{mass} = \frac{1}{2\kappa} \sum_n \psi^{\dagger}(n)\, \psi(n)   (6.1)

We now decompose the spinor ψ(n) using the left and right projection operators P_L and P_R, which are defined as:

P_L = \frac{1}{2}(1 - \gamma_5), \qquad P_R = \frac{1}{2}(1 + \gamma_5)   (6.2)

These projection operators split the spinor ψ(n) into its left-handed and right-handed
components:

ψL (n) = PL ψ(n), ψR (n) = PR ψ(n) (6.3)
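In a chiral representation where γ_5 = diag(1, 1, −1, −1) (one common convention; the specific representation is inessential), the defining properties of these projectors are easy to verify numerically:

```python
import numpy as np

gamma5 = np.diag([1.0, 1.0, -1.0, -1.0])   # chiral representation of gamma_5
I = np.eye(4)
PL = (I - gamma5) / 2                       # left-handed projector
PR = (I + gamma5) / 2                       # right-handed projector

print(np.allclose(PL @ PL, PL),                 # idempotent
      np.allclose(PR @ PR, PR),
      np.allclose(PL + PR, I),                  # complete
      np.allclose(PL @ PR, np.zeros((4, 4))))   # mutually orthogonal
```

These four identities (idempotence, completeness, orthogonality) are exactly the properties used repeatedly in the manipulations below.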

Now we separate the lattice sites into even and odd sublattices, a site n being even or odd according to its parity. Expressing the summation as a sum over even and odd sites gives:

\sum_n \psi^{\dagger}(n)\psi(n) = \sum_{n\ \text{even}} \psi^{\dagger}(n)\psi(n) + \sum_{n\ \text{odd}} \psi^{\dagger}(n)\psi(n)   (6.4)

Multiplying by \frac{1}{2\kappa} and inserting I = P_R + P_L in the summation terms, we get:

S_{mass} = \frac{1}{2\kappa} \left[ \sum_{\text{even}} \psi^{\dagger}(n)(P_L + P_R)\psi(n) + \sum_{\text{odd}} \psi^{\dagger}(n)(P_L + P_R)\psi(n) \right]   (6.5)

Separating all the terms and using the idempotence of the projection operators (P_L^2 = P_L and P_R^2 = P_R), we get the mass term of the action as:

S_{mass} = \frac{1}{2\kappa} \left[ \sum_{\text{even}} \psi^{\dagger}(n) P_L P_L \psi(n) + \psi^{\dagger}(n) P_R P_R \psi(n) + \sum_{\text{odd}} \psi^{\dagger}(n) P_L P_L \psi(n) + \psi^{\dagger}(n) P_R P_R \psi(n) \right]   (6.6)

= \frac{1}{2\kappa} \left[ \sum_{\text{even}} (P_L \psi(n))^{\dagger} (P_L \psi(n)) + (P_R \psi(n))^{\dagger} (P_R \psi(n)) + \sum_{\text{odd}} (P_L \psi(n))^{\dagger} (P_L \psi(n)) + (P_R \psi(n))^{\dagger} (P_R \psi(n)) \right]   (6.7)

Taking ψ^e and ψ^o to be the even-site and odd-site components, respectively, of the spinor field ψ, the action thus becomes:

S_{mass} = m = \frac{1}{2\kappa} \left[ \psi_L^{e\dagger} \psi_L^{e} + \psi_R^{e\dagger} \psi_R^{e} + \psi_L^{o\dagger} \psi_L^{o} + \psi_R^{o\dagger} \psi_R^{o} \right]   (6.9)

Finally, in the (ψ_R^e, ψ_L^e, ψ_R^o, ψ_L^o) basis (rows and columns ordered accordingly), the mass term can be written in the following block-diagonal form:

m = \begin{pmatrix} \frac{1}{2\kappa} & 0 & 0 & 0 \\ 0 & \frac{1}{2\kappa} & 0 & 0 \\ 0 & 0 & \frac{1}{2\kappa} & 0 \\ 0 & 0 & 0 & \frac{1}{2\kappa} \end{pmatrix}   (6.10)

6.2 The Interaction Term

The interaction term of the action is given by:

S_{int} = -\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)(1 - \gamma_\mu) U_\mu(n) \psi(n+\mu) + \psi^{\dagger}(n+\mu)(1 + \gamma_\mu) U_\mu^{\dagger}(n) \psi(n) \right]   (6.11)

We expand the link variables U_\mu up to first order in the SU(2) gauge fields A_\mu as U_\mu = 1 + igaA_\mu (and U_\mu^{\dagger} = 1 - igaA_\mu^{\dagger}). This prescription supports our assumed block structure of the Dirac operator, but it is strictly valid only in the perturbative regime where g is small, whereas we are presently interested in the ergodic regime where g is rather large. The expansion of U_\mu is nevertheless justified because we wish to study only general spectral properties, which are the same in both regimes.
So, we may write the interaction term as:

S_{int} = -\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)(1 - \gamma_\mu)(1 + igaA_\mu) \psi(n+\mu) + \psi^{\dagger}(n+\mu)(1 + \gamma_\mu)(1 - igaA_\mu^{\dagger}) \psi(n) \right]

= -\frac{1}{2} \sum_{n,\mu} \Big[ \psi^{\dagger}(n)\psi(n+\mu) + \psi^{\dagger}(n+\mu)\psi(n)   (a)
\quad - \psi^{\dagger}(n)\gamma_\mu\psi(n+\mu) + \psi^{\dagger}(n+\mu)\gamma_\mu\psi(n)   (b)
\quad + \psi^{\dagger}(n)\,igaA_\mu\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,igaA_\mu^{\dagger}\,\psi(n)   (c)
\quad - \psi^{\dagger}(n)\,iga\gamma_\mu A_\mu\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,iga\gamma_\mu A_\mu^{\dagger}\,\psi(n) \Big]   (d)

Consider (a). We insert I = P_R + P_L to get:

-\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)(P_R + P_L)\psi(n+\mu) + \psi^{\dagger}(n+\mu)(P_R + P_L)\psi(n) \right]   (6.12)

= -\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)P_R P_R \psi(n+\mu) + \psi^{\dagger}(n)P_L P_L \psi(n+\mu) + \psi^{\dagger}(n+\mu)P_R P_R \psi(n) + \psi^{\dagger}(n+\mu)P_L P_L \psi(n) \right]   (6.13)

Again, we separate even and odd lattice sites. Taking n to be even (so that n+\mu is odd), we get:

(a) \;\to\; -\frac{1}{2} \left[ \psi_R^{e\dagger}\psi_R^{o} + \psi_L^{e\dagger}\psi_L^{o} + \psi_R^{o\dagger}\psi_R^{e} + \psi_L^{o\dagger}\psi_L^{e} \right]   (6.15)

Now consider (b). Following the same procedure, we get:

-\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n)\gamma_\mu\psi(n+\mu) + \psi^{\dagger}(n+\mu)\gamma_\mu\psi(n) \right]   (6.16)

= -\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n)\gamma_\mu(P_R + P_L)\psi(n+\mu) + \psi^{\dagger}(n+\mu)\gamma_\mu(P_R + P_L)\psi(n) \right]   (6.17)

= -\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n)\gamma_\mu P_R P_R \psi(n+\mu) - \psi^{\dagger}(n)\gamma_\mu P_L P_L \psi(n+\mu) + \psi^{\dagger}(n+\mu)\gamma_\mu P_R P_R \psi(n) + \psi^{\dagger}(n+\mu)\gamma_\mu P_L P_L \psi(n) \right]   (6.18)

= -\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n) P_L \gamma_\mu P_R \psi(n+\mu) - \psi^{\dagger}(n) P_R \gamma_\mu P_L \psi(n+\mu) + \psi^{\dagger}(n+\mu) P_L \gamma_\mu P_R \psi(n) + \psi^{\dagger}(n+\mu) P_R \gamma_\mu P_L \psi(n) \right]   (6.20)

Note that in the above calculation we have used the property \{\gamma_5, \gamma_\mu\} = 0, which implies \gamma_\mu P_R = P_L \gamma_\mu and \gamma_\mu P_L = P_R \gamma_\mu. The above expression gives:

(b) \;\to\; -\frac{1}{2} \left[ -\psi_L^{e\dagger}\gamma_{\dot\mu}\psi_R^{o} - \psi_R^{e\dagger}\gamma_{\dot\mu}\psi_L^{o} + \psi_L^{o\dagger}\gamma_{\dot\mu}\psi_R^{e} + \psi_R^{o\dagger}\gamma_{\dot\mu}\psi_L^{e} \right]   (6.23)

Now taking (c), we get:

-\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)\,igaA_\mu\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,igaA_\mu^{\dagger}\,\psi(n) \right]   (6.24)

= -\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)\,igaA_\mu(P_R + P_L)\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,igaA_\mu^{\dagger}(P_R + P_L)\,\psi(n) \right]   (6.25)

= -\frac{1}{2} \sum_{n,\mu} \left[ \psi^{\dagger}(n)\,igaA_\mu P_R P_R\,\psi(n+\mu) + \psi^{\dagger}(n)\,igaA_\mu P_L P_L\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,igaA_\mu^{\dagger} P_R P_R\,\psi(n) - \psi^{\dagger}(n+\mu)\,igaA_\mu^{\dagger} P_L P_L\,\psi(n) \right]   (6.26)

(c) \;\to\; -\frac{iga}{2} \left[ \psi_R^{e\dagger} A_{\dot\mu} \psi_R^{o} + \psi_L^{e\dagger} A_{\dot\mu} \psi_L^{o} - \psi_R^{o\dagger} A_{\dot\mu}^{\dagger} \psi_R^{e} - \psi_L^{o\dagger} A_{\dot\mu}^{\dagger} \psi_L^{e} \right]   (6.28)

Finally, for (d), we get:

-\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n)\,iga\gamma_\mu A_\mu\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,iga\gamma_\mu A_\mu^{\dagger}\,\psi(n) \right]   (6.29)

= -\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n)\,iga\gamma_\mu (P_R + P_L) A_\mu\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,iga\gamma_\mu (P_R + P_L) A_\mu^{\dagger}\,\psi(n) \right]   (6.30)

= -\frac{1}{2} \sum_{n,\mu} \left[ -\psi^{\dagger}(n)\,iga\gamma_\mu P_R P_R A_\mu\,\psi(n+\mu) - \psi^{\dagger}(n)\,iga\gamma_\mu P_L P_L A_\mu\,\psi(n+\mu) - \psi^{\dagger}(n+\mu)\,iga\gamma_\mu P_R P_R A_\mu^{\dagger}\,\psi(n) - \psi^{\dagger}(n+\mu)\,iga\gamma_\mu P_L P_L A_\mu^{\dagger}\,\psi(n) \right]   (6.31)

(d) \;\to\; -\frac{iga}{2} \left[ -\psi_L^{e\dagger} \slashed{A}\, \psi_R^{o} - \psi_R^{e\dagger} \slashed{A}\, \psi_L^{o} - \psi_L^{o\dagger} \slashed{A}^{\dagger} \psi_R^{e} - \psi_R^{o\dagger} \slashed{A}^{\dagger} \psi_L^{e} \right]   (6.33)

Finally, combining (a), (b), (c), and (d), the interaction term is given by:

S_{int} = \slashed{D} = -\frac{1}{2} \left[ \psi_R^{e\dagger}\psi_R^{o} + \psi_L^{e\dagger}\psi_L^{o} + \psi_R^{o\dagger}\psi_R^{e} + \psi_L^{o\dagger}\psi_L^{e} \right]
 - \frac{1}{2} \left[ -\psi_L^{e\dagger}\gamma_{\dot\mu}\psi_R^{o} - \psi_R^{e\dagger}\gamma_{\dot\mu}\psi_L^{o} + \psi_L^{o\dagger}\gamma_{\dot\mu}\psi_R^{e} + \psi_R^{o\dagger}\gamma_{\dot\mu}\psi_L^{e} \right]
 - \frac{iga}{2} \left[ \psi_R^{e\dagger} A_{\dot\mu} \psi_R^{o} + \psi_L^{e\dagger} A_{\dot\mu} \psi_L^{o} - \psi_R^{o\dagger} A_{\dot\mu}^{\dagger} \psi_R^{e} - \psi_L^{o\dagger} A_{\dot\mu}^{\dagger} \psi_L^{e} \right]
 - \frac{iga}{2} \left[ -\psi_L^{e\dagger} \slashed{A}\, \psi_R^{o} - \psi_R^{e\dagger} \slashed{A}\, \psi_L^{o} - \psi_L^{o\dagger} \slashed{A}^{\dagger} \psi_R^{e} - \psi_R^{o\dagger} \slashed{A}^{\dagger} \psi_L^{e} \right]   (6.34)

The dot on \dot\mu is a reminder that µ is not a free index but is contracted with the index hidden inside the fermionic fields.
In the (ψ_R^e, ψ_L^e, ψ_R^o, ψ_L^o) basis, the interaction term can be written in the following block matrix form:

\slashed{D} = \begin{pmatrix} 0 & 0 & I - igaA_{\dot\mu}^{\dagger} & \gamma_{\dot\mu} - iga\slashed{A}^{\dagger} \\ 0 & 0 & \gamma_{\dot\mu} - iga\slashed{A}^{\dagger} & I - igaA_{\dot\mu}^{\dagger} \\ I - igaA_{\dot\mu} & -\gamma_{\dot\mu} - iga\slashed{A} & 0 & 0 \\ -\gamma_{\dot\mu} - iga\slashed{A} & I - igaA_{\dot\mu} & 0 & 0 \end{pmatrix}   (6.35)

Rescaling the fields by 1/\sqrt{-1/2} (which absorbs the overall factor of -1/2) and combining the interaction and mass terms in the (ψ_R^e, ψ_L^e, ψ_R^o, ψ_L^o) basis, we can write:

\slashed{D} + m = \begin{pmatrix} \frac{1}{2\kappa} & 0 & I - igaA_{\dot\mu}^{\dagger} & \gamma_{\dot\mu} - iga\slashed{A}^{\dagger} \\ 0 & \frac{1}{2\kappa} & \gamma_{\dot\mu} - iga\slashed{A}^{\dagger} & I - igaA_{\dot\mu}^{\dagger} \\ I - igaA_{\dot\mu} & -\gamma_{\dot\mu} - iga\slashed{A} & \frac{1}{2\kappa} & 0 \\ -\gamma_{\dot\mu} - iga\slashed{A} & I - igaA_{\dot\mu} & 0 & \frac{1}{2\kappa} \end{pmatrix}   (6.36)
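To see how such an even/odd block structure constrains a spectrum, here is a toy random matrix sketch: the structured off-diagonal blocks are replaced by a single generic complex Gaussian block W (this is an illustration, not the lattice operator itself; the block size n and the diagonal value m, standing in for 1/2κ, are arbitrary). For this Hermitian toy model the eigenvalues are exactly m ± the singular values of W:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 0.5     # block size and diagonal "mass" (arbitrary stand-ins)
W = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)

# Hermitian toy model with the even/odd block structure of D + m
H = np.block([[m * np.eye(n), W],
              [W.conj().T,    m * np.eye(n)]])

eigs = np.sort(np.linalg.eigvalsh(H))
s = np.linalg.svd(W, compute_uv=False)            # singular values of W
predicted = np.sort(np.concatenate([m - s, m + s]))
print(np.allclose(eigs, predicted))               # spectrum is m +/- sing. values
```

The pairing m ± s_i follows because the off-diagonal part X = [[0, W], [W†, 0]] anticommutes with diag(I, −I), so its spectrum is symmetric about zero; the diagonal mass block then rigidly shifts it by m.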
