Project 4 - Computational Physics
The total energy of the system is given by Equation 1:

$$E(\vec{s}) = -J \sum_{\langle kl \rangle} s_k s_l \qquad (1)$$

An approximation of the probability distribution of the energy per spin (ε) in our system can be found by approaching the probability function for a given state $\vec{s}$ through a numerical method.

When the values of a variable are discrete, as in our case, and are given by a probability function p(x), the expected value of a function A(x) is defined as:

$$\langle A(x) \rangle = \sum_{\text{all possible } x} A(x)\, p(x).$$

Throughout our study we will mostly use the energy per spin (ε), described in Equation 3, and the magnetization per spin (m), defined in Equation 4, to ease the comparison of results for different lattice sizes:

$$\epsilon(\vec{s}) = \frac{E(\vec{s})}{N} \qquad (3)$$

The probability of finding the system in a given state $\vec{s}$ at temperature T is given by the Boltzmann distribution:

$$p(\vec{s}; T) = \frac{1}{Z} e^{-\beta E(\vec{s})} \qquad (7)$$

where the normalization constant Z is the partition function (see Equation 9) and β is the "inverse temperature", whose expression is given in Equation 8:

$$\beta = \frac{1}{k_B T} \qquad (8)$$

where k_B is the Boltzmann constant, k_B = 1.38 · 10⁻²³ J/K. The partition function Z, which describes the statistical properties of a system in thermodynamic equilibrium, is given by:

$$Z = \sum_{\text{all possible } \vec{s}} e^{-\beta E(\vec{s})} \qquad (9)$$

Close to the critical temperature the relevant quantities follow the scaling relations

$$\langle |m| \rangle \propto |T - T_c(L=\infty)|^{\beta} \qquad (11)$$

$$C_V \propto |T - T_c(L=\infty)|^{-\alpha} \qquad (12)$$

$$\chi \propto |T - T_c(L=\infty)|^{-\gamma} \qquad (13)$$

$$\xi \propto |T - T_c(L=\infty)|^{-\nu} \qquad (14)$$

with the critical exponents β = 1/8, α = 0, γ = 7/4 and ν = 1. Clearly C_V, χ and ξ (the correlation length, which for a finite system is ξ = L) diverge when T = T_c(L). With this behaviour it is easy to find the scaling relation

$$T_c(L) - T_c(L=\infty) = a L^{-1} \qquad (15)$$

which can be used in a linear regression to find our approximated result for T_c(L = ∞).
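As a concrete illustration of how Equation 15 can be used, the following Python sketch fits T_c(L) against L⁻¹ with a straight line; the arrays `L_values` and `Tc_L` are placeholders for the lattice sizes and the corresponding critical-temperature estimates (for instance the positions of the C_V or χ peaks), not actual simulation results.

```python
import numpy as np

# Placeholder input, not simulation results: lattice sizes and the
# corresponding Tc(L) estimates in units of J/kB.
L_values = np.array([40, 60, 80, 100])
Tc_L = np.array([2.29, 2.28, 2.28, 2.27])

# Equation 15: Tc(L) = Tc(inf) + a * L^{-1}, so a linear fit of Tc(L)
# against 1/L gives Tc(inf) as the intercept and a as the slope.
x = 1.0 / L_values
a, Tc_inf = np.polyfit(x, Tc_L, deg=1)

print(f"slope a        = {a:.4f}")
print(f"Tc(L=infinity) = {Tc_inf:.4f} J/kB")
```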
Algorithm 1: Markov chain Monte Carlo (MCMC)

    s' ← −s_i                   ▷ Flip a spin of the lattice
    r ← U(0, 1)                 ▷ Draw a random number r for the acceptance step
    A ← min{1, p(s')/p(s)}      ▷ Compute the acceptance probability A
    s_{i+1} ← s'   if r ≤ A     ▷ Accept s'
    s_{i+1} ← s_i  if r > A     ▷ Reject s'

As for the results obtained with the Markov chain Monte Carlo algorithm:

• ⟨ε⟩ = −1.992 J
• ⟨|m|⟩ = 0.9973
• C_V = 0.06391 k_B
• χ = 0.7989 J⁻¹
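The production code of the project is not shown here; as a minimal sketch of Algorithm 1 applied to the Ising model, the Python fragment below performs single-spin-flip Metropolis updates on an L×L lattice with periodic boundary conditions. The function and variable names are illustrative, and one "cycle" is taken here to be L² attempted flips (the report's own definition of a cycle may differ).

```python
import numpy as np

def metropolis_cycle(lattice, beta, J=1.0, rng=None):
    """One Monte Carlo cycle: L*L attempted single-spin flips (Algorithm 1)."""
    rng = rng or np.random.default_rng()
    L = lattice.shape[0]
    for _ in range(L * L):
        # Pick a random site and propose flipping its spin (s' = -s_i).
        i, j = rng.integers(0, L, size=2)
        # Energy change of the proposed flip, with periodic boundary conditions.
        neighbours = (lattice[(i + 1) % L, j] + lattice[(i - 1) % L, j]
                      + lattice[i, (j + 1) % L] + lattice[i, (j - 1) % L])
        dE = 2.0 * J * lattice[i, j] * neighbours
        # Acceptance A = min{1, p(s')/p(s)} = min{1, exp(-beta * dE)};
        # the flip is accepted when a uniform r in [0, 1) satisfies r <= A.
        if rng.random() <= np.exp(-beta * dE):
            lattice[i, j] *= -1
    return lattice

# Ordered start (all spins up) at T = 1 J/kB, i.e. beta = 1 in units J = kB = 1.
spins = np.ones((20, 20), dtype=int)
rng = np.random.default_rng(seed=1)
for cycle in range(1000):
    metropolis_cycle(spins, beta=1.0, rng=rng)
print("magnetization per spin after 1000 cycles:", abs(spins.mean()))
```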
… unordered state.

The samples generated during the burn-in time correspond to regions that are far away from the mean value of the distribution, i.e. regions that actually have a very low probability. In order to give the Markov chain time to reach its equilibrium distribution, it needs to start at a point close to that distribution, so the samples generated during the burn-in time have to be discarded.

FIG. 1. Evolution of ⟨ε⟩ with the number of Monte Carlo cycles, starting from all the spins pointing the same way, for a temperature of T = 1 J/k_B.
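A minimal sketch of how the burn-in samples can be discarded when estimating ⟨ε⟩, assuming the per-cycle energies per spin have been stored in an array; the data below is a toy stand-in and the burn-in length is illustrative, chosen by inspecting plots like FIG. 1.

```python
import numpy as np

# Toy stand-in for the recorded chain of energy-per-spin samples, one value per
# Monte Carlo cycle (illustrative only; in practice this comes from the simulation).
rng = np.random.default_rng(seed=0)
n_cycles = 500_000
samples = (-2.0 + 0.5 * np.exp(-np.arange(n_cycles) / 5_000)
           + 0.01 * rng.standard_normal(n_cycles))

n_burn = 50_000                  # illustrative burn-in length
equilibrated = samples[n_burn:]  # keep only samples drawn after equilibration

print("mean over the full chain      :", samples.mean())
print("mean after discarding burn-in :", equilibrated.mean())
```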
FIG. 2. Evolution of ⟨|m|⟩ with the number of Monte Carlo cycles, starting from all the spins pointing the same way, for a temperature of T = 1 J/k_B.

FIG. 4. Evolution of ⟨|m|⟩ with the number of Monte Carlo cycles, starting from all the spins pointing the same way, for a temperature of T = 2.4 J/k_B.
(Figure panel: evolution of ⟨ε⟩ with the number of Monte Carlo cycles for T = 2.4 J/k_B, ordered start.)

FIG. 6. Evolution of ⟨|m|⟩ with the number of Monte Carlo cycles, starting from a random initial spin configuration, for a temperature of T = 1 J/k_B.

FIG. 8. Evolution of ⟨|m|⟩ with the number of Monte Carlo cycles, starting from a random initial spin configuration, for a temperature of T = 2.4 J/k_B.
FIG. 7. Evolution of ⟨ε⟩ with the number of Monte Carlo cycles, starting from a random initial spin configuration, for a temperature of T = 2.4 J/k_B.

FIG. 9. Normalized histogram of ⟨ε⟩ for T = 1.0 J/k_B.
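Figures 9 and 10 are normalized histograms of the sampled energy per spin. A minimal sketch of how such a histogram can be produced from the stored samples; the array below is a toy stand-in, and the labels mirror those of the figures.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy stand-in for the equilibrated chain of energy-per-spin samples (illustrative only).
rng = np.random.default_rng(seed=0)
samples = -2.0 + 0.01 * rng.standard_normal(100_000)

# density=True normalizes the histogram so the bars integrate to 1,
# approximating the probability density of epsilon.
plt.hist(samples, bins=100, density=True)
plt.xlabel(r"$\epsilon$ / J")
plt.ylabel(r"Probability density / $J^{-1}$")
plt.show()
```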
FIG. 10. Normalized histogram of ⟨ε⟩ for T = 2.4 J/k_B.

For the 2×2 lattice the partition function can be evaluated explicitly:

$$Z = \sum_{\text{all possible } \vec{s}} e^{-\beta E(\vec{s})} = e^{8\beta J} + 4e^{0} + 4e^{0} + 2e^{-8\beta J} + 4e^{0} + e^{-8\beta J} = 3e^{-8\beta J} + 12 + e^{8\beta J}$$
FIG. 11. Energy per spin (ε) for different temperatures and lattice sizes (L = 40, 60, 80, 100).

FIG. 12. Magnetization per spin (|m|) for different temperatures and lattice sizes.

FIG. 13. Heat capacity (C_V) for different temperatures and lattice sizes.

FIG. 14. Susceptibility (χ) for different temperatures and lattice sizes.

FIG. 15. Linear regression of T_c(L) against L⁻¹ for the heat-capacity (C_V) results (R² = 0.53).

FIG. 16. Linear regression of T_c(L) against L⁻¹ for the susceptibility (χ) results (R² = 0.97).
Using this partition function, the analytical expectation values for the 2×2 lattice follow:

$$\langle E \rangle = \sum_{\text{all possible } \vec{s}} E(\vec{s})\, p(\vec{s}) = \frac{1}{Z}\left(-8Je^{8\beta J} + 2 \cdot 8Je^{-8\beta J} + 8Je^{-8\beta J}\right) = \frac{8J}{Z}\left(3e^{-8\beta J} - e^{8\beta J}\right)$$

Then the energy per spin is

$$\langle \epsilon \rangle = \frac{\langle E \rangle}{N} = \frac{2J}{Z}\left(3e^{-8\beta J} - e^{8\beta J}\right)$$

$$\langle E^2 \rangle = \sum_{\text{all possible } \vec{s}} E^2(\vec{s})\, p(\vec{s}) = \frac{1}{Z}\left(64J^2e^{8\beta J} + 64J^2e^{-8\beta J} + 2 \cdot 64J^2e^{-8\beta J}\right) = \frac{64J^2}{Z}\left(e^{8\beta J} + 3e^{-8\beta J}\right)$$

$$\langle |M| \rangle = \sum_{\text{all } \vec{s}} |M(\vec{s})|\, p(\vec{s}) = \frac{1}{Z}\left(4e^{8\beta J} + 2 \cdot 4 + 0 + 2 \cdot 4 + 4e^{-8\beta J}\right) = \frac{4}{Z}\left(4 + e^{8\beta J} + e^{-8\beta J}\right)$$

$$\langle M^2 \rangle = \sum_{\text{all } \vec{s}} M^2(\vec{s})\, p(\vec{s}) = \frac{16}{Z}\left(2 + e^{8\beta J} + e^{-8\beta J}\right)$$

$$C_V = \frac{1}{k_B T^2 N}\left(\langle E^2 \rangle - \langle E \rangle^2\right) = \frac{64J^2}{N Z k_B T^2}\left[e^{8\beta J} + 3e^{-8\beta J} - \frac{\left(3e^{-8\beta J} - e^{8\beta J}\right)^2}{Z}\right] = \frac{192J^2}{Z^2 k_B T^2}\left(1 + e^{8\beta J} + 3e^{-8\beta J}\right)$$

$$\chi = \frac{1}{k_B T N}\left(\langle M^2 \rangle - \langle |M| \rangle^2\right) = \frac{16}{Z^2 k_B T N}\left[Z\left(e^{8\beta J} + 2 + e^{-8\beta J}\right) - \left(4 + e^{8\beta J} + e^{-8\beta J}\right)^2\right] = \frac{8}{Z^2 k_B T}\left[e^{-16\beta J} + 3e^{8\beta J} + 5\left(1 + e^{-8\beta J}\right)\right]$$
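The same fluctuation formulas for C_V and χ can also be evaluated directly from Monte Carlo samples. A minimal sketch, assuming arrays of post-burn-in total energies and magnetizations are available (the function name and the toy input below are illustrative, not taken from the report's code):

```python
import numpy as np

def heat_capacity_and_susceptibility(E, M, T, N, kB=1.0):
    """Estimate C_V and chi from sampled total energies E and magnetizations M.

    Uses the fluctuation relations
        C_V = (<E^2> - <E>^2) / (kB * T^2 * N)
        chi = (<M^2> - <|M|>^2) / (kB * T * N)
    with N the number of spins; E and M are arrays of post-burn-in samples.
    """
    E = np.asarray(E, dtype=float)
    M = np.asarray(M, dtype=float)
    cv = (np.mean(E**2) - np.mean(E)**2) / (kB * T**2 * N)
    chi = (np.mean(M**2) - np.mean(np.abs(M))**2) / (kB * T * N)
    return cv, chi

# Illustrative toy samples (not simulation output).
rng = np.random.default_rng(seed=0)
E_samples = -7.9 + 0.2 * rng.standard_normal(10_000)
M_samples = 3.98 + 0.05 * rng.standard_normal(10_000)
cv, chi = heat_capacity_and_susceptibility(E_samples, M_samples, T=1.0, N=4)
print(f"C_V = {cv:.4f} kB, chi = {chi:.4f} 1/J")
```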