Computational Physics III: Report 2
Fourier transforms and analysis
Due on 3rd May, 2018
Tristan Henchoz
Contents
Problem 1
    (1) Simple LU decomposition
    (2) Comparison between Matlab's implementation and ours
    (3) LU decomposition and pivoting
Problem 2
    (1) Wheatstone bridge's equations
    (2) Solution for I
    (3) Solution for given resistances
Problem 3
Problem 4
Problem 5
    (1) power implementation
    (2) ipower implementation
    (3) Rayleigh quotient implementation
Problem 6
Problem 7
    (1) classic Jacobi method
    (2) cyclic Jacobi method
Problem 8
    (1) matrix representations
    (2) bound solutions
    (3) scattering solutions
Problem 1
(1) Simple LU decomposition
The goal here is to solve the system of equations (1) using a simple LU decomposition, computed as shown in listing 1 and used by default by listing 2, after rewriting the system as the matrix equation $A\vec{x} = \vec{b}$.
\[
\begin{aligned}
x_1 + 2x_2 + 3x_3 + 4x_4 &= 30 \\
-x_1 + 2x_2 - 3x_3 + 4x_4 &= 10 \\
x_2 - x_3 + x_4 &= 3 \\
x_1 + x_2 + x_3 + x_4 &= 10
\end{aligned}
\qquad (1)
\]
The solution vector is $\vec{x} = (1, 2, 3, 4)^T$, which coincides exactly with the solution obtained using Matlab's matrix operations.
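As a quick check, a minimal sketch of this comparison (assuming solve_ls from listing 2 is on the Matlab path) could be:

% System (1) in matrix form
A = [ 1  2  3  4;
     -1  2 -3  4;
      0  1 -1  1;
      1  1  1  1];
b = [30; 10; 3; 10];

x_ours   = solve_ls(A, b);   % our LU-based solver (listing 2)
x_matlab = A \ b;            % Matlab's built-in backslash solver

disp(max(abs(x_ours - x_matlab)))   % expected to be of the order of machine precision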
Listing 1: A Matlab function that computes the LU decomposition with pivoting.
function [ L, U, P ] = lu_decomposition( A )
% lu_decomposition computes the LU decomposition with pivoting
% for a square matrix A
% Return lower L, upper U and permutation matrix P such that P*A = L*U

    % Initializing variables
    N = length(A);
    L = eye(N,N);
    U = A;
    P = eye(N,N);

    for k = 1:N-1
        % Checking for the max value on the column
        [~, r] = max(abs(U(k:N,k)));
        r = r + k - 1;

        if r ~= k
            % if not the current row, permutation
            temp = U(k,:);
            U(k,:) = U(r,:);
            U(r,:) = temp;

            temp = P(k,:);
            P(k,:) = P(r,:);
            P(r,:) = temp;

            temp = L(k,1:k-1);
            L(k,1:k-1) = L(r,1:k-1);
            L(r,1:k-1) = temp;
        end

        for i = k+1:N
            L(i,k) = U(i,k)/U(k,k);
            for j = k:N
                U(i,j) = U(i,j) - L(i,k) * U(k,j);
            end
        end
    end
end
Listing 2: A Matlab function that solves a matrix equation using different LU decompositions.
function [ x ] = solve_ls( A, b, lu_choice )
% solve_ls solves the system of linear equations A*x = b, for a square matrix
% A and a column vector b, using the LU decomposition and forward/backward
% substitution

    if (nargin == 2)
        lu_choice = 0;
    end

    N = length(A);

    switch lu_choice
        case 1
            [L, U] = lu_decomposition_withoutP(A);
        case 2
            [L, U, P] = lu(A);
            % permuting b because L*U = P*A
            b = P*b;
        otherwise
            [L, U, P] = lu_decomposition(A);
            % permuting b because L*U = P*A
            b = P*b;
    end

    % solving L*y = b by forward substitution
    % (usually L(i,i) = 1, but we divide by it to stay general)
    y = b;
    y(1) = 1 / L(1,1) * b(1);
    for i = 2:N
        y(i) = 1 / L(i,i) * (b(i) - sum(L(i,1:i-1) * y(1:i-1)));
    end

    % solving U*x = y by backward substitution
    x = y;
    x(N) = 1 / U(N,N) * y(N);
    for j = 1:N-1
        i = N - j;
        x(i) = 1 / U(i,i) * (y(i) - sum(U(i,i+1:N) * x(i+1:N)));
    end
end
(2) Comparison between Matlab’s implementation and ours
As can be seen in figure 1, the difference between our matrices L and U and Matlab's increases with the matrix size, as expected. This difference stays close to the machine precision, which is good. The difference between our computed matrix L · U and P · A is close to the one obtained for the same matrix using Matlab's implementation, but, as one can see in figure 2, our algorithm takes much longer to compute.
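A minimal sketch of the comparison performed here (the test sizes and the use of a single random matrix per size are our assumptions) could be:

% Compare our LU decomposition against Matlab's for growing matrix sizes
sizes = [10 20 50 100 200];
for n = sizes
    A = rand(n);

    tic; [L1, U1, P1] = lu_decomposition(A); t_ours = toc;
    tic; [L2, U2, P2] = lu(A);               t_matlab = toc;

    err_ours   = norm(L1*U1 - P1*A);   % reconstruction error of our code
    err_matlab = norm(L2*U2 - P2*A);   % reconstruction error of Matlab's lu
    fprintf('n=%4d  err(ours)=%.2e  err(Matlab)=%.2e  t(ours)=%.3fs  t(Matlab)=%.3fs\n', ...
            n, err_ours, err_matlab, t_ours, t_matlab);
end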
Figure 1: Difference between Matlab's matrices L and U and ours, and between the reconstructed matrix L · U and P · A
Figure 2: Time used by the LU algorithms for different matrix sizes
(3) LU decomposition and pivoting
In order to properly study the impact of a pivot in the LU decomposition algorithm, a case study
is made on the following matrix :
\[
M = \begin{pmatrix} 1 & 2 & 0 \\ 2 & 4 & 8 \\ 3 & -1 & 2 \end{pmatrix}
\]
In this case, the algorithm was applied directly to M, without pivoting (i.e. not on P · M). The obtained result, shown in equation (2), has elements equal to infinity or NaN (not a number). This is due to a division by zero within the algorithm. It is a general problem which occurs when the algorithm is applied to an arbitrary square matrix, and it causes nearly all tests from the script test_lu.m to fail.
\[
L = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & -\infty & 1 \end{pmatrix}
\qquad
U = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 8 \\ 0 & \text{NaN} & \infty \end{pmatrix}
\qquad (2)
\]
In order to prevent this from happening, it is necessary to introduce pivoting by implementing a permutation matrix within the algorithm, following the method presented in class. This code, which is displayed in listing 1, gives us the three matrices of equation (3). In addition, its application to random matrices works (all tests from test_lu.m passed).
\[
L = \begin{pmatrix} 1 & 0 & 0 \\ 0.6667 & 1 & 0 \\ 0.3333 & 0.5 & 1 \end{pmatrix}
\qquad
U = \begin{pmatrix} 3 & -1 & 2 \\ 0 & 4.6667 & 6.6667 \\ 0 & 0 & -4 \end{pmatrix}
\qquad
P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}
\qquad (3)
\]
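A short check of this behaviour (assuming the non-pivoting variant is available as lu_decomposition_withoutP, the name used in listing 2) could be:

M = [1 2 0; 2 4 8; 3 -1 2];

% Without pivoting: the second pivot is zero, producing Inf/NaN entries
[L0, U0] = lu_decomposition_withoutP(M);
disp(any(~isfinite([L0(:); U0(:)])))   % prints 1 (true)

% With pivoting (listing 1): P*M = L*U holds to machine precision
[L, U, P] = lu_decomposition(M);
disp(norm(L*U - P*M))                  % expected to be ~1e-16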
Problem 2
(1) Wheatstone bridge’s equations
In this exercise, the goal is to determine, using the same method, the currents in each branch of an electric circuit, considering Ohm's law $U = RI$ and Kirchhoff's current law $\sum_{\text{node}} \vec{I} = \vec{0}$. Applied to the circuit in figure 3, and taking $U_1 = V - V_A$, $U_2 = V - V_B$, $U_3 = V_A$ and $U_4 = V_B$, these give us five equations, two for the currents and three for the voltages, as in equation (4).
Figure 3: Wheatstone bridge
As we have 5 unknown current values (one per branch), we need to define a system of five linear equations (in order to have a square 5 × 5 matrix).
\[
\begin{aligned}
I_1 &= I + I_3 \\
I + I_2 &= I_4 \\
R_1 I_1 + R I &= R_2 I_2 \\
R I + R_4 I_4 &= R_3 I_3 \\
R_1 I_1 + R_3 I_3 &= V
\end{aligned}
\qquad (4)
\]
(2) Solution for I
Putting equation (4) into matrix form and solving using Matlab gives us equation (5) for the unknown current I.
\[
I = \frac{-V (R_1 R_4 - R_2 R_3)}{R R_1 R_2 + R R_1 R_4 + R R_2 R_3 + R_1 R_2 R_3 + R R_3 R_4 + R_1 R_2 R_4 + R_1 R_3 R_4 + R_2 R_3 R_4}
\qquad (5)
\]
This equation gives us the condition of equation (6) for $I = 0$, which can be rewritten as a condition on $R_1$: $R_1 = R_2 \cdot R_3 / R_4$. Another, rather intuitive, solution is letting $R$ go to infinity.
\[
R_1 \cdot R_4 = R_2 \cdot R_3
\qquad (6)
\]
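Equation (5) can be reproduced symbolically; a minimal sketch using Matlab's Symbolic Math Toolbox (the toolbox and the variable names are our assumptions) could be:

syms I I1 I2 I3 I4 R R1 R2 R3 R4 V

% Kirchhoff/Ohm equations from equation (4)
eqs = [ I1 == I + I3, ...
        I + I2 == I4, ...
        R1*I1 + R*I == R2*I2, ...
        R*I + R4*I4 == R3*I3, ...
        R1*I1 + R3*I3 == V ];

sol = solve(eqs, [I I1 I2 I3 I4]);
simplify(sol.I)   % yields the expression of equation (5)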
(3) Solution for given resistances
Assuming $R_1 = 20$ kΩ, $R_2 = 40$ kΩ, $R_3 = 15$ kΩ, $R_4 = 30$ kΩ, $R = 10$ kΩ and $V = 10$ V, we solved the system of equations (4) with three different methods: Matlab's built-in lu function and our two implementations (with and without pivoting). This gives us the solutions S in equation (7). As one can see, our implementation without pivoting did not work, as explained in the last part of Problem 1. Another interesting fact is that, according to Matlab, there is no difference at all between our implementation with pivoting and Matlab's own. It could be interesting to solve this problem again with seven equations, adding the variables $V_A$ and $V_B$ to the problem; it could change the result obtained.
\[
S_{\text{Matlab}} = \begin{pmatrix} 0.2857 \\ 0.1429 \\ 0.2857 \\ 0.1429 \\ 0 \end{pmatrix}
\qquad
S_{\text{without pivot}} = \begin{pmatrix} \text{NaN} \\ \text{NaN} \\ \text{NaN} \\ \text{NaN} \\ \text{NaN} \end{pmatrix}
\qquad
S_{\text{with pivot}} = \begin{pmatrix} 0.2857 \\ 0.1429 \\ 0.2857 \\ 0.1429 \\ 0 \end{pmatrix}
\qquad (7)
\]
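A numerical sketch of this computation (the ordering of the unknowns as $(I_1, I_2, I_3, I_4, I)$ is our assumption) could look like:

R1 = 20e3; R2 = 40e3; R3 = 15e3; R4 = 30e3; R = 10e3; V = 10;

% Columns ordered as (I1, I2, I3, I4, I); rows follow equation (4)
A = [ 1    0   -1    0   -1;
      0    1    0   -1    1;
      R1  -R2   0    0    R;
      0    0   -R3   R4   R;
      R1   0    R3   0    0 ];
b = [0; 0; 0; 0; V];

S_matlab     = solve_ls(A, b, 2);   % Matlab's built-in lu (listing 2, case 2)
S_with_pivot = solve_ls(A, b);      % our pivoted LU decomposition (listing 1)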
Problem 3
The goal here is to find the stoichiometric coefficients of the following chemical reaction by rewriting equation (8) as a system of linear equations.
n1 [FeS] + n2 [NaBiO3 ] + n3 [H2 SO4 ] −→ n4 [Bi2 (SO4 )3 ] + n5 [Fe2 (SO4 )3 ] + n6 [Na2 SO4 ] + n7 [H2 O] (8)
The system is written in matrix form $A\vec{x} = \vec{b}$ as in equation (9), where each row corresponds to an element of the chemical reaction (in order of appearance).
\[
\begin{pmatrix}
1 & 0 & 0 & 0 & -2 & 0 & 0 \\
1 & 0 & 1 & -3 & -3 & -1 & 0 \\
0 & 1 & 0 & 0 & 0 & -2 & 0 \\
0 & 1 & 0 & -2 & 0 & 0 & 0 \\
0 & 3 & 4 & -12 & -12 & -4 & -1 \\
0 & 0 & 2 & 0 & 0 & 0 & -2
\end{pmatrix}
\cdot
\begin{pmatrix} n_1 \\ n_2 \\ n_3 \\ n_4 \\ n_5 \\ n_6 \\ n_7 \end{pmatrix}
= \vec{0}
\qquad (9)
\]
An exact solution $\vec{n}$ cannot be found using only the six equations of (9), but since we are looking for a solution corresponding to stoichiometric coefficients, these values must all be integers, and normally the smallest ones. In order to obtain the desired solution, we add a seventh equation $n_7 = n$, and then search for the smallest $n$ such that all coefficients are integers. We then found the following solution:
\[
\begin{pmatrix} n_1 \\ n_2 \\ n_3 \\ n_4 \\ n_5 \\ n_6 \\ n_7 \end{pmatrix}
=
\begin{pmatrix} 4 \\ 18 \\ 38 \\ 9 \\ 2 \\ 9 \\ 38 \end{pmatrix}
\qquad (10)
\]
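A minimal sketch of this search (the upper bound on n and the integrality tolerance are our choices) could be:

% Element-balance matrix from equation (9)
A6 = [ 1  0  0   0   -2   0   0;
       1  0  1  -3   -3  -1   0;
       0  1  0   0    0  -2   0;
       0  1  0  -2    0   0   0;
       0  3  4 -12  -12  -4  -1;
       0  0  2   0    0   0  -2 ];

for n = 1:1000
    A = [A6; 0 0 0 0 0 0 1];        % seventh equation: n7 = n
    b = [zeros(6,1); n];
    coeffs = A \ b;
    if max(abs(coeffs - round(coeffs))) < 1e-6
        disp(round(coeffs).')        % first hit at n = 38: 4 18 38 9 2 9 38
        break
    end
end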
Problem 4
In this problem, the goal is to understand the impact of small deviations in the system's values on the solution. In order to analyse it, we consider two systems of equations which can be rewritten as $A\vec{x}_1 = \vec{b}_1$ and $A\vec{x}_2 = \vec{b}_2$, with $A$ and $\vec{b}_{1,2}$ as in equation (11).
\[
A = \begin{pmatrix} 1 & 0 & 1 \\ 1.001 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}
\qquad
\vec{b}_1 = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}
\qquad
\vec{b}_2 = \begin{pmatrix} 2.001 \\ 1 \\ 1 \end{pmatrix}
\qquad (11)
\]
Using Matlab's built-in cond function, it is possible to calculate the condition number of matrix A. This value measures how much the result can change based on changes in the initial problem, i.e. the sensitivity of the system. In our case, we obtain $5.1998 \cdot 10^3$. Solving the two systems (equation (12)), we see that a change of $10^{-3}$ in the right-hand side vector of the system leads to a difference of approximately 1 in each element of the result. Consequently, a change of three orders of magnitude occurred, which is the same order of magnitude as the condition number.
\[
\vec{x}_1 = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}
\qquad
\vec{x}_2 = \begin{pmatrix} -1 \\ 2.001 \\ 3.001 \end{pmatrix}
\qquad (12)
\]
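A short reproduction of this experiment could be:

A  = [1 0 1; 1.001 1 0; 0 -1 1];
b1 = [2; 1; 1];
b2 = [2.001; 1; 1];

kappa = cond(A)       % ~5.1998e3, sensitivity of the system
x1 = A \ b1;          % (0, 1, 2)
x2 = A \ b2;          % (-1, 2.001, 3.001)

% Relative amplification of the perturbation, bounded above by cond(A)
(norm(x2 - x1) / norm(x1)) / (norm(b2 - b1) / norm(b1))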
Problem 5
(1) power implementation
Listing 3: A Matlab function that implements the power method
function [ vec, val ] = eig_power( input_matrix )
% eig_power: computes an eigenvector and eigenvalue of a given matrix.
%
% Arguments:
%   input_matrix (2D complex Hermitian matrix):
%       matrix for the eigenvalue problem;
%
% Returns:
%   a right eigenvector and the corresponding eigenvalue of a matrix.

    % initialisation
    vec = rand(length(input_matrix),1);
    vec = vec/norm(vec);
    val = vec'*input_matrix*vec;

    % to enter the while loop
    temp = val + 1;

    epsilon = 1e-15;

    while abs(val - temp) > epsilon
        temp = val;
        vec = input_matrix * vec;
        vec = vec/norm(vec);
        val = vec'*input_matrix*vec;
    end
end
(2) ipower implementation
Listing 4: A Matlab function that implements the inverse power method with shift
function [ vec, val ] = eig_ipower( input_matrix, target )
% eig_ipower:
%   computes the eigenvector and eigenvalue of a given matrix closest to target.
%
% Arguments:
%   input_matrix (2D complex Hermitian matrix):
%       matrix for the eigenvalue problem;
%   target (real scalar): an estimate of the eigenvalue;
%
% Returns:
%   a right eigenvector and the corresponding eigenvalue of a matrix.

    % initialisation
    id = eye(size(input_matrix));

    vec = rand(length(input_matrix),1);
    vec = vec/norm(vec);
    val = vec'*input_matrix*vec;

    % to enter the while loop
    temp = val + 1;

    epsilon = 1e-15;

    while abs(val - temp) > epsilon
        temp = val;
        vec = (input_matrix - target * id) \ vec;
        vec = vec/norm(vec);
        val = vec'*input_matrix*vec;
    end
end
(3) Rayleigh quotient implementation
Listing 5: A Matlab function that implements the Rayleigh quotient iteration
function [ vec, val ] = eig_rq( input_matrix, target )
% eig_rq:
%   computes the eigenvector and eigenvalue of a given matrix closest to target.
%
% Arguments:
%   input_matrix (2D complex Hermitian matrix):
%       matrix for the eigenvalue problem;
%   target (real scalar): an estimate of the eigenvalue;
%
% Returns:
%   a right eigenvector and the corresponding eigenvalue of a matrix.

    if input_matrix == diag(diag(input_matrix))
        % case input_matrix is diagonal: avoid inverting a singular matrix
        [~, i] = min(abs(diag(input_matrix) - target));
        val = input_matrix(i,i);
        vec = zeros(length(input_matrix), 1);
        vec(i) = 1;
    else
        % initialisation
        id = eye(size(input_matrix));

        vec = rand(length(input_matrix),1);
        vec = vec/norm(vec);
        val = target;

        % to enter the loop
        temp = target + 1;

        epsilon = 1e-12;

        while abs(val - temp) > epsilon
            temp = val;
            vec = (input_matrix - temp * id) \ vec;
            vec = vec/norm(vec);
            val = vec'*input_matrix*vec;
        end
    end
end
In order for the Rayleigh quotient algorithm to pass the tests, we had to treat diagonal matrices separately, because on them the algorithm finds an exact eigenvalue too quickly, leaving a singular matrix to invert, which is not possible ($A - \lambda^{(k-1)} I$ is singular when $A$ is diagonal and $\lambda^{(k-1)}$ is an exact eigenvalue).
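A quick sanity check of the three functions on a random Hermitian matrix (the test matrix, the target value and the comparison against Matlab's eig are our own choices) could be:

N = 50;
B = rand(N) + 1i*rand(N);
A = (B + B')/2;                      % random Hermitian test matrix

[v1, l1] = eig_power(A);             % eigenvalue of largest magnitude
[v2, l2] = eig_ipower(A, 0.5);       % eigenvalue closest to 0.5
[v3, l3] = eig_rq(A, 0.5);           % same target, Rayleigh quotient iteration

ev = eig(A);                         % reference eigenvalues
[~, k] = max(abs(ev));
disp(abs(l1 - ev(k)))                % expected to be close to machine precision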
Problem 6
Figure 4 shows the probability of finding the particle at a given x. As one can see, without the hopping term there is a maximum in the middle which disappears when the term is added.
Figure 4: Probability of finding the particle, with and without the hopping term
Figure 6 shows the dependence of the uncertainty on the size and the disorder of the Hamiltonian.
When disorder is applied to the Hamiltonian, the $E_i$ values become disordered.
Figure 5: Probability densities obtained with an ordered and a disordered Hamiltonian
Figure 5 shows the probability densities given by an ordered and a disordered Hamiltonian. Calculating σ for both probability densities gives σ = 28.8661 for the ordered Hamiltonian and σ = 1.9853 for the disordered one.
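The construction of the Hamiltonian is not reproduced here; as a purely illustrative sketch, a one-dimensional tight-binding matrix with an on-site disorder of strength W and a hopping amplitude t (both names and scalings are our assumptions) can be built and analysed as follows:

N = 100;           % number of sites
t = 1;             % hopping amplitude (set to 0 to switch the hopping term off)
W = 0.5;           % disorder strength (set to 0 for the ordered case)

onsite = W * (rand(N,1) - 0.5);                     % random on-site energies
H = diag(onsite) - t * (diag(ones(N-1,1),1) + diag(ones(N-1,1),-1));

[V, E] = eig(H);                                    % eigenstates and energies
rho = abs(V(:,1)).^2;                               % probability density of the lowest state

x = (1:N)';
sigma = sqrt(sum(x.^2 .* rho) - sum(x .* rho)^2);   % spread (uncertainty) of the density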
Figure 6: Uncertainty depending on size or disorder
Problem 7
(1) classic Jacobi method
Listing 6: A Matlab function that implements the classic Jacobi method
function [val] = eig_j(input_matrix)
% eig_j computes the eigenvalues of a matrix.
%
% Arguments:
%   input_matrix (2D real symmetric matrix):
%       matrix for the eigenvalue problem;
%
% Returns: an array with eigenvalues.

    A = input_matrix;
    N = length(A);
    % *N^2 to have all values close to epsilon for big matrices
    epsilon = 1e-12 * N^2;

    while off(A) > epsilon
        [~, q] = max(max(abs(triu(A,1))));
        [~, p] = max(abs(A(1:q-1,q)));
        [c, s] = cs_find(A, p, q);

        % J'*A*J without matrix multiplication
        temp = A(:, p);
        A(:, p) = c * temp - s * A(:,q);
        A(:, q) = s * temp + c * A(:,q);

        temp = A(p, :)';
        A(p, :) = c * temp' - s * A(q,:);
        A(q, :) = s * temp' + c * A(q,:);
    end

    val = diag(A);

end

function o = off(A)
    A = A.^2;
    o = sqrt(sum(sum(A)) - trace(A));
end

function [c, s] = cs_find(A, p, q)
    if A(p, q) ~= 0
        tau = (A(q, q) - A(p, p)) / (2 * A(p, q));
        if tau >= 0
            t = -tau + sqrt(1 + tau^2);
        else
            t = -tau - sqrt(1 + tau^2);
        end
        c = 1 / sqrt(1 + t^2);
        s = t * c;
    else
        c = 1;
        s = 0;
    end
end
(2) cyclic Jacobi method
Listing 7: A Matlab function that implements the cyclic Jacobi method
function [val] = eig_cj(input_matrix)
% eig_cj computes the eigenvalues of a matrix.
%
% Arguments:
%   input_matrix (2D real symmetric matrix):
%       matrix for the eigenvalue problem;
%
% Returns: an array with eigenvalues.

    A = input_matrix;
    N = length(A);
    % *N^2 to have all values close to epsilon for big matrices
    epsilon = 1e-12 * N^2;

    while off(A) > epsilon
        for p = 1:N-1
            for q = p+1:N
                [c, s] = cs_find(A, p, q);

                % J'*A*J without matrix multiplication
                temp = A(:, p);
                A(:, p) = c * temp - s * A(:,q);
                A(:, q) = s * temp + c * A(:,q);

                temp = A(p, :)';
                A(p, :) = c * temp' - s * A(q,:);
                A(q, :) = s * temp' + c * A(q,:);
            end
        end
    end

    val = diag(A);

end

function o = off(A)
    A = A.^2;
    o = sqrt(sum(sum(A)) - trace(A));
end

function [c, s] = cs_find(A, p, q)
    if A(p, q) ~= 0
        tau = (A(q, q) - A(p, p)) / (2 * A(p, q));
        if tau >= 0
            t = -tau + sqrt(1 + tau^2);
        else
            t = -tau - sqrt(1 + tau^2);
        end
        c = 1 / sqrt(1 + t^2);
        s = t * c;
    else
        c = 1;
        s = 0;
    end
end
Figures 7 and 8 show the difference in computational cost between the classic (j) and cyclic (cj) Jacobi methods. As one can see, the cyclic method is much faster, even though it requires more rotations. This can be explained by the fact that searching for the maximum off-diagonal element is $O(N^2)$, while a rotation (without using matrix multiplication) is $O(N)$.
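A minimal benchmarking sketch for this comparison (the test sizes and the use of a single random symmetric matrix per size are our assumptions) could be:

sizes = [10 20 40 80];
for n = sizes
    B = rand(n);
    A = (B + B')/2;                             % random symmetric test matrix

    tic; ev_j  = sort(eig_j(A));  t_j  = toc;   % classic Jacobi (listing 6)
    tic; ev_cj = sort(eig_cj(A)); t_cj = toc;   % cyclic Jacobi (listing 7)

    fprintf('n=%3d  t(classic)=%.3fs  t(cyclic)=%.3fs  max diff=%.1e\n', ...
            n, t_j, t_cj, max(abs(ev_j - ev_cj)));
end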
Figure 7: Comparison of the average number of rotations required to diagonalise random matrices, for the classic and cyclic Jacobi methods
Figure 8: Comparison of the average time required to diagonalise random matrices, for the classic and cyclic Jacobi methods
Problem 8
(1) matrix representations
Table 1 shows how many bytes of memory are necessary to store an N²×N² Hamiltonian matrix. Sparse matrices clearly use fewer bytes and allow us to store bigger matrices. Figure 9 shows another interesting thing about sparse matrices: they are also much faster to create as N increases.
N    | full matrix [bytes]          | sparse matrix [bytes]
30   | 6'480'000                    | 77'288
100  | 800'000'000                  | 873'608
1000 | 7'450.6 · 10^9 (estimated)   | 87'936'008

Table 1: Memory size required to store the Hamiltonian matrix, depending on N, for full and sparse matrices
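The actual construction code is not reproduced here; a hedged sketch of how such memory figures can be measured (the 2D finite-difference Laplacian below is only an illustrative stand-in for the real Hamiltonian) could be:

N = 30;
% Illustrative N^2 x N^2 operator built with Kronecker products
L1  = spdiags(ones(N,1) * [-1 2 -1], -1:1, N, N);   % 1D second-difference matrix (sparse)
H_s = kron(speye(N), L1) + kron(L1, speye(N));      % sparse N^2 x N^2 operator
H_f = full(H_s);                                    % dense version of the same operator

info_s = whos('H_s');
info_f = whos('H_f');
% The dense size matches Table 1 for N = 30 (6'480'000 bytes);
% the sparse size depends on the actual Hamiltonian's number of nonzeros.
fprintf('full: %d bytes, sparse: %d bytes\n', info_f.bytes, info_s.bytes)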
Figure 9: Time needed to create an N²×N² Hamiltonian matrix, for full and sparse representations
(2) bound solutions
Searching for the number of bound solutions using the Hamiltonian calculated above gives us, for
N= 100, r = 5nm and a = 5r, the result that there are 90 bound solutions. The three lowest are
given in table 2 along with their energy and their probability to find the particle inside the quantum
well. One can see that the second one is two times degenerated. The three figure 10, 11 and 12 show
their wave function and probability density. As one can see, the probability obtained in table 2 can
be verified on this figure.
E_i | Energy [eV] | Probability
E_1 | -0.4921     | 99.92 %
E_2 | -0.4799     | 99.80 %
E_3 | -0.4799     | 99.80 %

Table 2: Energy and probability of finding the particle inside the quantum well for the three lowest-energy states
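A hedged sketch of how the lowest bound states can be extracted from the sparse Hamiltonian of part (1) (here called H; the variable mask_well, a logical vector marking grid points inside the well, is a hypothetical helper) could be:

k = 10;                                    % number of lowest states to request
[V, D]  = eigs(H, k, 'sa');                % k smallest-algebraic eigenvalues of the sparse H
[E, idx] = sort(diag(D));                  % sort energies in ascending order
n_bound = nnz(E < 0);                      % bound states have negative energy here

psi1 = V(:, idx(1));                       % lowest-energy state
psi1 = psi1 / norm(psi1);
rho1 = abs(psi1).^2;                       % normalised probability density

% Probability of finding the particle inside the quantum well
P_inside = sum(rho1(mask_well));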
Figure 10: E1 wave function and probability density
Figure 11: E2 wave function and probability density
Figure 12: E3 wave function and probability density
(3) scattering solutions
The figure 13 show the wave function and the probability density for energy close to 1. As one
can see, the probability to find the particle inside the quantum well is very different than the one
for E1 , E2 or E3 . In fact, this probability is now 12.02%. The figure 14 shows the decency of this
probability in term of a.
Figure 13: E ≈ 1 wave function and probability density
Figure 14: Probability of finding the particle inside the quantum well depending on a