Basics of Linear Algebra
Vector: Quantum mechanics
• The standard quantum mechanical notation for a vector in a vector
space is the following:
|ψ ⟩
ψ is a label for the vector.
• The |· ⟩ notation is used to indicate that the object is a vector. The
entire object |ψ ⟩ is sometimes called a ket.
• A vector space also contains a special zero vector, which we denote by
0. It satisfies the property that for any other vector |v⟩, |v⟩ + 0 = |v⟩
Basics of Vector
A column vector (or simply vector) v of dimension (or size) n is a collection
of n complex numbers (v₁, v₂, …, vₙ) arranged as a column:
v = ( v₁
      v₂
      ⋮
      vₙ )
The norm (Euclidean norm or 2-norm) of a vector v is defined as ||v|| = √(Σᵢ |vᵢ|²)
The norm of a vector, often denoted as ||v|| or |v|, is a measure of the "length" or
"size" of the vector.
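The norm computation above can be sketched in NumPy; the vector values here are arbitrary, chosen only for illustration:

```python
import numpy as np

# Norm of a complex vector: square root of the sum of squared magnitudes.
v = np.array([3 + 4j, 1 - 1j])
norm_manual = np.sqrt(sum(abs(x)**2 for x in v))
norm_numpy = np.linalg.norm(v)   # NumPy's built-in 2-norm agrees
```

Note that |vᵢ| is the complex magnitude, so both real and imaginary parts contribute to the norm.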
Basics of Vector
A vector is said to be of unit norm (or alternatively it is called a unit vector) if its norm
is 1. The adjoint of a vector v is denoted v† and is defined to be the row vector
v† = (v₁*, v₂*, …, vₙ*)
where ∗ denotes the complex conjugate.
A conjugate matrix is a complex matrix in which every element has been replaced by
its complex conjugate; that is, the sign of the imaginary part of each of its complex
entries has been flipped.
Conjugate matrix
Matrix A̅ is the conjugate of matrix A, since every entry of A̅ is the
conjugate of the corresponding entry of A. In other words, the entries of A̅ have the
same real parts as the entries of A, but their imaginary parts have the opposite sign.
Transpose of a matrix
The transpose of a matrix is found by interchanging its rows and
columns.
The transpose of the matrix is denoted by using the letter “T” in the
superscript of the given matrix.
For example, if A is the given matrix, then the transpose of the matrix is
represented by Aᵀ (or A′).
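The conjugate, the transpose, and their combination (the adjoint, written A† in these notes) can be sketched in NumPy; the matrix entries are arbitrary illustrative values:

```python
import numpy as np

# A small complex matrix, chosen arbitrarily for illustration.
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 4 + 0j]])

A_conj = np.conj(A)     # conjugate: flip the sign of every imaginary part
A_T = A.T               # transpose: swap rows and columns
A_adjoint = A_conj.T    # conjugate transpose, i.e. the adjoint A†
```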
Dirac‘s notation
Notation like ‘| ⟩ ’ is called the Dirac notation, and we’ll be seeing it
often, as it’s the standard notation for states in quantum mechanics.
The difference between bits and qubits is that a qubit can be in a state
other than |0 ⟩ or |1 ⟩.
It is also possible to form linear combinations of states, often called
superpositions:
|ψ⟩ = α |0⟩ + β |1⟩
The numbers α and β are complex numbers satisfying |α|² + |β|² = 1.
• The special states |0 ⟩ and |1 ⟩ are known as computational basis
states, and form an orthonormal basis for this vector space.
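The computational basis states and a superposition can be written down directly as NumPy vectors; the amplitudes below are an arbitrary illustrative choice:

```python
import numpy as np

# |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A superposition |psi> = alpha|0> + beta|1> with example amplitudes.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

norm = np.linalg.norm(psi)   # must equal 1 for a valid quantum state
```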
Bra-Ket notation
Bra-ket notation is a standard notation for describing quantum states in
the theory of quantum mechanics composed of angle brackets and
vertical bars.
It can also be used to denote abstract vectors and linear functionals in
mathematics.
It is so called because the inner product (or dot product) of two states
is denoted by a bracket, ⟨Φ|Ψ⟩ , consisting of a left part, ⟨Φ| , called
the bra, and a right part, |Ψ⟩ , called the ket.
The notation was introduced in 1939 by Paul Dirac, and is also known
as Dirac notation.
Inner product
Two vectors can be multiplied together through the inner product, also
known as a dot product or scalar product.
As the name implies, the result of the inner product of two vectors is a
scalar.
The inner product gives the projection of one vector onto another and is
invaluable in describing how to express one vector as a sum of other
simpler vectors.
The dot product of two vectors a = (a₁, a₂, …, aₙ) and b = (b₁, b₂, …, bₙ),
specified with respect to an orthonormal basis, is defined as:
a·b = Σⁿᵢ₌₁ aᵢbᵢ = a₁b₁ + a₂b₂ + … + aₙbₙ
Inner product
The inner product between two column vectors
u = (u₁, u₂, …, uₙ) and v = (v₁, v₂, …, vₙ),
denoted ⟨u,v⟩, is defined as
⟨u,v⟩ = Σᵢ uᵢ* vᵢ
This notation also allows the norm of a vector v to be written as ||v|| = √⟨v,v⟩
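The complex inner product, with the first argument conjugated, is a sketch away in NumPy (`np.vdot` conjugates its first argument, matching this convention); the vectors are arbitrary examples:

```python
import numpy as np

# Inner product <u,v> = sum_i conj(u_i) * v_i.
u = np.array([1 + 1j, 2 - 1j])
v = np.array([3 + 0j, 1 + 2j])

inner = np.vdot(u, v)                 # np.vdot conjugates its first argument
norm_u = np.sqrt(np.vdot(u, u).real)  # norm as sqrt(<u,u>)
```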
Vector operations
A vector can be multiplied by a number c to form a new vector whose entries are each
multiplied by c: (cv)ᵢ = c·vᵢ.
You can also add two vectors u and v to form a new vector whose entries are the sums
of the corresponding entries of u and v: (u + v)ᵢ = uᵢ + vᵢ.
Matrix multiplication
Multiplying a matrix M of dimension m×n and a matrix N of dimension n×p gives a new
matrix P of dimension m×p,
where the entries of P are Pᵢₖ = Σⱼ MᵢⱼNⱼₖ
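The entry-wise definition can be checked against NumPy's matrix product; the matrices are small arbitrary examples:

```python
import numpy as np

# P[i,k] = sum_j M[i,j] * N[j,k]
M = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2x3
N = np.array([[7, 8],
              [9, 10],
              [11, 12]])       # 3x2

P = M @ N                      # 2x2 result

# Check one entry against the defining sum:
p00 = sum(M[0, j] * N[j, 0] for j in range(3))
```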
Adjoint and unitary matrices
For any matrix M, the adjoint or conjugate transpose of M is the
matrix N such that Nᵢⱼ = Mⱼᵢ*.
The adjoint of M is usually denoted M†.
A matrix U is unitary if UU† = U†U = I or, equivalently, U⁻¹ = U†.
One important property of unitary matrices is that they preserve the norm
of a vector.
This happens because
⟨v,v⟩ = v†v = v†U⁻¹Uv = v†U†Uv = ⟨Uv,Uv⟩.
A matrix M is said to be Hermitian if M=M†
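The Hadamard matrix from quantum computing happens to be both unitary and Hermitian, so it makes a convenient sketch of these definitions (the test vector is arbitrary):

```python
import numpy as np

# The Hadamard matrix: unitary (U U† = I) and Hermitian (H = H†).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

v = np.array([2 + 1j, 1 - 3j])   # arbitrary test vector

unitary = np.allclose(H @ H.conj().T, np.eye(2))
hermitian = np.allclose(H, H.conj().T)
# A unitary matrix preserves the norm of any vector it acts on:
norms_equal = np.isclose(np.linalg.norm(v), np.linalg.norm(H @ v))
```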
Outer product
In linear algebra, the outer product of two coordinate vectors is a matrix. If the two
vectors have dimensions n and m, then their outer product is an n × m matrix.
Inner Product — ⟨Ψ|Φ⟩
A product of a bra Psi ⟨Ψ| and a ket Phi |Φ⟩ is called
an inner product, producing a value. An inner product is also called
an overlap: the overlap between quantum states.
Outer Product — |Ψ⟩⟨Φ|
A product of two quantum states, a ket Psi |Ψ⟩ and a bra Phi ⟨Φ|, is called
an outer product, producing a matrix. An outer product is also called
a projection.
•Use the inner product when you need a single scalar value that
represents the "closeness" or "alignment" of two vectors.
•Use the outer product when you need to create a matrix that captures
the relationships between the elements of two vectors.
•The inner product is used to measure the angle between two vectors,
compute projections, and is central to many concepts in linear algebra
and geometry.
•The outer product is used in various applications, such as matrix
factorization, computing the covariance matrix, and in the context of
tensor products.
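The scalar-vs-matrix distinction between the two products is easy to see in NumPy; the states below are arbitrary illustrative choices:

```python
import numpy as np

# Example states, chosen arbitrarily.
psi = np.array([1, 1j]) / np.sqrt(2)
phi = np.array([1, 0], dtype=complex)

inner = np.vdot(psi, phi)             # scalar overlap <psi|phi>
outer = np.outer(psi, np.conj(phi))   # 2x2 matrix |psi><phi|
```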
Tensor Products
Another important operation is the Kronecker product, also called
the matrix direct product or tensor product.
Note that the Kronecker product is distinguished from matrix
multiplication, which is an entirely different operation.
In quantum computing theory, the term tensor product is commonly
used to denote the Kronecker product.
Tensor Products
Consider the two vectors v = (a, b)ᵀ and u = (c, d, e)ᵀ.
Their tensor product is denoted v⊗u and results in the column vector
obtained by stacking the blocks a·u and b·u:
v⊗u = (ac, ad, ae, bc, bd, be)ᵀ
Tensor Products
Notice that tensor product is an operation on two matrices or vectors of arbitrary
size.
The tensor product of two matrices M of size m×n and N of size p×q is a larger
matrix P = M⊗N of size mp×nq, obtained from M and N by replacing each entry Mᵢⱼ
of M with the block Mᵢⱼ·N.
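NumPy provides the Kronecker product directly as `np.kron`; the matrices below are arbitrary examples:

```python
import numpy as np

# Each entry M[i,j] is replaced by the block M[i,j] * N,
# so a (2x2) ⊗ (2x2) product is 4x4.
M = np.array([[1, 2],
              [3, 4]])
N = np.array([[0, 5],
              [6, 7]])

P = np.kron(M, N)      # shape (4, 4)

# The top-left block of P equals M[0,0] * N:
top_left = P[:2, :2]
```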
Tensor Products
A final useful notational convention surrounding tensor products is that, for any
vector v or matrix M, v⊗ⁿ or M⊗ⁿ is shorthand for an n-fold repeated tensor
product. For example, v⊗² = v⊗v and M⊗³ = M⊗M⊗M.
Tensor Product — |ΨΦ⟩
A product of two quantum states, a ket Psi |Ψ⟩ and a ket Phi |Φ⟩, is called
a tensor product, producing a column vector with length 2ⁿ (where n is
the number of qubits).
If we sum the squared absolute values of all the individual vector
elements, the total must be 1.
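This can be sketched in NumPy for two single-qubit states (the chosen states are arbitrary examples):

```python
import numpy as np

# Tensor product of two single-qubit states gives a 2**2 = 4 dimensional
# state vector; normalization is preserved.
psi = np.array([1, 1]) / np.sqrt(2)     # (|0> + |1>) / sqrt(2)
phi = np.array([0, 1], dtype=complex)   # |1>

combined = np.kron(psi, phi)            # length 4 column vector
total = np.sum(np.abs(combined)**2)     # sums to 1
```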
Summary of Different Products
•Use the dot product when you need a scalar result, and you are interested in the
magnitude of one vector projected onto another.
•Use the cross product when you need a vector result that is perpendicular to the
plane formed by two vectors.
•The tensor product is used in linear algebra to create higher-dimensional
structures. It's often used in the context of transformations between vector spaces
and in quantum mechanics, where it represents operations on composite systems.
LINEAR VECTOR SPACES AND SUBSPACES
▪ Definition: A (linear) vector space is a nonempty set V of
objects, called vectors, on which are defined two operations,
called addition and multiplication by scalars (real numbers),
subject to the ten axioms (or rules) listed below. The axioms
must hold for all vectors u, v, and w in V and for all scalars
c and d.
1. The sum of u and v, denoted by u + v , is in V.
2. u + v = v + u . commutativity
3. (u + v) + w = u + (v + w) . associativity
4. There is a zero vector 0 in V such that
u + 0 = u . Identity element
VECTOR SPACES AND SUBSPACES
5. For each u in V, there is a vector −u in V such
that u + ( − u) = 0 . Inverse elements
6. The scalar multiple of u by c, denoted by cu, is
in V.
7. c(u + v) = cu + cv. Distributivity vector mul
8. (c + d)u = cu + du. Distributivity scalar mul
9. c( du) = (cd )u . Compatibility
10. 1u = u . multiplicative identity
Complex vector space
A complex vector space is one in which the scalars are complex numbers. Thus, if v1, v2,
…, vm are vectors in a complex vector space, then a linear combination is of the form
c1v1+c2v2+….+cmvm
where the scalars c1, c2, …, cm are complex numbers. The complex version of Rn is the
complex vector space Cn consisting of ordered n-tuples of complex numbers. Thus, a
vector in Cn has the form
v=(a1+ b1i, a2 + b2i,…,an + bni)
It is also convenient to represent vectors in Cn by column matrices of the form
v = ( a₁ + b₁i
      a₂ + b₂i
      ⋮
      aₙ + bₙi )
Complex vector space: example
Complex vector spaces: Properties
Hilbert spaces
A Hilbert space H is a real or complex inner product space that is also a complete
metric space with respect to the distance function induced by the inner product.
To say that H is a complex inner product space means that H is a complex vector
space on which there is an inner product ⟨x,y⟩ associating a complex number to
each pair of elements x,y of H that satisfies the following properties:
1. The inner product is conjugate symmetric; that is, the inner product of a
pair of elements is equal to the complex conjugate of the inner product
of the swapped elements:
⟨y,x⟩ = ⟨x,y⟩*
This implies that ⟨x,x⟩ is a real number.
Hilbert spaces
2. The inner product of an element with itself is positive definite
⟨x,x⟩ > 0 if x ≠ 0,
⟨x,x⟩ = 0 if x = 0.
The state of a vibrating string can be modeled as a point
in a Hilbert space. The decomposition of a vibrating string
into its vibrations in distinct overtones is given by the
projection of the point onto the coordinate axes in the
space.
Notations
Spanning Set
A spanning set for a vector space is a set of vectors |v₁⟩, …, |vₙ⟩ such
that any vector |v⟩ in the vector space can be written as a linear
combination |v⟩ = Σᵢ aᵢ|vᵢ⟩ of vectors in that set. For example, a
spanning set for the vector space C² is the set
|v₁⟩ ≡ (1, 0)ᵀ ;  |v₂⟩ ≡ (0, 1)ᵀ
since any vector
|v⟩ ≡ (a₁, a₂)ᵀ
in C² can be written as a linear combination |v⟩ = a₁|v₁⟩ + a₂|v₂⟩ of
the vectors |v₁⟩ and |v₂⟩. We say that the vectors |v₁⟩ and |v₂⟩ span the
vector space C².
Trace of a matrix
The trace of A is defined to be the sum of its diagonal elements, tr(A) = Σᵢ Aᵢᵢ.
The trace is easily seen to be cyclic, tr(AB) = tr(BA), and linear, tr(A + B) = tr(A) + tr(B),
tr(zA) = z·tr(A), where A and B are arbitrary matrices, and z is a complex
number.
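The cyclic and linearity properties can be checked numerically with `np.trace`; the matrices are arbitrary illustrative values:

```python
import numpy as np

# Trace: sum of diagonal elements; check the cyclic and linear properties.
A = np.array([[1 + 1j, 2],
              [3, 4 - 2j]])
B = np.array([[0, 1j],
              [1, 2]])

tr_A = np.trace(A)   # (1+1j) + (4-2j) = 5 - 1j
cyclic = np.isclose(np.trace(A @ B), np.trace(B @ A))
linear = np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
```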
Pauli matrices
• Four extremely useful matrices which we shall often have occasion to
use are the Pauli matrices.
• These are 2 by 2 matrices, which go by a variety of notations.