Linalg
Elia Besmer
1.2 Examples
• Vector Addition: v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, w = \begin{pmatrix} 3 \\ -1 \end{pmatrix}:
v + w = \begin{pmatrix} 1 + 3 \\ 2 + (-1) \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \end{pmatrix}.
• Scalar Multiplication: c = -1, w = \begin{pmatrix} 3 \\ -1 \end{pmatrix}:
cw = \begin{pmatrix} -1 \cdot 3 \\ -1 \cdot (-1) \end{pmatrix} = \begin{pmatrix} -3 \\ 1 \end{pmatrix}.
1.3 Linear Combinations
A linear combination of vectors v, w ∈ Rn is:
c1 v + c2 w, c1 , c2 ∈ R.
2 Dot Product
2.1 Definitions
The dot product of vectors v, w ∈ R2 is:
v · w = v1 w1 + v2 w2.
Vector Length:
∥v∥ = √(v · v).
2.2 Examples
• Dot Product: v = \begin{pmatrix} 4 \\ 2 \end{pmatrix}, w = \begin{pmatrix} -1 \\ 2 \end{pmatrix}:
v · w = 4 · (-1) + 2 · 2 = 0.
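These operations are easy to check numerically; a minimal sketch, assuming the standard NumPy package:

import numpy as np

v = np.array([4, 2])
w = np.array([-1, 2])

# Dot product: v . w = 4*(-1) + 2*2 = 0
print(np.dot(v, w))           # 0

# Vector length: ||v|| = sqrt(v . v)
print(np.sqrt(np.dot(v, v)))  # 4.472..., same as np.linalg.norm(v)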
3 Matrices
3.1 Definitions
A matrix is a rectangular array of numbers A = (aij ) with m rows and n columns. For
example:
A = \begin{pmatrix} 2 & -1 & 0 \\ 3 & -\frac{3}{2} & 6 \\ -1.23 & 0 & 10 \end{pmatrix}.
3.2 Operations
Matrix Addition: For A = (aij ), B = (bij ):
A + B = (aij + bij ).
Scalar Multiplication: For scalar β:
βA = (βaij ).
Matrix Multiplication: For A (m × n) and B (n × p):
(AB)ij = Σ_{k=1}^{n} aik bkj.
3.3 Examples
• Matrix Multiplication: A = \begin{pmatrix} 1 & 3 \\ 0 & 2 \end{pmatrix}, B = \begin{pmatrix} -2 & -3 \\ 1 & -4 \end{pmatrix}:
AB = \begin{pmatrix} 1(-2) + 3(1) & 1(-3) + 3(-4) \\ 0(-2) + 2(1) & 0(-3) + 2(-4) \end{pmatrix} = \begin{pmatrix} 1 & -15 \\ 2 & -8 \end{pmatrix}.
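The entry formula (AB)ij = Σ_k aik bkj can be spelled out with explicit loops; a sketch assuming NumPy, reproducing the example above:

import numpy as np

A = np.array([[1, 3], [0, 2]])
B = np.array([[-2, -3], [1, -4]])

# (AB)_ij = sum over k of A_ik * B_kj
m, n = A.shape
p = B.shape[1]
C = np.zeros((m, p), dtype=int)
for i in range(m):
    for j in range(p):
        for k in range(n):
            C[i, j] += A[i, k] * B[k, j]

print(C)        # [[  1 -15] [  2  -8]]
print(A @ B)    # same result with NumPy's built-in product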
Conclusion
Linear algebra provides tools to solve complex systems, represent geometric transforma-
tions, and model real-world phenomena. This summary highlights core concepts and
serves as a foundation for further exploration.
Overview
The central problem of linear algebra is solving systems of linear equations. These systems
can be expressed in matrix form as:
Ax = b,
where A is the coefficient matrix, x is the vector of unknowns, and b is the right-hand
side vector. Systems are categorized as:
1. Homogeneous: b = 0, with at least the trivial solution x = 0.
2. Non-homogeneous: b ≠ 0, potentially having a unique solution, no solution, or infinitely many solutions.
Solution Properties
A system’s solutions can be classified into three cases:
• No solutions: The system is inconsistent.
• Unique solution: The system is consistent with exactly one solution.
• Infinitely many solutions: The system is consistent but has free parameters.
Key Definitions
• Consistent system: Has at least one solution.
• Row echelon form (REF): A matrix is in REF if:
1. All rows consisting entirely of zeros are below every nonzero row.
2. The first nonzero entry (pivot) in each row is to the right of the pivot in the row above.
Gaussian Elimination
This algorithm transforms a system into REF using elementary row operations. The
steps are:
1. Use row operations to create zeros below the pivot in the first column.
2. Repeat for each following column until the matrix is in REF.
3. Solve for the variables starting from the last row (back-substitution).
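A minimal sketch of the algorithm in Python (assuming NumPy, a square system, and nonzero pivots so that no row exchanges are needed):

import numpy as np

def gaussian_elimination(A, b):
    # Work on the augmented matrix [A | b].
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    # Steps 1-2: create zeros below each pivot (forward elimination to REF).
    for col in range(n):
        for row in range(col + 1, n):
            M[row] -= (M[row, col] / M[col, col]) * M[col]
    # Step 3: back-substitution from the last row upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 0, 1], [0, 1, -2], [1, 1, 1]])
b = np.array([1.0, 0, -1])
print(gaussian_elimination(A, b))   # [ 0.8 -1.2 -0.6]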
Inverse Matrices
A square matrix A is invertible if there exists A−1 such that:
AA−1 = A−1 A = I.
Properties
• (A−1 )−1 = A.
• (AB)−1 = B −1 A−1 .
• (AT )−1 = (A−1 )T .
LU Decomposition
A square matrix A can be decomposed as A = LU whenever Gaussian elimination succeeds without row exchanges (in general a permutation matrix P is needed, giving P A = LU), where:
• L is a lower triangular matrix.
• U is an upper triangular matrix.
Algorithm
1. Perform Gaussian elimination to form U .
2. Extract the multipliers used for elimination into L.
3. Verify: A = LU .
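A minimal Doolittle-style sketch of the algorithm (assuming NumPy and no row exchanges):

import numpy as np

def lu_decompose(A):
    n = A.shape[0]
    L = np.eye(n)                 # unit lower triangular
    U = A.astype(float).copy()    # reduced to upper triangular
    for col in range(n):
        for row in range(col + 1, n):
            L[row, col] = U[row, col] / U[col, col]  # step 2: store the multiplier
            U[row] -= L[row, col] * U[col]           # step 1: eliminate the entry
    return L, U

A = np.array([[2.0, 1.0], [6.0, 4.0]])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))      # step 3: verifies A = LU -> True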
Key Theorems
Theorem 3.1 A system Ax = b is consistent if and only if rank(A) = rank([A|b]).
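The rank criterion can be checked numerically; a sketch assuming NumPy:

import numpy as np

A = np.array([[1, 2], [3, 6]])    # rank 1: the rows are proportional
b_in = np.array([1, 3])           # lies in the column space of A
b_out = np.array([1, 0])          # does not

for b in (b_in, b_out):
    Ab = np.column_stack([A, b])  # the augmented matrix [A | b]
    print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab))
# prints True (consistent), then False (inconsistent)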
Fields
Definition of a Field
A field K is a set with two operations, addition (+) and multiplication (·), satisfying the
following axioms:
• Addition Axioms:
– (A1) Commutativity: α + β = β + α
– (A2) Associativity: (α + β) + γ = α + (β + γ)
– (A3) Identity: α + 0 = α
– (A4) Inverse: α + (−α) = 0
• Multiplication Axioms:
– (M1) Commutativity: α · β = β · α
– (M2) Associativity: (α · β) · γ = α · (β · γ)
– (M3) Identity: α · 1 = α
– (M4) Inverse: For α ≠ 0, α · α⁻¹ = 1
• Distributive Law: (α + β) · γ = α · γ + β · γ
Examples
• Real numbers R: 2 + 3 = 5, 2 · 3 = 6.
• Rational numbers Q: 1/2 + 1/3 = 5/6, 1/2 · 2/3 = 1/3.
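These rational computations can be reproduced exactly with Python's fractions module; a small sketch:

from fractions import Fraction

print(Fraction(1, 2) + Fraction(1, 3))   # 5/6
print(Fraction(1, 2) * Fraction(2, 3))   # 1/3

# Axiom (M4): every nonzero element has a multiplicative inverse.
a = Fraction(1, 2)
print(a * (1 / a))                       # 1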
Vector Spaces
Definition of a Vector Space
A vector space V over a field K is a set equipped with two operations:
• Vector addition: + : V × V → V
• Scalar multiplication: · : K × V → V
satisfying the following axioms for all u, v, w ∈ V and α, β ∈ K:
• (A1) u + v = v + u (commutativity)
• (A2) (u + v) + w = u + (v + w) (associativity)
• (A3) v + 0 = v (identity)
• (A4) v + (−v) = 0 (inverse)
• (V1) α(u + v) = αu + αv
• (V2) (α + β)v = αv + βv
• (V3) (αβ)v = α(βv)
• (V4) 1 · v = v
Examples
• R2 : (1, 2) + (3, 4) = (4, 6), 2 · (1, 2) = (2, 4).
• M2,2(R): \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}.
• R[x]≤2 : (1 + x) + (x2 − x) = 1 + x2 , 2 · (1 + x) = 2 + 2x.
Subspaces
Definition of a Subspace
A subset W ⊆ V is a subspace if:
• 0∈W
• u + v ∈ W for all u, v ∈ W
• αu ∈ W for all u ∈ W and α ∈ K
Examples
• R2 : The subset {(x, 2x) : x ∈ R} is a subspace.
• M2,2(R): The subset of upper triangular matrices \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} is a subspace.
• Null space of A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}: Solutions to Ax = 0, e.g., x = \begin{pmatrix} -2 \\ 1 \end{pmatrix}.
Linear Independence
Vectors v1, . . . , vn are linearly independent if α1 v1 + · · · + αn vn = 0 implies α1 = · · · = αn = 0; otherwise they are linearly dependent.
Examples
• Vectors {(1, 0), (0, 1)} in R2 are linearly independent because α1 (1, 0) + α2 (0, 1) =
(0, 0) implies α1 = α2 = 0.
• Vectors {(1, 2), (2, 4)} in R2 are linearly dependent because 2(1, 2) − 1(2, 4) = (0, 0).
Bases
A basis of V is a linearly independent set of vectors that spans V.
Examples
• Standard basis of R3 : {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
• Basis of M2,2(R): \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
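Independence can be tested numerically: vectors are linearly independent exactly when the matrix with those vectors as columns has rank equal to their number. A sketch assuming NumPy:

import numpy as np

def independent(vectors):
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([np.array([1, 0]), np.array([0, 1])]))   # True
print(independent([np.array([1, 2]), np.array([2, 4])]))   # False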
Fundamental Subspaces
For the matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}:
• Column space: C(A) = span{ \begin{pmatrix} 1 \\ 3 \\ 5 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix} }.
• Null space: N(A) = {0}; the columns are linearly independent, so the only solution of Ax = 0 is x = 0.
Rank-Nullity Theorem
rank(A) + nullity(A) = n, the number of columns of A. For the matrix above: 2 + 0 = 2.
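A numerical check of the theorem for the matrix above, assuming NumPy and SciPy (scipy.linalg.null_space returns an orthonormal basis of N(A)):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2], [3, 4], [5, 6]])
rank = np.linalg.matrix_rank(A)        # 2
nullity = null_space(A).shape[1]       # 0: only the trivial solution
print(rank + nullity == A.shape[1])    # True: 2 + 0 = 2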
Spanning Sets
Definition of a Spanning Set
A set of vectors {v1 , . . . , vn } spans V if every v ∈ V can be written as a linear combination
of v1 , . . . , vn .
Examples
• R3 : The vectors {(1, 0, 0), (0, 1, 0), (0, 0, 1)} span R3 .
• M2,2(R): The matrices \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} span M2,2(R).
4 Orthogonality
4.1 Dot Product in Rn
The dot product (or inner product) of two vectors x, y ∈ Rn is defined as:
x · y = Σ_{i=1}^{n} xi yi. (1)
• Commutative Law: x · y = y · x.
• Distributive Law: (x + y) · z = x · z + y · z.
Example: For x = (1, 2, 3) and y = (4, 5, 6):
x · y = 1 · 4 + 2 · 5 + 3 · 6 = 32. (2)
Two subspaces are orthogonal if every vector in one is orthogonal to every vector in the other. Example: The floor of a room (extended to infinity) and the line where two walls meet are orthogonal subspaces in R3.
5 Orthogonal Complements
• The orthogonal complement S ⊥ of a subspace S is the set of all vectors orthog-
onal to every vector in S.
6 Orthogonal Projections
• Any vector x ∈ Rn can be uniquely decomposed as x = p + o, where p ∈ V and
o ∈ V ⊥.
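A sketch of the decomposition x = p + o, assuming NumPy: the projection p onto the column space of a matrix V is the least-squares fit, and o = x − p lies in the orthogonal complement.

import numpy as np

V = np.array([[1.0, 0], [0, 1], [1, 1]])   # basis vectors as columns
x = np.array([1.0, 2, 3])

coeffs, *_ = np.linalg.lstsq(V, x, rcond=None)
p = V @ coeffs                  # component in the subspace
o = x - p                       # component in the orthogonal complement
print(np.allclose(V.T @ o, 0))  # True: o is orthogonal to the subspace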
7 Gram-Schmidt Process
• Converts a basis {x1 , x2 , . . . , xn } into an orthogonal (or orthonormal) basis {v1 , v2 , . . . , vn }.
• Formula:
v1 = x1, (5)
v2 = x2 − (⟨x2, v1⟩/⟨v1, v1⟩) v1, (6)
vk = xk − Σ_{i=1}^{k−1} (⟨xk, vi⟩/⟨vi, vi⟩) vi. (7)
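A direct sketch of formulas (5)-(7) in Python (assuming NumPy and linearly independent inputs):

import numpy as np

def gram_schmidt(xs):
    vs = []
    for x in xs:
        v = x.astype(float).copy()
        for u in vs:
            # Subtract the projection of x onto each earlier v_i.
            v -= (np.dot(x, u) / np.dot(u, u)) * u
        vs.append(v)
    return vs

v1, v2 = gram_schmidt([np.array([1.0, 1, 0]), np.array([1.0, 0, 1])])
print(np.dot(v1, v2))   # 0.0: the output vectors are orthogonal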
8 Change of Coordinates
• Coordinates of a vector v with respect to an orthogonal basis {v1, v2, . . . , vn} are given by ci = ⟨v, vi⟩/⟨vi, vi⟩.
Example: Find the coordinates of v = (1, 2) with respect to the basis {(1, 0), (0, 1)}: c1 = 1, c2 = 2.
9 Determinants of Matrices
The determinant of a square matrix A, denoted as det(A) or |A|, is a scalar value. The
general definition involves the following cases:
• 1x1 Matrix: For A = [a11 ], det(A) = a11 .
• 2x2 Matrix:
det(A) = \begin{vmatrix} a11 & a12 \\ a21 & a22 \end{vmatrix} = a11 a22 − a12 a21.
• 3x3 Matrix:
det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a12 a21 a33 − a11 a23 a32 .
Example: For A = \begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}, det(A) = 2 · 4 − 1 · 3 = 8 − 3 = 5.
10 Properties of Determinants
Key properties of determinants include:
1. Row/Column Operations:
• Multiplying a row/column by a scalar k: det(B) = k · det(A).
• Adding a scalar multiple of one row to another does not change the determi-
nant.
• Interchanging two rows or columns changes the sign of the determinant: det(B) =
− det(A).
2. det(A · B) = det(A) · det(B).
3. det(A⁻¹) = 1/det(A), if A is invertible.
4. det(A⊤ ) = det(A).
5. If a matrix A is singular, det(A) = 0.
12 Cofactor Expansion
The determinant can be calculated using cofactor expansion along any row or column.
For row i:
det(A) = Σ_{j=1}^{n} (−1)^{i+j} aij Mij,
where Mij is the minor: the determinant of the submatrix obtained by deleting row i and column j.
Example: For A = \begin{pmatrix} 3 & -2 & 0 \\ 1 & 0 & 1 \\ -2 & 3 & 0 \end{pmatrix}, expand along the first row:
det(A) = 3 \begin{vmatrix} 0 & 1 \\ 3 & 0 \end{vmatrix} − (−2) \begin{vmatrix} 1 & 1 \\ -2 & 0 \end{vmatrix}.
Calculation yields det(A) = −5.
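A recursive sketch of cofactor expansion along the first row, in plain Python (exponential cost, so for illustration only):

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_0j: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[3, -2, 0], [1, 0, 1], [-2, 3, 0]]))   # -5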
13 Special Matrices
• The determinant of a diagonal or triangular matrix is the product of its diagonal
entries.
14 Cramer's Rule
For a system Ax = b with det(A) ≠ 0, each unknown is given by
xi = det(Ai)/det(A),
where Ai is obtained by replacing the ith column of A with b.
Example: Solve
2x + z = 1,
y − 2z = 0,
x + y + z = −1.
Here
det(A) = \begin{vmatrix} 2 & 0 & 1 \\ 0 & 1 & -2 \\ 1 & 1 & 1 \end{vmatrix} = 5, det(A1) = \begin{vmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ -1 & 1 & 1 \end{vmatrix} = 4.
Thus, x = 4/5, y = −6/5, z = −3/5.
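A sketch of Cramer's rule with NumPy, reproducing this example:

import numpy as np

A = np.array([[2.0, 0, 1], [0, 1, -2], [1, 1, 1]])
b = np.array([1.0, 0, -1])

d = np.linalg.det(A)            # 5
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                # replace the i-th column with b
    x[i] = np.linalg.det(Ai) / d
print(x)                        # [ 0.8 -1.2 -0.6]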
Eigenspace
The eigenspace of A corresponding to an eigenvalue λ is defined as:
Eλ = N(A − λI) = {v : Av = λv}.
Example
Consider A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}. The characteristic equation is:
det(A − λI) = \begin{vmatrix} 2 − λ & 0 \\ 0 & 3 − λ \end{vmatrix} = (2 − λ)(3 − λ) = 0.
The eigenvalues are λ1 = 2 and λ2 = 3. Eigenvectors are v1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} and v2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
Diagonalization
A matrix A is diagonalizable if there exists an invertible matrix S and a diagonal matrix
Λ such that:
A = SΛS −1 , (12)
where the columns of S are eigenvectors of A, and Λ contains the corresponding eigen-
values on its diagonal.
Example
Consider A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}. The characteristic equation is:
det(A − λI) = \begin{vmatrix} 4 − λ & 1 \\ 2 & 3 − λ \end{vmatrix} = (4 − λ)(3 − λ) − 2 = λ² − 7λ + 10 = 0.
The eigenvalues are λ1 = 5 and λ2 = 2. Solving (A − λI)v = 0, we find eigenvectors v1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} and v2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}. Thus:
S = \begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}, Λ = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}.
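The factorization can be verified with numpy.linalg.eig (eigenvector columns may differ from the hand computation by scaling):

import numpy as np

A = np.array([[4.0, 1], [2, 3]])
eigvals, S = np.linalg.eig(A)   # columns of S are eigenvectors
Lam = np.diag(eigvals)
print(eigvals)                  # [5. 2.] (order may vary)
print(np.allclose(S @ Lam @ np.linalg.inv(S), A))   # True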
A real symmetric matrix has:
• Real eigenvalues.
• Orthonormal eigenvectors.
It can be diagonalized as:
A = SΛS⊤, (13)
where S is an orthogonal matrix.
Key Theorems
Linear Independence of Eigenvectors
If λ1 , λ2 , . . . , λr are distinct eigenvalues of A, their corresponding eigenvectors v1 , v2 , . . . , vr
are linearly independent.
Diagonalizability Criterion
An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.
15 Linear Transformations
15.1 Definition
A function T : U → V between vector spaces over the same field K is called a linear
transformation if for all u1, u2 ∈ U and α, β ∈ K:
T(αu1 + βu2) = αT(u1) + βT(u2).
15.2 Properties
Let T : U → V be a linear map:
• T (0U ) = 0V .
• T (−u) = −T (u) for all u ∈ U .
15.3 Examples
1. Let U = R2, V = R2, and let T be defined by the matrix A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}. For x = \begin{pmatrix} x1 \\ x2 \end{pmatrix}, the map T(x) = Ax becomes:
T \begin{pmatrix} x1 \\ x2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} x1 \\ x2 \end{pmatrix} = \begin{pmatrix} 2x1 + x2 \\ 3x2 \end{pmatrix}.
16 Isomorphisms
16.1 Definition
A linear map T : U → V is an isomorphism if it is a bijection. Two vector spaces U
and V are isomorphic if there exists an isomorphism T : U → V .
16.3 Example
Let T : R2 → R2 be defined by T(x) = Ax, where A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. To check if T is an isomorphism, we compute det(A):
det(A) = 1 · 4 − 2 · 3 = −2 ≠ 0.
Since det(A) ̸= 0, T is invertible, and thus an isomorphism.
17.2 Action on Coordinates
Let u ∈ U with coordinates [u]U relative to a basis of U . Then T (u) has coordinates
[T (u)]V satisfying:
[T (u)]V = A[u]U .
17.3 Example
Let T : R3 → R2 be defined by the matrix A = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 1 & 3 \end{pmatrix}. For x = \begin{pmatrix} x1 \\ x2 \\ x3 \end{pmatrix}, we compute:
T(x) = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 1 & 3 \end{pmatrix} \begin{pmatrix} x1 \\ x2 \\ x3 \end{pmatrix} = \begin{pmatrix} x1 − x3 \\ 2x1 + x2 + 3x3 \end{pmatrix}.
18.3 Composition
For T1 : U → V and T2 : V → W , define:
T1 (x) = A1 x, T2 (x) = A2 x,
where:
A1 = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}, A2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
The composition T2 T1 corresponds to:
A2 A1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ -1 & 0 \end{pmatrix}.
19 Kernel and Image
Step 1: Kernel (Solve Ax = 0) For the map T from Section 17.3, ker(T) consists of all x with:
x1 − x3 = 0,
2x1 + x2 + 3x3 = 0.
From the first equation, x1 = x3. Substituting x1 = x3 into the second equation gives x2 = −5x3, so ker(T) = span{ \begin{pmatrix} 1 \\ -5 \\ 1 \end{pmatrix} }.
Step 2: Image (Find Span of Columns of A) To find im(T ), take the columns of
A:
Columns of A: \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 3 \end{pmatrix}.
Check for linear independence using the determinant of the 2 × 2 submatrices. We find
that the first two columns are linearly independent, so:
im(T) = span{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} } = R2.
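Both subspaces can be computed numerically; a sketch assuming NumPy and SciPy:

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0, -1], [2, 1, 3]])

K = null_space(A)               # orthonormal basis of ker(T), one column
print(K[:, 0] / K[0, 0])        # [ 1. -5.  1.]: the direction found above

print(np.linalg.matrix_rank(A)) # 2: the columns of A span all of R^2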
20 Invertibility and Isomorphisms
20.1 Invertibility Conditions
For T : U → V with dim(U ) = dim(V ) = n, the following are equivalent:
• T is bijective.
• ker(T ) = {0}.
• rank(T ) = n.
20.3 Example
Let A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}. To compute A⁻¹, we find:
det(A) = 2 · 1 − 1 · 1 = 1.
Thus, A is invertible, and its inverse is given by:
A⁻¹ = \frac{1}{det(A)} \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}.
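The 2 × 2 adjugate formula used here, as a short sketch assuming NumPy:

import numpy as np

def inv2(A):
    # For A = [[a, b], [c, d]]: inverse = (1/det A) * [[d, -b], [-c, a]].
    (a, b), (c, d) = A
    return np.array([[d, -b], [-c, a]]) / (a * d - b * c)

A = np.array([[2.0, 1], [1, 1]])
print(inv2(A))                               # [[ 1. -1.] [-1.  2.]]
print(np.allclose(inv2(A) @ A, np.eye(2)))   # True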
Matrix Decompositions
Eigendecomposition
A real symmetric matrix M can be decomposed as:
M = SΛS⁻¹ = SΛS⊤, (14)
where S is an orthogonal matrix (its columns are eigenvectors of M, so S⁻¹ = S⊤) and Λ is a diagonal matrix of eigenvalues. This orthogonal form of the decomposition applies exactly to symmetric matrices.
For example, let:
M = \begin{pmatrix} 7/2 & 3/2 \\ 3/2 & 7/2 \end{pmatrix}.
Its eigenvalues are λ1 = 5 and λ2 = 2, with eigenvectors v1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} and v2 = \begin{pmatrix} -1 \\ 1 \end{pmatrix}. Thus M = SΛS⊤ with S = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} and Λ = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}.
Singular Value Decomposition (SVD)
Any m × n matrix A can be factored as A = UΣV⊤, where U and V are orthogonal and Σ is diagonal with the singular values of A on its diagonal.
Properties of SVD:
• The non-zero eigenvalues of AA⊤ and A⊤ A are the squared singular values of A.
Example: For A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}, the eigenvalues of A⊤A are λ1 ≈ 90.74 and λ2 ≈ 0.26. The singular values are σ1 = √λ1 ≈ 9.53 and σ2 = √λ2 ≈ 0.51.
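A numerical check with numpy.linalg.svd, assuming the 3 × 2 matrix above:

import numpy as np

A = np.array([[1.0, 2], [3, 4], [5, 6]])
sigma = np.linalg.svd(A, compute_uv=False)
print(sigma)                        # approx [9.526 0.514]
print(sigma**2)                     # approx [90.74 0.26]
print(np.linalg.eigvalsh(A.T @ A))  # same eigenvalues, ascending order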
Applications of SVD
• Low-rank matrix approximation: useful for image compression, recommender sys-
tems, and data reduction.
Principal Component Analysis (PCA)
Key Steps:
1. Center the data by subtracting the mean of each feature.
2. Compute the covariance matrix of the centered data.
3. Compute its eigenvalues and eigenvectors (equivalently, the SVD of the data matrix).
4. Project the data onto the leading eigenvectors (the principal components).
Applications of PCA
• Dimensionality reduction in machine learning.
• Noise filtering.
Variance Explained:
Proportion of variance explained by component i = λi / Σ_j λj. (16)
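A compact PCA sketch following the key steps above, assuming NumPy and toy random data:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 features

Xc = X - X.mean(axis=0)                # 1. center the data
C = np.cov(Xc, rowvar=False)           # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # 3. eigendecomposition (ascending)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

Z = Xc @ eigvecs[:, :2]                # 4. project onto top 2 components
print(eigvals / eigvals.sum())         # variance explained, equation (16)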