
Linear Algebra Summary

Elia Besmer

Introduction to Linear Algebra


Linear Algebra focuses on solving systems of linear equations, analyzing geometric
phenomena (rotations, reflections), and underpinning computational methods. Example
applications include engineering, data modeling, traffic flow, and astronomy.

1 Vectors and Linear Combinations


1.1 Definitions
A vector in Rn is an ordered tuple of real numbers:

v = (v1, v2, . . . , vn).

Vector Addition: For v = (v1, v2) and w = (w1, w2):

v + w = (v1 + w1, v2 + w2).

Scalar Multiplication: For scalar c ∈ R:

cv = (cv1, cv2).

1.2 Examples
   
• Vector Addition: v = (1, 2), w = (3, −1):

  v + w = (1 + 3, 2 + (−1)) = (4, 1).

• Scalar Multiplication: c = −1, w = (3, −1):

  cw = (−1 · 3, −1 · (−1)) = (−3, 1).

1.3 Linear Combinations
A linear combination of vectors v, w ∈ Rn is:

c1 v + c2 w, c1 , c2 ∈ R.
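A minimal NumPy sketch of a linear combination, using the vectors from the examples above and the illustrative scalars c1 = 2, c2 = −1:

import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

# linear combination c1*v + c2*w with illustrative scalars c1 = 2, c2 = -1
c1, c2 = 2.0, -1.0
print(c1 * v + c2 * w)   # [-1.  5.]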

2 Lengths and Dot Products


2.1 Definitions
Dot Product: For v = (v1, v2) and w = (w1, w2):

v · w = v1 w1 + v2 w2.

Vector Length:

∥v∥ = √(v · v).

2.2 Examples
• Dot Product: v = (4, 2), w = (−1, 2):

  v · w = 4(−1) + 2(2) = 0 (orthogonal vectors).

• Length: v = (−1, 2):

  ∥v∥ = √((−1)² + 2²) = √5.

2.3 Angle Between Vectors


The angle θ between vectors u, v is:

cos θ = (u · v) / (∥u∥ ∥v∥).
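A minimal NumPy sketch of the dot product, length, and angle computations, using the vectors from the examples above:

import numpy as np

u = np.array([4.0, 2.0])
v = np.array([-1.0, 2.0])

dot = np.dot(u, v)                                # 0.0, so u and v are orthogonal
length_v = np.linalg.norm(v)                      # sqrt(5)
cos_theta = dot / (np.linalg.norm(u) * length_v)
theta = np.arccos(cos_theta)                      # pi/2 (90 degrees) for orthogonal vectors
print(dot, length_v, theta)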

3 Matrices
3.1 Definitions
A matrix is a rectangular array of numbers A = (aij) with m rows and n columns. For example:

A = \begin{pmatrix} 2 & −1 & 0 \\ 3 & −3/2 & 6 \\ −1.23 & 0 & 10 \end{pmatrix}.

3.2 Operations
Matrix Addition: For A = (aij ), B = (bij ):
A + B = (aij + bij ).
Scalar Multiplication: For scalar β:
βA = (βaij ).
Matrix Multiplication: For A (m × n) and B (n × p):

(AB)ij = Σ_{k=1}^{n} aik bkj.

3.3 Examples

• Matrix Multiplication: A = \begin{pmatrix} 1 & 3 \\ 0 & 2 \end{pmatrix}, B = \begin{pmatrix} −2 & −3 \\ 1 & −4 \end{pmatrix}:

  AB = \begin{pmatrix} 1(−2) + 3(1) & 1(−3) + 3(−4) \\ 0(−2) + 2(1) & 0(−3) + 2(−4) \end{pmatrix} = \begin{pmatrix} 1 & −15 \\ 2 & −8 \end{pmatrix}.
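The same product, checked numerically with NumPy (a minimal sketch):

import numpy as np

A = np.array([[1, 3], [0, 2]])
B = np.array([[-2, -3], [1, -4]])

# matrix product (AB)_ij = sum_k a_ik * b_kj
print(A @ B)   # [[  1 -15]
               #  [  2  -8]]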

Conclusion
Linear algebra provides tools to solve complex systems, represent geometric transforma-
tions, and model real-world phenomena. This summary highlights core concepts and
serves as a foundation for further exploration.

Overview
The central problem of linear algebra is solving systems of linear equations. These systems
can be expressed in matrix form as:
Ax = b,
where A is the coefficient matrix, x is the vector of unknowns, and b is the right-hand
side vector. Systems are categorized as:
1. Homogeneous: b = 0, with at least the trivial solution x = 0.
2. Non-homogeneous: b ̸= 0, potentially having unique, no, or infinitely many
solutions.

Solution Properties
A system’s solutions can be classified into three cases:
• No solutions: The system is inconsistent.
• Unique solution: The system is consistent with exactly one solution.
• Infinitely many solutions: The system is consistent but has free parameters.

Key Definitions
• Consistent system: Has at least one solution.

• Trivial solution: The zero solution for homogeneous systems.

• Rank: The number of linearly independent rows in a matrix.

Elementary Row Operations


To solve systems of equations, we use the following operations:

• (R1) Add a multiple of one row to another.

• (R2) Interchange two rows.

• (R3) Multiply a row by a nonzero scalar.

Row Echelon Form (REF)


A matrix is in REF if:

1. All zero rows are at the bottom.

2. The first nonzero entry (pivot) in each row is to the right of the pivot in the row
above.

Reduced Row Echelon Form (RREF)


In addition to REF properties, RREF satisfies:

1. All pivots are 1.

2. Each pivot is the only nonzero entry in its column.

Gaussian Elimination
This algorithm transforms a system into REF using elementary row operations. The
steps are:

1. Form the augmented matrix [A|b].

2. Use row operations to simplify A into REF.

3. Solve for the variables starting from the last row (back-substitution).

Example: Solve the system


2x + y = 1,
4x + 2y = 1.
Solution: Subtract twice the first equation from the second to get 0 = −1, indicating no
solutions.
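A minimal NumPy sketch of the same conclusion via the rank criterion of Theorem 3.1 below (rank(A) ≠ rank([A|b]) means the system is inconsistent):

import numpy as np

A = np.array([[2.0, 1.0], [4.0, 2.0]])
b = np.array([1.0, 1.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))   # augmented matrix [A|b]
print(rank_A, rank_Ab)   # 1 2 -> ranks differ, so the system has no solution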

Inverse Matrices
A square matrix A is invertible if there exists A−1 such that:

AA−1 = A−1 A = I.

Properties
• (A−1 )−1 = A.
• (AB)−1 = B −1 A−1 .
• (AT )−1 = (A−1 )T .

Example: Compute the inverse of

A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.
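One way to carry out the computation numerically, as a minimal NumPy sketch:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_inv = np.linalg.inv(A)                    # [[-2.   1. ], [ 1.5 -0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))    # True: A A^{-1} = I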

LU Decomposition
A square matrix A can be factored as A = LU (in general after suitable row exchanges, giving P A = LU), where:
• L is a lower triangular matrix.
• U is an upper triangular matrix.

Algorithm
1. Perform Gaussian elimination to form U .
2. Extract the multipliers used for elimination into L.

Example: Decompose the matrix

A = \begin{pmatrix} 2 & 1 & 1 \\ 4 & −6 & 0 \\ −2 & 7 & 2 \end{pmatrix}.

Solution:

1. Perform row operations to form U:

   U = \begin{pmatrix} 2 & 1 & 1 \\ 0 & −8 & −2 \\ 0 & 0 & 1 \end{pmatrix}.

2. Extract L using the multipliers:

   L = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ −1 & −1 & 1 \end{pmatrix}.

3. Verify: A = LU.
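A minimal SciPy sketch of the same factorization. Note that scipy.linalg.lu uses partial pivoting, so it returns a permutation P with A = P L U, and its factors may differ from the hand computation above:

import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])

P, L, U = lu(A)                      # factorization with row exchanges
print(np.allclose(A, P @ L @ U))     # True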

Key Theorems
Theorem 3.1 A system Ax = b is consistent if and only if rank(A) = rank([A|b]).

Theorem 3.2 The number of free variables in Ax = b is n − rank(A), where n is the number of variables.

Fields
Definition of a Field
A field K is a set with two operations, addition (+) and multiplication (·), satisfying the
following axioms:

• Addition Axioms:

– (A1) Commutativity: α + β = β + α
– (A2) Associativity: (α + β) + γ = α + (β + γ)
– (A3) Identity: α + 0 = α
– (A4) Inverse: α + (−α) = 0

• Multiplication Axioms:

– (M1) Commutativity: α · β = β · α
– (M2) Associativity: (α · β) · γ = α · (β · γ)
– (M3) Identity: α · 1 = α
– (M4) Inverse: For α ̸= 0, α · α−1 = 1

• Distributive Law: (α + β) · γ = α · γ + β · γ

Examples
• Real numbers R: 2 + 3 = 5, 2 · 3 = 6.

• Rational numbers Q: 1/2 + 1/3 = 5/6, 1/2 · 2/3 = 1/3.

• Finite field F5 : 3 + 4 ≡ 2 mod 5, 3 · 4 ≡ 2 mod 5.
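A minimal Python sketch of arithmetic in F5 (integers modulo 5):

p = 5  # F5: integers modulo the prime 5
print((3 + 4) % p)     # 2, matching 3 + 4 ≡ 2 mod 5
print((3 * 4) % p)     # 2, matching 3 · 4 ≡ 2 mod 5
print(pow(3, -1, p))   # 2, the multiplicative inverse of 3, since 3 · 2 ≡ 1 mod 5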

Vector Spaces
Definition of a Vector Space
A vector space V over a field K is a set equipped with two operations:

• Vector addition: + : V × V → V

• Scalar multiplication: · : K × V → V

satisfying the following axioms for all u, v, w ∈ V and α, β ∈ K:

• (A1) u + v = v + u (commutativity)
• (A2) (u + v) + w = u + (v + w) (associativity)
• (A3) v + 0 = v (identity)
• (A4) v + (−v) = 0 (inverse)
• (V1) α(u + v) = αu + αv
• (V2) (α + β)v = αv + βv
• (V3) (αβ)v = α(βv)
• (V4) 1 · v = v

Examples
• R2 : (1, 2) + (3, 4) = (4, 6), 2 · (1, 2) = (2, 4).

• M2,2(R): \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}.

• R[x]≤2 : (1 + x) + (x² − x) = 1 + x², 2 · (1 + x) = 2 + 2x.

Subspaces
Definition of a Subspace
A subset W ⊆ V is a subspace if:
• 0∈W
• u + v ∈ W for all u, v ∈ W
• αu ∈ W for all u ∈ W and α ∈ K

Examples
• R2 : The subset {(x, 2x) : x ∈ R} is a subspace.

• M2,2(R): The subset of upper triangular matrices \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} is a subspace.

• Null space of A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}: the solutions of Ax = 0, e.g., x = \begin{pmatrix} −2 \\ 1 \end{pmatrix}.

Linear Independence and Bases


Linear Independence
A set of vectors {v1 , . . . , vn } is linearly independent if the only solution to
α1 v1 + α2 v2 + · · · + αn vn = 0
is α1 = α2 = · · · = αn = 0.

Examples
• Vectors {(1, 0), (0, 1)} in R2 are linearly independent because α1 (1, 0) + α2 (0, 1) =
(0, 0) implies α1 = α2 = 0.
• Vectors {(1, 2), (2, 4)} in R2 are linearly dependent because 2(1, 2) − 1(2, 4) = (0, 0).
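A minimal NumPy check of linear independence via the rank of the matrix whose rows are the given vectors:

import numpy as np

independent = np.array([[1.0, 0.0], [0.0, 1.0]])
dependent = np.array([[1.0, 2.0], [2.0, 4.0]])

print(np.linalg.matrix_rank(independent))   # 2 -> {(1,0), (0,1)} is linearly independent
print(np.linalg.matrix_rank(dependent))     # 1 -> {(1,2), (2,4)} is linearly dependent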

Bases and Dimension


A basis of V is a linearly independent set that spans V . If V has a basis with n vectors,
then dim(V ) = n.

Examples
• Standard basis of R3 : {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

• Basis of M2,2(R): \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

Fundamental Subspaces
For the matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}:

• Column space: C(A) = span{(1, 3, 5), (2, 4, 6)}.

• Null space: N(A) = {0}, since the two columns of A are linearly independent.

• Row space: C(Aᵀ) = span{(1, 2), (3, 4)}.

Rank-Nullity Theorem
rank(A) + nullity(A) = 2 (the number of columns); here rank(A) = 2 and nullity(A) = 0.
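A minimal NumPy check of the rank–nullity relation for this matrix:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # n - rank(A), with n = number of columns
print(rank, nullity)                 # 2 0 -> rank + nullity = 2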

Spanning Sets
Definition of a Spanning Set
A set of vectors {v1 , . . . , vn } spans V if every v ∈ V can be written as a linear combination
of v1 , . . . , vn .

Examples
• R3 : The vectors {(1, 0, 0), (0, 1, 0), (0, 0, 1)} span R3.

• M2,2(R): The matrices \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} span M2,2(R).

4 Orthogonality
4.1 Dot Product in Rn
The dot product (or inner product) of two vectors x, y ∈ Rn is defined as:

x · y = Σ_{i=1}^{n} xi yi.   (1)

Properties of the Dot Product:

• Positivity: x · x ≥ 0, and x · x = 0 only if x = 0.

• Commutative Law: x · y = y · x.

• Distributive Law: (x + y) · z = x · z + y · z.

• Scalar Multiplication: (cx) · y = c(x · y), for c ∈ R.

Example: Let x = (1, 2, 3) and y = (4, 5, 6). Then,

x · y = 1 · 4 + 2 · 5 + 3 · 6 = 32. (2)

4.2 Orthogonal Vectors


• Two vectors x, y ∈ Rn are orthogonal if x · y = 0. Denoted as x ⊥ y.

• A vector x is orthogonal to a set Y ⊂ Rn if x · y = 0 for all y ∈ Y .

Example: In R3 , the vector v = (0, 0, 1) is orthogonal to the plane z = 0 because


v · w = 0 for all w = (x, y, 0).

4.3 Orthogonal Sets and Subspaces


• A set S ⊂ V is orthogonal if v · w = 0 for all distinct v, w ∈ S.

• Two subspaces V and W are orthogonal if v · w = 0 for all v ∈ V and w ∈ W .

Example: The floor of a room (extended to infinity) and the line where two walls meet
are orthogonal subspaces in R3 .

5 Orthogonal Complements
• The orthogonal complement S ⊥ of a subspace S is the set of all vectors orthog-
onal to every vector in S.

• dim(S) + dim(S ⊥ ) = n for any subspace S ⊂ Rn .

Example: For S = {(x, 0, 0) : x ∈ R} in R3 , S ⊥ = {(0, y, z) : y, z ∈ R}.

6 Orthogonal Projections
• Any vector x ∈ Rn can be uniquely decomposed as x = p + o, where p ∈ V and
o ∈ V ⊥.

• p is called the orthogonal projection of x onto V .

Projection Formula: If V = span(a), then the projection of x onto V is given by:

p = ((x · a)/(a · a)) a.   (3)
Example: Project b = (2, 3, 4) onto the z-axis:

p = (0, 0, 4). (4)
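A minimal NumPy sketch of formula (3), applied to the example of projecting b = (2, 3, 4) onto the z-axis (spanned by a = (0, 0, 1)):

import numpy as np

def project_onto(x, a):
    # orthogonal projection of x onto span(a): p = (x·a / a·a) a
    return (np.dot(x, a) / np.dot(a, a)) * a

b = np.array([2.0, 3.0, 4.0])
a = np.array([0.0, 0.0, 1.0])    # direction of the z-axis
print(project_onto(b, a))        # [0. 0. 4.]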

7 Gram-Schmidt Process
• Converts a basis {x1 , x2 , . . . , xn } into an orthogonal (or orthonormal) basis {v1 , v2 , . . . , vn }.

• Formula:

  v1 = x1,   (5)
  v2 = x2 − (⟨x2, v1⟩/⟨v1, v1⟩) v1,   (6)
  vk = xk − Σ_{i=1}^{k−1} (⟨xk, vi⟩/⟨vi, vi⟩) vi.   (7)

Example: Apply Gram-Schmidt to x1 = (1, 2, 2) and x2 = (−1, 0, 2) to obtain an orthogonal basis.
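A minimal NumPy sketch of formulas (5)–(7), applied to the vectors of this example (the helper name gram_schmidt is an illustrative choice):

import numpy as np

def gram_schmidt(vectors):
    # orthogonalize a list of linearly independent vectors via (5)-(7)
    basis = []
    for x in vectors:
        v = np.array(x, dtype=float)
        for u in basis:
            v = v - (np.dot(x, u) / np.dot(u, u)) * u   # subtract the projection onto u
        basis.append(v)
    return basis

x1 = np.array([1.0, 2.0, 2.0])
x2 = np.array([-1.0, 0.0, 2.0])
v1, v2 = gram_schmidt([x1, x2])
print(v1, v2, np.dot(v1, v2))   # v2 ≈ [-4/3, -2/3, 4/3], dot product ≈ 0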

8 Change of Coordinates
• Coordinates of a vector v with respect to an orthogonal basis {v1 , v2 , . . . , vn } are given by ci = ⟨v, vi⟩/⟨vi, vi⟩.

• Transition matrices convert coordinates between bases.

Example: Find the coordinates of v = (1, 2) with respect to a basis {(1, 0), (0, 1)}.

9 Determinants of Matrices
The determinant of a square matrix A, denoted as det(A) or |A|, is a scalar value. The
general definition involves the following cases:
• 1x1 Matrix: For A = [a11 ], det(A) = a11 .

• 2x2 Matrix:
  det(A) = \begin{vmatrix} a11 & a12 \\ a21 & a22 \end{vmatrix} = a11 a22 − a12 a21.

• 3x3 Matrix:
det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a12 a21 a33 − a11 a23 a32 .
 
Example: For A = \begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}, det(A) = 2 · 4 − 1 · 3 = 8 − 3 = 5.
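The same determinant, checked with NumPy (minimal sketch):

import numpy as np

A = np.array([[2.0, 1.0], [3.0, 4.0]])
print(np.linalg.det(A))   # ≈ 5.0 (floating-point result)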

10 Properties of Determinants
Key properties of determinants include:
1. Row/Column Operations:
• Multiplying a row/column by a scalar k: det(B) = k · det(A).
• Adding a scalar multiple of one row to another does not change the determi-
nant.
• Interchanging two rows or columns changes the sign of the determinant: det(B) =
− det(A).
2. det(A · B) = det(A) · det(B).
3. det(A−1) = 1/det(A), if A is invertible.

4. det(A⊤ ) = det(A).
5. If a matrix A is singular, det(A) = 0.

11 Minors and Cofactors


The minor Mij of an n × n matrix A is the determinant of the (n − 1) × (n − 1) submatrix obtained by removing the ith row and jth column of A. The corresponding cofactor is

Aij = (−1)^{i+j} Mij.

Example: For A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, M11 = \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} = 5 · 9 − 6 · 8 = −3.

12 Cofactor Expansion
The determinant can be calculated using cofactor expansion along any row or column. For row i:

det(A) = Σ_{j=1}^{n} (−1)^{i+j} aij Mij.

Example: For A = \begin{pmatrix} 3 & −2 & 0 \\ 1 & 0 & 1 \\ −2 & 3 & 0 \end{pmatrix}, expand along the first row:

det(A) = 3 \begin{vmatrix} 0 & 1 \\ 3 & 0 \end{vmatrix} − (−2) \begin{vmatrix} 1 & 1 \\ −2 & 0 \end{vmatrix}.

Calculation yields det(A) = −5.
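A minimal NumPy sketch of cofactor expansion along the first row (a recursive, illustrative implementation suitable only for small matrices):

import numpy as np

def det_cofactor(A):
    # determinant by cofactor expansion along the first row
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # remove row 0 and column j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)      # (-1)^(i+j) with i = 0
    return total

A = np.array([[3.0, -2.0, 0.0], [1.0, 0.0, 1.0], [-2.0, 3.0, 0.0]])
print(det_cofactor(A), np.linalg.det(A))   # both ≈ -5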

13 Special Matrices
• The determinant of a diagonal or triangular matrix is the product of its diagonal
entries.

• For a block diagonal matrix:

  det(A) = ∏_{i=1}^{r} det(Aii),

where Aii are the square diagonal blocks.

14 Inversion and Cramer’s Rule


The inverse of a matrix A (if det(A) ̸= 0) can be computed using:

A−1 = (1/det(A)) · (Ac)ᵀ,

where Ac is the cofactor matrix.


Cramer’s Rule: For a system Ax = b, the solution for xi is:

xi = det(Ai)/det(A),

where Ai is obtained by replacing the ith column of A with b.

Example: Solve the system

2x + z = 1,
y − 2z = 0,
x + y + z = −1.

det(A) = \begin{vmatrix} 2 & 0 & 1 \\ 0 & 1 & −2 \\ 1 & 1 & 1 \end{vmatrix} = 5,

det(A1) = \begin{vmatrix} 1 & 0 & 1 \\ 0 & 1 & −2 \\ −1 & 1 & 1 \end{vmatrix} = 4.

Thus, x = 4/5, y = −6/5, z = −3/5.
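A minimal NumPy sketch of Cramer’s rule applied to this system (the helper name cramer_solve is an illustrative choice):

import numpy as np

def cramer_solve(A, b):
    # Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with column i replaced by b
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 0.0, 1.0], [0.0, 1.0, -2.0], [1.0, 1.0, 1.0]])
b = np.array([1.0, 0.0, -1.0])
print(cramer_solve(A, b))   # [ 0.8 -1.2 -0.6] = (4/5, -6/5, -3/5)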

Definitions and Key Concepts


Eigenvalues and Eigenvectors
Let A be an n × n matrix. A scalar λ ∈ R is an eigenvalue of A if there exists a nonzero
vector v ∈ Rn such that:
Av = λv. (8)
The vector v is called an eigenvector of A associated with the eigenvalue λ. The zero
vector is never considered an eigenvector.

Eigenspace
The eigenspace of A corresponding to an eigenvalue λ is defined as:

N(A − λI) = {v | (A − λI)v = 0},   (9)

where I is the identity matrix; the nonzero vectors in this space are the eigenvectors associated with λ.

The Characteristic Equation


To find eigenvalues, solve the characteristic equation:

det(A − λI) = 0. (10)


 
a b
For a 2 × 2 matrix A = , the characteristic equation is:
c d

λ2 − (a + d)λ + (ad − bc) = 0. (11)

Example
Consider A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}. The characteristic equation is:

det(A − λI) = det \begin{pmatrix} 2 − λ & 0 \\ 0 & 3 − λ \end{pmatrix} = (2 − λ)(3 − λ) = 0.

The eigenvalues are λ1 = 2 and λ2 = 3, with eigenvectors v1 = (1, 0) and v2 = (0, 1).

Diagonalization
A matrix A is diagonalizable if there exists an invertible matrix S and a diagonal matrix
Λ such that:
A = SΛS −1 , (12)
where the columns of S are eigenvectors of A, and Λ contains the corresponding eigen-
values on its diagonal.

Example
Consider A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}. The characteristic equation is:

det(A − λI) = det \begin{pmatrix} 4 − λ & 1 \\ 2 & 3 − λ \end{pmatrix} = (4 − λ)(3 − λ) − 2 = λ² − 7λ + 10 = 0.

The eigenvalues are λ1 = 5 and λ2 = 2. Solving (A − λI)v = 0, we find eigenvectors v1 = (1, 1) and v2 = (1, −2). Thus:

S = \begin{pmatrix} 1 & 1 \\ 1 & −2 \end{pmatrix},   Λ = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}.
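A minimal NumPy check of this diagonalization (np.linalg.eig may order or scale the eigenvectors differently, but A = SΛS−1 still holds):

import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)          # columns of S are eigenvectors
Lam = np.diag(eigvals)
print(eigvals)                         # eigenvalues 5 and 2 (order may vary)
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))   # True: A = S Λ S^{-1}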

Properties of Similar Matrices


• Similar matrices share the same eigenvalues.

• If A and B are similar, det(A) = det(B) and rank(A) = rank(B).

Real Symmetric Matrices


A real symmetric matrix A has:
• Real eigenvalues.

• Orthonormal eigenvectors.
It can be diagonalized as:
A = SΛS ⊤ , (13)
where S is an orthogonal matrix.

Key Theorems
Linear Independence of Eigenvectors
If λ1 , λ2 , . . . , λr are distinct eigenvalues of A, their corresponding eigenvectors v1 , v2 , . . . , vr
are linearly independent.

Diagonalizability Criterion
An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.

15 Linear Transformations
15.1 Definition
A function T : U → V between vector spaces over the same field K is called a linear
transformation if for all u1 , u2 ∈ U and α, β ∈ K:

T (u1 + u2 ) = T (u1 ) + T (u2 ),


T (αu) = αT (u).

This can be combined into a single condition:

T (αu1 + βu2 ) = αT (u1 ) + βT (u2 ).

15.2 Properties
Let T : U → V be a linear map:
• T (0U ) = 0V .
• T (−u) = −T (u) for all u ∈ U .

15.3 Examples
1. Let U = R2, V = R2, and let T be defined by the matrix A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}. For x = \begin{pmatrix} x1 \\ x2 \end{pmatrix}, the map T(x) = Ax becomes:

   T \begin{pmatrix} x1 \\ x2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} x1 \\ x2 \end{pmatrix} = \begin{pmatrix} 2x1 + x2 \\ 3x2 \end{pmatrix}.

16 Isomorphisms
16.1 Definition
A linear map T : U → V is an isomorphism if it is a bijection. Two vector spaces U
and V are isomorphic if there exists an isomorphism T : U → V .

16.2 Key Properties


Let T : U → V be an isomorphism:
• T −1 : V → U is also a linear map.
• Linear independence, spanning, and basis properties are preserved under isomor-
phisms.

16.3 Example
Let T : R2 → R2 be defined by T(x) = Ax, where A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. To check if T is an isomorphism, we compute det(A):

det(A) = 1 · 4 − 2 · 3 = −2 ̸= 0.

Since det(A) ̸= 0, T is invertible, and thus an isomorphism.

17 Linear Maps and Matrices


17.1 Representation of Linear Maps
Given bases {e1 , . . . , en } for U and {v1 , . . . , vm } for V , the matrix A = (aij ) of a linear map T : U → V is defined by:

T(ej) = Σ_{i=1}^{m} aij vi,

where the aij ∈ K are uniquely determined.

17.2 Action on Coordinates
Let u ∈ U with coordinates [u]U relative to a basis of U . Then T (u) has coordinates
[T (u)]V satisfying:
[T (u)]V = A[u]U .

17.3 Example
Let T : R3 → R2 be defined by the matrix A = \begin{pmatrix} 1 & 0 & −1 \\ 2 & 1 & 3 \end{pmatrix}. For x = \begin{pmatrix} x1 \\ x2 \\ x3 \end{pmatrix}, we compute:

T \begin{pmatrix} x1 \\ x2 \\ x3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & −1 \\ 2 & 1 & 3 \end{pmatrix} \begin{pmatrix} x1 \\ x2 \\ x3 \end{pmatrix} = \begin{pmatrix} x1 − x3 \\ 2x1 + x2 + 3x3 \end{pmatrix}.

18 Operations on Linear Maps


18.1 Addition and Scalar Multiplication
Let T1 , T2 : U → V and α ∈ K. Define:

(T1 + T2 )(u) = T1 (u) + T2 (u),


(αT1 )(u) = αT1 (u).

18.2 Example: Addition of Linear Maps


Let T1 , T2 : R2 → R2 be defined by T1 (x) = A1 x and T2 (x) = A2 x, where:

A1 = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix},   A2 = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}.

The sum T1 + T2 corresponds to the matrix A1 + A2:

A1 + A2 = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ −1 & 1 \end{pmatrix}.

18.3 Composition
For T1 : U → V and T2 : V → W , define:

(T2 T1 )(u) = T2 (T1 (u)).

The matrix of T2 T1 is the product of the matrix of T2 with the matrix of T1 , in that order.

18.4 Example: Composition of Linear Maps


Let T1 : R2 → R2 and T2 : R2 → R2 be defined by:

T1 (x) = A1 x, T2 (x) = A2 x,

where:

A1 = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix},   A2 = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}.

The composition T2 T1 corresponds to:

A2 A1 = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ −1 & 0 \end{pmatrix}.

19 Kernels and Images


19.1 Definitions
For T : U → V :
• The image of T is im(T ) = {T (u) | u ∈ U }.
• The kernel of T is ker(T ) = {u ∈ U | T (u) = 0V }.

19.2 Step-by-Step Example: Finding Kernel and Image


 
Let T : R3 → R2 be defined by the matrix A = \begin{pmatrix} 1 & 0 & −1 \\ 2 & 1 & 3 \end{pmatrix}.

Step 1: Kernel (Solve Ax = 0). To find ker(T), solve the homogeneous system Ax = 0:

\begin{pmatrix} 1 & 0 & −1 \\ 2 & 1 & 3 \end{pmatrix} \begin{pmatrix} x1 \\ x2 \\ x3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
Expanding this system gives:

x1 − x3 = 0,
2x1 + x2 + 3x3 = 0.

From the first equation, x1 = x3 . Substituting x1 = x3 into the second equation gives:

2x3 + x2 + 3x3 = 0 =⇒ x2 = −5x3 .

Thus, the kernel is:

ker(T) = span{(1, −5, 1)}.

Step 2: Image (Find Span of Columns of A). To find im(T), take the columns of A:

Columns of A: \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} −1 \\ 3 \end{pmatrix}.

Check for linear independence using the determinants of the 2 × 2 submatrices. The first two columns are linearly independent, so:

im(T) = span{(1, 2), (0, 1)}.
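A minimal SciPy/NumPy check of the kernel and image computed above (scipy.linalg.null_space returns an orthonormal basis, so it gives a scalar multiple of (1, −5, 1)):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0, -1.0], [2.0, 1.0, 3.0]])

kernel = null_space(A)               # one basis vector, proportional to (1, -5, 1)
rank = np.linalg.matrix_rank(A)      # 2 -> the two pivot columns span im(T)
print(kernel.ravel(), rank)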

20 Invertibility and Isomorphisms
20.1 Invertibility Conditions
For T : U → V with dim(U ) = dim(V ) = n, the following are equivalent:
• T is bijective.
• ker(T ) = {0}.
• rank(T ) = n.

20.2 Matrix Invertibility


A matrix A is invertible if there exists A−1 such that:
AA−1 = In , A−1 A = In .

20.3 Example
Let A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}. To compute A−1, we find:

det(A) = 2 · 1 − 1 · 1 = 1.

Thus, A is invertible, and its inverse is given by:

A−1 = (1/det(A)) \begin{pmatrix} 1 & −1 \\ −1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & −1 \\ −1 & 2 \end{pmatrix}.

Matrix Decompositions
Eigendecomposition
A real symmetric matrix M can be decomposed as:

M = SΛS−1 = SΛSᵀ,   (14)

where S is an orthogonal matrix (its columns are orthonormal eigenvectors) and Λ is a diagonal matrix (eigenvalues). Such an orthogonal eigendecomposition is guaranteed for symmetric matrices.

For example, let M = \begin{pmatrix} 3.5 & 1.5 \\ 1.5 & 3.5 \end{pmatrix}. Its eigenvalues are λ1 = 5 and λ2 = 2, with eigenvectors v1 = (1, 1) and v2 = (−1, 1). Thus M = SΛSᵀ, where the columns of S are the normalized eigenvectors and Λ = diag(5, 2).

Singular Value Decomposition (SVD)


The SVD generalizes eigendecomposition to any m × n real matrix A:
A = U ΣV ⊤ , (15)
where:
• U is an m × m orthogonal matrix (left-singular vectors),
• Σ is an m × n diagonal matrix (singular values),
• V is an n × n orthogonal matrix (right-singular vectors).

Properties of SVD:

• The singular values in Σ are non-negative and ordered in decreasing magnitude.

• The left-singular vectors (columns of U ) are eigenvectors of AA⊤ .

• The right-singular vectors (columns of V ) are eigenvectors of A⊤ A.

• The non-zero eigenvalues of AA⊤ and A⊤ A are the squared singular values of A.
 
Example: For A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, the eigenvalues of AAᵀ are λ1 ≈ 90.40 and λ2 ≈ 0.60, so the singular values are σ1 = √λ1 ≈ 9.51 and σ2 = √λ2 ≈ 0.77.
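A minimal NumPy sketch of the SVD of this matrix:

import numpy as np

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(s)                                        # singular values ≈ [9.51  0.77]
print(s**2)                                     # ≈ [90.40  0.60], the nonzero eigenvalues of A Aᵀ
print(np.allclose(A, U @ np.diag(s) @ Vt))      # True: A = U Σ Vᵀ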

Applications of SVD
• Low-rank matrix approximation: useful for image compression, recommender sys-
tems, and data reduction.

• Least squares solutions in linear regression.

• Noise reduction in data.

Principal Component Analysis (PCA)


Overview
PCA identifies an orthonormal basis for data that maximizes variance along successive
axes. For a data matrix X (centered and scaled):

• Principal components correspond to eigenvectors of the covariance matrix S = X⊤X.

• Variances along components are proportional to eigenvalues of S.

Key Steps:

1. Center the data: subtract the mean from each feature.

2. Compute the covariance matrix S = X ⊤ X.

3. Perform eigendecomposition or SVD on S.

Example: Given centered data X with

S = X⊤X = \begin{pmatrix} 5 & 2 \\ 2 & 5 \end{pmatrix}.

The eigenvalues are λ1 = 7, λ2 = 3, and the corresponding (normalized) eigenvectors are v1 = (1, 1)/√2 and v2 = (1, −1)/√2.
Applications of PCA
• Dimensionality reduction in machine learning.

• Exploratory data analysis.

• Noise filtering.

• Visualization of high-dimensional data.

Variance Explained:
Proportion of variance explained by component i = λi / Σ_j λj.   (16)
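A minimal NumPy sketch of the PCA steps above on a small hypothetical data matrix X (rows are samples, columns are features; the numbers are purely illustrative):

import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])   # hypothetical data
Xc = X - X.mean(axis=0)                 # step 1: center each feature

S = Xc.T @ Xc                           # step 2: covariance-type matrix S = X^T X
eigvals, eigvecs = np.linalg.eigh(S)    # step 3: eigendecomposition (S is symmetric)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort by decreasing eigenvalue; columns of eigvecs are the principal components

explained = eigvals / eigvals.sum()     # proportion of variance explained, as in (16)
print(explained)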

