SUMMARY OF BASIC THEORY
1. Eigenvectors and Eigenvalues
1.1. Definition
- Let A be a square n×n matrix. An eigenvector of A is a nonzero vector x in Rⁿ such that, for some scalar λ,
Ax = λx
- The scalar λ is called an eigenvalue of the matrix A.
- Eigenspace: If A is an n×n matrix with an eigenvalue λ, then the set of all
eigenvectors of λ, together with the zero vector, is a subspace of Rⁿ. This subspace
is called the eigenspace of λ.
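As a quick numerical check of the definition, here is a minimal NumPy sketch (the matrix A and the vector x below are an arbitrary illustrative pair, not taken from these notes):

```python
import numpy as np

# Illustrative 2x2 matrix and one of its eigenvectors
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 1.0])   # eigenvector of A with eigenvalue 3

print(A @ x)   # [3. 3.]
print(3 * x)   # [3. 3.] -- Ax equals the scalar multiple 3x
```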
1.2. The meaning
- Knowledge of eigenvectors and eigenvalues gives us deep insight into the
structure of a matrix. They reduce something complicated, namely matrix-vector
multiplication, to something simple, namely scalar-vector multiplication.
- Eigenvalues also tell us about invertibility: det(A) equals the product of the
eigenvalues of A, so A is invertible if and only if 0 is not an eigenvalue of A.
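A small NumPy illustration of this fact (the singular matrix A below is an arbitrary example):

```python
import numpy as np

# Singular matrix: the second row is twice the first
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)                         # contains 0, so A is not invertible
print(np.isclose(np.prod(eigenvalues),     # det(A) equals the product
                 np.linalg.det(A)))        # of the eigenvalues: True
```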
1.3. How to find the eigenvalues and eigenvectors
- Step 1: Find the characteristic polynomial PA(λ) = |A − λI|, and then solve |A − λI| = 0 to find the eigenvalues λ.
- Step 2: For every λᵢ, solve the system (A − λᵢI)x = 0 to find the corresponding
eigenvectors x.
- Step 3: The subspace of Rⁿ of all eigenvectors associated with λᵢ (together with the
zero vector) is called the eigenspace of A associated with λᵢ, and is denoted by Eλᵢ.
A worked sketch of these steps follows.
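The three steps can be carried out symbolically. A minimal SymPy sketch (the matrix A is an arbitrary example, chosen so the eigenvalues come out as integers):

```python
import sympy as sp

A = sp.Matrix([[4, 1],
               [2, 3]])
lam = sp.symbols('lambda')

# Step 1: characteristic polynomial P_A(lambda) = |A - lambda*I|, then its roots
p = (A - lam * sp.eye(2)).det()
eigenvalues = sp.solve(sp.Eq(p, 0), lam)
print(eigenvalues)   # [2, 5]

# Step 2: for each eigenvalue, solve (A - lambda_i*I)x = 0, i.e. take the nullspace
for ev in eigenvalues:
    basis = (A - ev * sp.eye(2)).nullspace()
    print(ev, [list(v) for v in basis])   # Step 3: a basis of the eigenspace E_lambda_i
```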
2. Orthogonal Systems, Orthonormal Basis, Orthogonal Matrix, Orthogonal Diagonalization
2.1. Orthogonal Systems and Orthonormal Basis
2.1.1. Definition
- A basis u1, u2, …, um ∈ Rᵐ is called orthogonal if each vector is nonzero and the
inner product of any two distinct vectors equals 0.
- A basis u1, u2, …, um ∈ Rᵐ is called orthonormal if it is orthogonal and the
Euclidean norm of each vector equals 1. Equivalently, the matrix U whose columns are
u1, u2, …, um satisfies
UᵀU = UUᵀ = I
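A quick NumPy check of this characterization (U below is a 45-degree rotation matrix, chosen for illustration):

```python
import numpy as np

# Columns of U form an orthonormal basis of R^2
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
U = np.array([[c, -s],
              [s,  c]])

print(np.allclose(U.T @ U, np.eye(2)))   # True
print(np.allclose(U @ U.T, np.eye(2)))   # True
```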
2.1.2. Some Properties
- A matrix U ∈ Mₙ(R) is called an orthogonal matrix if U⁻¹ = Uᵀ.
- det(Uᵀ) = det(U) and det(Uᵀ)·det(U) = det(UᵀU) = det(I) = 1, so det(U) = ±1.
- An orthogonal matrix preserves lengths and angles; when det(U) = 1 it represents a rotation.
2.1.3. Propositions
- A matrix A is an orthogonal matrix if and only if its set of column vectors (equivalently,
its set of row vectors) is an orthonormal set.
- If the product A·Aᵀ is the identity matrix I, then A is an orthogonal matrix. A sketch of this test follows.
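A minimal sketch of the test (the helper name is_orthogonal and the permutation matrix P are illustrative choices, not from these notes):

```python
import numpy as np

def is_orthogonal(A, tol=1e-10):
    """Return True if A is square and A A^T = I."""
    n, m = A.shape
    return n == m and np.allclose(A @ A.T, np.eye(n), atol=tol)

# A permutation matrix: its columns (and rows) form an orthonormal set
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(is_orthogonal(P))   # True
```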
2.2. Orthogonal Diagonalization
2.2.1. Definition
- A square, real matrix A is said to be orthogonally diagonalizable if A = P·D·P⁻¹ =
P·D·Pᵀ, where D is a diagonal matrix and P is an orthogonal matrix.
- If the matrix A can be orthogonally diagonalized, then A is symmetric, since
Aᵀ = (P·D·Pᵀ)ᵀ = P·Dᵀ·Pᵀ = P·D·Pᵀ = A.
2.2.2. The Gram-Schmidt Process
Let E = {e1, e2, …, em} be a linearly independent set in the vector space V. Then there exists
an orthogonal set F = {f1, f2, …, fm} satisfying ⟨e1, e2, …, em⟩ = ⟨f1, f2, …, fm⟩,
that is, F spans the same subspace as E.
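A minimal NumPy sketch of the process, using the standard projection formula fk = ek − Σ_{j&lt;k} (⟨ek, fj⟩ / ⟨fj, fj⟩)·fj (the input vectors are an arbitrary example):

```python
import numpy as np

def gram_schmidt(E):
    """Orthogonalize a linearly independent list of vectors."""
    F = []
    for e in E:
        f = e.astype(float).copy()
        for prev in F:
            # subtract the projection of e onto the already-built vector prev
            f -= (e @ prev) / (prev @ prev) * prev
        F.append(f)
    return F

E = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0])]
F = gram_schmidt(E)
print(F[0] @ F[1])   # 0.0 (up to rounding): F is an orthogonal set
```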
2.2.3. Orthogonal Diagonalization of a Real Symmetric Matrix A
- Step 1: Find the eigenvalues of A.
- Step 2: Find an orthonormal basis for each eigenspace. To find an orthonormal basis for
the eigenspace Eλk we follow these steps:
a) Select an arbitrary basis Ek of Eλk.
b) Use the Gram-Schmidt process (if necessary) to obtain an orthogonal basis Fk.
c) Normalize each vector in Fk by dividing it by its norm to obtain an
orthonormal basis.
(For a symmetric matrix, eigenvectors belonging to distinct eigenvalues are automatically
orthogonal, so Gram-Schmidt is only needed within each eigenspace.)
- Step 3: Conclude
A real symmetric matrix A can always be orthogonally diagonalized (the spectral theorem).
That is, A = P·D·Pᵀ, where the diagonal matrix D has the eigenvalues of A on its diagonal,
and the columns of the orthogonal matrix P are the eigenvectors of A from the orthonormal
bases obtained in Step 2. A sketch combining the steps follows.
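Putting the steps together in NumPy: numpy.linalg.eigh is specialized for symmetric matrices and returns the eigenvalues together with orthonormal eigenvectors, so it bundles Steps 1 and 2 (the matrix A is an arbitrary symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, so orthogonally diagonalizable

eigenvalues, P = np.linalg.eigh(A)   # columns of P: orthonormal eigenvectors
D = np.diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(2)))   # True: P is orthogonal
print(np.allclose(P @ D @ P.T, A))       # True: A = P D P^T
```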