
SUMMARY OF BASIC THEORY

1. Eigenvectors and Eigenvalues


1.1. Definition
- Let A be a square n×n matrix. An eigenvector of A is a nonzero vector x in Rⁿ such that, for some scalar λ,
Ax = λx.
- The scalar λ is called an eigenvalue of the matrix A.
- Eigenspace: If A is an n×n matrix with an eigenvalue λ, then the set of all eigenvectors of λ, together with the zero vector, is a subspace of Rⁿ. This subspace is called the eigenspace of λ. (A numerical check of the definition follows below.)
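To make the definition concrete, here is a minimal sketch in Python/NumPy; the matrix A and vector x are hypothetical examples chosen for illustration, not taken from the notes:

```python
import numpy as np

# Hypothetical example: x = (1, 1) is an eigenvector of this A with
# eigenvalue 3, because Ax = (3, 3) = 3 * x.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 1.0])
lam = 3.0

print(A @ x)                        # [3. 3.]
print(np.allclose(A @ x, lam * x))  # True: Ax = lambda * x
```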
1.2. The Meaning
- Knowledge of eigenvectors and eigenvalues gives us deep insight into the structure of a matrix: it reduces something complicated, matrix-vector multiplication, to something simple, scalar-vector multiplication.
- Eigenvalues also tell us about invertibility: A is invertible if and only if 0 is not an eigenvalue of A (see the check below).
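A quick illustration of the invertibility criterion (again with a hypothetical matrix): when the rows of A are linearly dependent, 0 is an eigenvalue and the determinant vanishes.

```python
import numpy as np

# The second row is twice the first, so 0 is an eigenvalue
# and A is not invertible.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.eigvals(A))  # eigenvalues 0 and 5 (up to rounding)
print(np.linalg.det(A))      # ~0.0, so A is singular
```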
1.3. How to Find the Eigenvalues and Eigenvectors
- Step 1: Form the characteristic polynomial PA(λ) = |A − λI| and solve |A − λI| = 0 to find the eigenvalues λ.
- Step 2: For every λi, solve the homogeneous system (A − λi I)x = 0 to find the corresponding eigenvectors x.
- Step 3: The subspace of Rⁿ of all eigenvectors associated with λi (together with the zero vector) is called the eigenspace of A associated with λi, and is denoted by Eλi. The sketch below carries out these steps numerically.
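In practice these steps can be carried out numerically. A minimal sketch using NumPy's np.linalg.eig, which solves the eigenvalue problem numerically rather than via the symbolic characteristic polynomial; the 2×2 matrix is a hypothetical example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # hypothetical example matrix

# Eigenvalues and eigenvectors (the eigenvectors are the columns).
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # Each eigenvector v spans (part of) the eigenspace of lam.
    print(lam, v, np.allclose(A @ v, lam * v))  # True: Av = lambda * v
```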

2. Orthogonal Systems, Orthonormal Basis, Orthogonal Matrix, Diagonalization

2.1. Orthogonal Systems and Orthonormal Basis
2.1.1. Definition
- A basis u1, u2, …, um of Rᵐ is called orthogonal if each vector is nonzero and the inner product of any two distinct vectors equals 0.
- A basis u1, u2, …, um of Rᵐ is called orthonormal if it is orthogonal and the Euclidean norm of each vector equals 1.
- A matrix U ∈ Mₙ(R) is called an orthogonal matrix if U⁻¹ = Uᵀ, or equivalently
UᵀU = UUᵀ = I.
2.1.2. Some Properties
- det(Uᵀ) = det(U) and det(Uᵀ)·det(U) = det(I) = 1, so det(U) = ±1.
- An orthogonal matrix preserves lengths and inner products; geometrically it represents a rotation (det(U) = 1), possibly combined with a reflection (det(U) = −1).
2.1.3. Proposition
- A matrix A is an orthogonal matrix if and only if its set of column vectors (equivalently, its set of row vectors) is an orthonormal set.
- If A is square and the product of A and Aᵀ is the identity matrix I, then A is an orthogonal matrix (verified numerically in the sketch below).
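Both statements are easy to check numerically. In the sketch below, an orthogonal matrix is obtained from the Q factor of a QR decomposition; this is a standard way to produce one and is an assumption of the example, not something the notes prescribe:

```python
import numpy as np

# The Q factor of a QR decomposition has orthonormal columns.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

print(np.allclose(Q.T @ Q, np.eye(3)))     # True: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))  # True: Q^{-1} = Q^T
print(np.linalg.norm(Q, axis=0))           # each column has norm 1
print(np.linalg.det(Q))                    # +1 or -1
```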
2.2. Orthogonal Diagonalization
2.2.1. Definition
- A square real matrix A is said to be orthogonally diagonalizable if A = P·D·P⁻¹ = P·D·Pᵀ, where D is a diagonal matrix and P is an orthogonal matrix.
- If the matrix A can be orthogonally diagonalized, then A is a symmetric matrix; conversely, every real symmetric matrix can be orthogonally diagonalized (the spectral theorem).
2.2.2. The Gram-Schmidt Process
Let E = {e1, e2, …, em} be a linearly independent set in the vector space V. Then there exists an orthogonal set F = {f1, f2, …, fm} satisfying ⟨e1, e2, …, em⟩ = ⟨f1, f2, …, fm⟩, i.e. both sets span the same subspace. The vectors of F are constructed one at a time:
f1 = e1, and fk = ek − Σ_{i<k} (⟨ek, fi⟩ / ⟨fi, fi⟩) fi for k = 2, …, m.
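A minimal implementation of the process as stated above; the input vectors are hypothetical, and normalization is deferred to the diagonalization procedure in 2.2.3:

```python
import numpy as np

def gram_schmidt(E):
    """Turn the linearly independent rows of E into an orthogonal
    set F spanning the same subspace."""
    F = []
    for e in E:
        f = e.astype(float)
        for g in F:
            # Subtract the projection of e onto each earlier f_i.
            f = f - (e @ g) / (g @ g) * g
        F.append(f)
    return np.array(F)

E = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])      # hypothetical independent set
F = gram_schmidt(E)
print(F)                             # [[1, 1, 0], [0.5, -0.5, 1]]
print(np.isclose(F[0] @ F[1], 0.0))  # True: f1 and f2 are orthogonal
```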
2.2.3. Orthogonal Diagonalization of a Real Symmetric Matrix A
- Step 1: Find the eigenvalues of A.
- Step 2: Find an orthonormal basis for each eigenspace. To find an orthonormal basis for the eigenspace Eλk we follow these steps:
a) Select an arbitrary basis Ek of Eλk.
b) Use the Gram-Schmidt process (if necessary) to determine an orthogonal basis Fk.
c) Normalize each vector in Fk by dividing it by its magnitude to obtain an orthonormal basis.
- Step 3: Conclude.
Because A is real and symmetric, it can always be orthogonally diagonalized: A = P·D·Pᵀ, where the diagonal matrix D has the eigenvalues of A on its diagonal, and the columns of the orthogonal matrix P are the orthonormal eigenvectors obtained in Step 2. (Eigenvectors belonging to distinct eigenvalues of a symmetric matrix are automatically orthogonal, so Gram-Schmidt is only needed within each eigenspace.) A numerical sketch follows below.
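Here is a numerical sketch of the whole procedure with a hypothetical symmetric matrix. NumPy's np.linalg.eigh is specialized for symmetric matrices and already returns an orthonormal set of eigenvectors, so steps 2a-2c are handled internally:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # hypothetical real symmetric matrix

# eigh returns the eigenvalues and an orthonormal set of eigenvectors
# (the columns of P), so P is an orthogonal matrix.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(2)))  # True: P is orthogonal
print(np.allclose(A, P @ D @ P.T))      # True: A = P D P^T
```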
