
Université Paris I Panthéon-Sorbonne

Lecture Notes
Master Mathematical Models in Economics and Finance
(MMEF)

BASIC NOTIONS OF
LINEAR ALGEBRA
(short summary)

Michel GRABISCH

1 Vector spaces
A vector space over R is a set V closed under addition (associative and commutative, with
a neutral element ~0 (the zero vector), and additive inverses), and scalar multiplication, i.e.,
multiplication of a vector by a real number, satisfying the following properties for all a, b ∈ R
and x, y ∈ V :

a(x + y) = ax + ay, (a + b)x = ax + bx, a(bx) = (ab)x, 1x = x.

Throughout this document, we restrict to vector spaces that are subsets of Rn , for
some n ∈ N.
 
Vectors are represented as columns, e.g., x = (1, 4, 0, −2)T ∈ R4 .
A subspace of a vector space V is a subset of V which is a vector space.
A linear combination of vectors x1 , . . . , xk ∈ V is any expression α1 x1 + · · · + αk xk with
α1 , . . . , αk ∈ R. The span of x1 , . . . , xk is the set of all their linear combinations:

span{x1 , . . . , xk } = {α1 x1 + · · · + αk xk : α1 , . . . , αk ∈ R}.

x1 , . . . , xk ∈ V are linearly dependent if there exist α1 , . . . , αk ∈ R, not all zero, such that

    ∑_{i=1}^k αi xi = ~0 (the zero vector).

x1 , . . . , xk ∈ V are linearly independent if they are not linearly dependent, i.e., for all α1 , . . . , αk ∈
R,

    ∑_{i=1}^k αi xi = ~0 ⇒ α1 = · · · = αk = 0.

{x1 , . . . , xk } is a basis of V if span{x1 , . . . , xk } = V and x1 , . . . , xk are linearly independent.


Consequently, any v ∈ V has a unique expression as a linear combination of x1 , . . . , xk . The
dimension of V is the size (cardinality) of a basis of V .
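As a numerical illustration, linear independence can be tested by stacking the vectors as the columns of a matrix and computing its rank: the family is independent exactly when the rank equals the number of vectors. A minimal sketch in Python with NumPy (one possible tool, not prescribed by these notes):

    import numpy as np

    # Columns are the vectors x1, x2, x3 in R^4.
    X = np.column_stack([
        np.array([1.0, 4.0, 0.0, -2.0]),   # x1 (the example vector above)
        np.array([0.0, 1.0, 1.0, 0.0]),    # x2
        np.array([1.0, 5.0, 1.0, -2.0]),   # x3 = x1 + x2, hence dependent
    ])

    rank = np.linalg.matrix_rank(X)
    print(rank)                  # 2: span{x1, x2, x3} has dimension 2
    print(rank == X.shape[1])    # False: the family is linearly dependent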

2 Matrices
An m × n matrix is an array of numbers in R with m rows and n columns. The usual notation
is A = [aij ], where aij is the entry of A at row i and column j.
The transpose of an m × n matrix A = [aij ] is the n × m matrix AT = [aji ].
The trace of an m × n matrix A = [aij ] is defined by

    trA = ∑_{i=1}^k aii , with k = min(m, n).

Any m × n matrix A defines a linear mapping from Rn to Rm by:

    x ∈ Rn ↦ Ax = ( ∑_{j=1}^n a1j xj , . . . , ∑_{j=1}^n amj xj )T ∈ Rm .

The range of A is the range (image) of the corresponding linear mapping, i.e.,

rangeA = {y ∈ Rm : y = Ax for some x ∈ Rn }.

The null space or kernel of A is defined by

KerA = {x ∈ Rn : Ax = ~0}

A fundamental result (called rank-nullity theorem) says that

dim(rangeA) + dim(KerA) = n.
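The rank-nullity theorem can be checked numerically: dim(rangeA) is the rank of A, and a basis of KerA can be computed, e.g., with SciPy (the choice of NumPy/SciPy is ours; any tool that computes null spaces works):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])       # 2 x 3 matrix with proportional rows

    rank = np.linalg.matrix_rank(A)       # dim(range A) = 1
    kernel = null_space(A)                # columns form a basis of Ker A
    nullity = kernel.shape[1]             # dim(Ker A) = 2

    print(rank + nullity == A.shape[1])   # True: rank-nullity with n = 3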

Matrix operations:

(i) For A, B ∈ Rm×n with A = [aij ] and B = [bij ], A + B = [aij + bij ].


(ii) For A ∈ Rm×n and B ∈ Rn×p , AB = [ ∑_{k=1}^n aik bkj ] ∈ Rm×p .

The identity matrix of order n, denoted by In , is the n × n matrix with 1's on the diagonal
and 0's elsewhere:

    In = diag(1, . . . , 1).

Remark 1. (i) If x, y ∈ Rn , xT y ∈ R and xyT ∈ Rn×n , as a vector is considered as an n × 1
matrix.

(ii) Let A ∈ Rm×n , x ∈ Rn , y ∈ Rm . Then Ax ∈ Rm is a linear combination of the columns
of A, while yT A ∈ Rn is a linear combination of the rows of A.
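Remark 1 can be made concrete with a short sketch (Python/NumPy, as before): the inner product xT y is a scalar, the outer product xyT is a matrix, and Ax combines the columns of A:

    import numpy as np

    x = np.array([1.0, 2.0])
    y = np.array([3.0, 4.0])

    print(x @ y)            # x^T y: a scalar (11.0)
    print(np.outer(x, y))   # x y^T: a 2 x 2 matrix

    A = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 3.0]])            # 3 x 2 matrix

    # Ax = x1 * (column 1) + x2 * (column 2), a combination of the columns of A.
    print(np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1]))   # True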

3 Determinants
For square matrices, determinants are defined inductively by:

- For a 1 × 1 matrix [a11 ]: det[a11 ] = a11 .

- Otherwise,

    detA = ∑_{k=1}^n (−1)^{i+k} aik detAik = ∑_{k=1}^n (−1)^{k+j} akj detAkj ,

for arbitrary i, j, and Aik is the matrix A without row i and column k.

For example, with n = 2:

    det [ a11 a12 ]
        [ a21 a22 ]  =  a11 a22 − a12 a21 .

Important results:

(i) detAT = detA

(ii) detAB = detA detB

(iii) detIn = 1

(iv) detA = 0 if and only if a subset of the row vectors (equiv., column vectors) of A is
linearly dependent.

(v) If a row of A is ~0T , then detA = 0.
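Properties (i)-(v) are easy to verify numerically on small matrices; a hedged sketch (Python/NumPy, example matrices ours):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                      # (i)
    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))) # (ii)
    print(np.isclose(np.linalg.det(np.eye(3)), 1.0))                             # (iii)

    # (iv)-(v): a repeated row makes the rows dependent, so the determinant vanishes.
    C = np.array([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0],
                  [0.0, 1.0, 0.0]])
    print(np.isclose(np.linalg.det(C), 0.0))                                     # True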

To each matrix A ∈ Rm×n corresponds a unique reduced row echelon form (RREF) (also
called Hermite normal form) such that:

(i) Any zero row occurs at the bottom of the matrix

(ii) The leading entry (i.e., the first nonzero entry) of any nonzero row is 1

(iii) All other entries in the column of a leading entry are zero

(iv) The leading entries occur in a stair-step pattern, from left to right: leading entry aik ⇒
leading entry ai+1,ℓ (if it exists) with ℓ > k.

Example of a RREF:

    A = [ 0 1 −1 0 0  2 ]
        [ 0 0  0 1 0 −3 ]
        [ 0 0  0 0 1  4 ]
        [ 0 0  0 0 0  0 ]
The RREF is obtained from a matrix by elementary row operations:

(i) interchanging two rows

(ii) multiplying a row by a nonzero scalar

(iii) replacing a row by the sum of itself and a scalar multiple of another row.

Important result: for a matrix A ∈ Rn×n , detA ≠ 0 if and only if its RREF is In .
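The RREF can be computed exactly with SymPy's Matrix.rref (the choice of SymPy is ours; the notes do not prescribe a tool). Applied to the example above, which is already in RREF, it returns the matrix unchanged together with the pivot columns:

    from sympy import Matrix

    A = Matrix([[0, 1, -1, 0, 0,  2],
                [0, 0,  0, 1, 0, -3],
                [0, 0,  0, 0, 1,  4],
                [0, 0,  0, 0, 0,  0]])

    R, pivot_cols = A.rref()   # RREF and indices of the leading-entry columns
    print(R == A)              # True: A is already in RREF
    print(pivot_cols)          # (1, 3, 4): three leading entries, so rank A = 3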

4 Rank and nonsingularity; inverse


The rank of a matrix A ∈ Rm×n is the dimension of its range, i.e., the cardinality of a largest
linearly independent set of columns (equiv., of rows) of A.
Important results:

- rankA = rankAT

- rankA is the rank of its RREF, which is the number of leading entries.

Theorem 1 (characterization of the rank). Let A be an m × n matrix. The following are
equivalent:

(i) rankA = k

(ii) k, and no more than k, rows of A are linearly independent

(iii) k, and no more than k, columns of A are linearly independent

(iv) Some k × k submatrix of A has a nonzero determinant, and any (k + 1) × (k + 1) submatrix
has a zero determinant

(v) k = n − dim(KerA) (rank-nullity theorem).

A matrix A ∈ Rm×n is nonsingular if Ax = ~0 ⇔ x = ~0. Otherwise, A is singular. Observe
that if m < n then A is singular.
A matrix A ∈ Rn×n is invertible if there exists a matrix A−1 ∈ Rn×n such that A−1 A =
AA−1 = In . Note that detA−1 = 1/detA.
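A quick numerical check of the inverse and of detA−1 = 1/detA (Python/NumPy sketch, example matrix ours):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])            # det A = 1, so A is invertible

    A_inv = np.linalg.inv(A)
    print(np.allclose(A_inv @ A, np.eye(2)))                          # A^{-1} A = I_2
    print(np.isclose(np.linalg.det(A_inv), 1.0 / np.linalg.det(A)))   # det A^{-1} = 1 / det A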
Theorem 2 (characterization of nonsingularity). Let A ∈ Rn×n . The following are equiv-
alent:
(i) A is nonsingular

(ii) A−1 exists

(iii) rankA = n

(iv) rows are linearly independent

(v) columns are linearly independent

(vi) detA ≠ 0

(vii) dim(rangeA) = n

(viii) dim(KerA) = 0

5 Linear systems
A linear system of equalities has the form

    a11 x1 + · · · + a1n xn = b1
    ...
    am1 x1 + · · · + amn xn = bm

with aij , bi ∈ R for all i, j. Using matrix notation, this can be rewritten as

    Ax = b

with A = [aij ], bT = [b1 · · · bm ], and xT = [x1 · · · xn ].
The Gauss-Jordan elimination method, which leads to the set of solutions of the system,
consists in putting the augmented matrix [A b] in RREF. Indeed, A1 x = b1 and A2 x = b2 have
the same set of solutions ⇔ [A1 b1 ] and [A2 b2 ] have the same RREF.
A linear system is consistent if there exists at least one solution. Otherwise, the linear
system is inconsistent.
Theorem 3 (characterization of consistency). Let A ∈ Rm×n , b ∈ Rm . The linear system
Ax = b is consistent if and only if rank[A b] = rankA.
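Theorem 3 translates directly into a rank test; a minimal sketch (Python/NumPy, with a hypothetical helper is_consistent of our own making):

    import numpy as np

    def is_consistent(A, b):
        """Theorem 3: Ax = b is consistent iff rank [A b] = rank A."""
        augmented = np.column_stack([A, b])
        return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])            # rank 1
    print(is_consistent(A, np.array([1.0, 2.0])))   # True:  b lies in range A
    print(is_consistent(A, np.array([1.0, 0.0])))   # False: b does not lie in range A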

(set of solutions) Suppose Ax = b is consistent, with solution x′ . Observe that x′′ is a solution
iff Ax′′ = b = Ax′ iff A(x′′ − x′ ) = ~0 iff x′′ − x′ ∈ KerA. Consequently, the set of solutions has
the form

    {x′ } + KerA

where “+” is understood in the sense of subspaces. Therefore, the dimension of the (affine)
subspace of solutions is dim(KerA).

Theorem 4 (characterization of consistent square linear systems). Let A ∈ Rn×n . The
following are equivalent:

(i) Ax = b is consistent for each b ∈ Rn

(ii) Ax = ~0 has a unique solution, which is x = ~0

(iii) Ax = b has a unique solution for each b ∈ Rn

(iv) A is nonsingular

(v) A−1 exists

(vi) rankA = n.

If one of the above assertions holds, then the unique solution is x = A−1 b.
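When A is nonsingular, the unique solution x = A−1 b can be computed without forming A−1 explicitly, e.g., with numpy.linalg.solve (a standard routine; its use here is our illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])            # det A = 5 != 0, hence nonsingular
    b = np.array([3.0, 5.0])

    x = np.linalg.solve(A, b)             # the unique solution of Ax = b
    print(x)                              # [0.8 1.4]
    print(np.allclose(A @ x, b))          # True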

(back to Gauss-Jordan elimination) Suppose the augmented matrix [A b] of the consistent linear
system Ax = b has been put in RREF. According to Theorem 3, the number of leading variables
(entries) is the rank of A; the remaining variables are the free variables, whose number gives
the dimension of KerA.
Ax = b is inconsistent if and only if the RREF of [A b] contains a row of the form
[ 0 · · · 0 a ] with a ≠ 0.

Example 1. Consider the linear system

    2x +  y − z + 3t = 1
    4x + 2y − z + 4t = 5
    2x +  y      +  t = 4

The augmented matrix is

    [A b] = [ 2 1 −1 3 1 ]
            [ 4 2 −1 4 5 ]
            [ 2 1  0 1 4 ]
Let us put it in echelon form.¹ Subtracting 2 times row 1 from row 2, and subtracting row 1
from row 3 yield

    [ 2 1 −1  3 1 ]
    [ 0 0  1 −2 3 ]
    [ 0 0  1 −2 3 ]

Now, subtracting row 2 from row 3 yields

    [ 2 1 −1  3 1 ]
    [ 0 0  1 −2 3 ]
    [ 0 0  0  0 0 ]

Finally, adding row 2 to row 1 yields

    [ 2 1 0  1 4 ]
    [ 0 0 1 −2 3 ]
    [ 0 0 0  0 0 ]
¹ When putting a matrix in RREF to solve linear systems, it is not necessary to make the leading entries equal to 1.

Then the system has a solution. There are two free variables y and t, therefore the dimension
of the subspace of solutions is 2. Let us express the set of solutions. The system is
    2x + y + t = 4
         z − 2t = 3

Putting the free variables on the right-hand side yields:

    2x = 4 − y − t
     z = 3 + 2t

Hence, finally, the set of solutions is given by

    {(x, y, z, t) ∈ R4 : x = 2 − (1/2)y − (1/2)t, z = 3 + 2t, y, t ∈ R}.
In particular, (2, 0, 3, 0) is a solution.
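The parametrized solution set of Example 1 can be verified numerically (Python/NumPy sketch; the helper solution is our own):

    import numpy as np

    A = np.array([[2.0, 1.0, -1.0, 3.0],
                  [4.0, 2.0, -1.0, 4.0],
                  [2.0, 1.0,  0.0, 1.0]])
    b = np.array([1.0, 5.0, 4.0])

    def solution(y, t):
        # General solution of Example 1, parametrized by the free variables y, t.
        return np.array([2.0 - y / 2.0 - t / 2.0, y, 3.0 + 2.0 * t, t])

    print(np.allclose(A @ solution(0.0, 0.0), b))   # the particular solution (2, 0, 3, 0)
    print(np.allclose(A @ solution(1.0, -2.0), b))  # any choice of y and t works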

6 Introduction to eigenvalues and eigenvectors


Basic definitions. Given a square matrix A ∈ Cn×n , if there exists a scalar λ ∈ C and a
nonzero vector x ∈ Cn such that
Ax = λx,
then λ is an eigenvalue of A and x is an eigenvector of A. We say that (λ, x) is an eigenpair
and the eigenspace of A associated to λ is the vector subspace {x ∈ Cn : Ax = λx}.
Eigenvectors of distinct eigenvalues are linearly independent.
y ∈ Cn is a left eigenvector of A associated to λ if y∗ A = λy∗ , where (·)∗ denotes the
conjugate transpose.
The spectrum of A, denoted by σ(A), is the set of all eigenvalues. Remark that 0 ∈ σ(A) iff
A is singular. The spectral radius of A is defined by

ρ(A) = max{|λ| : λ ∈ σ(A)}.

Finding eigenvalues. Eigenvalues can be found as the roots of the characteristic poly-
nomial of A:

    det(λIn − A) = 0.
Indeed, Ax = λx is equivalent to the system (A − λIn )x = ~0, which admits a nonzero solution
x if and only if A − λIn is singular, i.e., has zero determinant.
The characteristic polynomial is a polynomial in λ of degree n, which therefore admits n
(complex) roots, not necessarily distinct.
Important properties:

• The trace of A is the sum of the eigenvalues: trA = ∑_{i=1}^n λi .

• The determinant of A is the product of the eigenvalues: detA = ∏_{i=1}^n λi .
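Both properties, and the defining relation Ax = λx itself, can be checked with numpy.linalg.eig (a standard routine; the example matrix is ours):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])            # eigenvalues 5 and 2

    eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are (right) eigenvectors

    print(np.isclose(eigvals.sum(), np.trace(A)))        # tr A = sum of eigenvalues
    print(np.isclose(eigvals.prod(), np.linalg.det(A)))  # det A = product of eigenvalues

    # Each column x of eigvecs satisfies Ax = lambda x.
    for lam, x in zip(eigvals, eigvecs.T):
        print(np.allclose(A @ x, lam * x))               # True, True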

Multiplicities. Let λ1 , . . . , λq be the distinct eigenvalues of A. The algebraic multiplicity αi
of λi is the multiplicity of λi as a root of the characteristic polynomial. It holds that

    ∑_{i=1}^q αi = n.

The geometric multiplicity γi of λi is the dimension of its eigenspace (the dimension of the kernel
of A − λi In ). We have

    1 ≤ γi ≤ αi , i = 1, . . . , q.
The eigenvalue λi is simple if αi = 1. It is semi-simple if αi = γi .
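The geometric multiplicity is computable as γ = n − rank(A − λIn ), by the rank-nullity theorem. A sketch on a standard non-diagonalizable example (our choice of matrix):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])            # Jordan block J_2(2): lambda = 2, alpha = 2

    lam = 2.0
    n = A.shape[0]
    geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n))   # dim Ker(A - lambda I)
    print(geometric)   # 1 < alpha = 2: lambda = 2 is not semi-simple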

Diagonalization. A is said to be diagonalizable if there exists a nonsingular matrix S ∈ Cn×n
such that S−1 AS is a diagonal matrix.
Let A ∈ Cn×n , and λ1 , . . . , λq be its (distinct) eigenvalues. Then A is diagonalizable iff
∑_{i=1}^q γi = n.
This amounts to saying that A is diagonalizable if and only if there exist n linearly independent
eigenvectors x1 , . . . , xn , in which case S = [x1 · · · xn ] and

    S−1 AS = diag(λ1 , . . . , λn ),

where λi is the eigenvalue associated with xi (eigenvalues repeated according to multiplicity).
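A numerical sketch of diagonalization (Python/NumPy): take S whose columns are eigenvectors and check that S−1 AS is diagonal:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])            # eigenvalues 5 and 2, both simple

    eigvals, S = np.linalg.eig(A)         # S = [x1 x2], columns are eigenvectors
    D = np.linalg.inv(S) @ A @ S          # S^{-1} A S
    print(np.allclose(D, np.diag(eigvals)))   # True: A is diagonalizable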

Jordan decomposition. If a matrix is not diagonalizable, it can always be put into its Jordan
form. A Jordan block of size m is an m × m matrix of the form

    Jm (λ) = [ λ 1         ]
             [   λ 1       ]
             [      . .    ]
             [        λ 1  ]
             [          λ  ]

with λ on the diagonal, 1 on the superdiagonal, and 0 elsewhere, and J1 (λ) = [λ]. A Jordan
matrix is block-diagonal and each block is a Jordan block.
Theorem 5 (Jordan decomposition). Let T ∈ Cn×n . There exists a nonsingular matrix
S ∈ Cn×n such that

    T = S diag( Jm1 (λ1 ), Jm2 (λ2 ), . . . , Jmq (λq ) ) S−1

with ∑_{i=1}^q mi = n and λ1 , . . . , λq the eigenvalues of T ; the geometric multiplicity of λi is
equal to the number of blocks Jmi (λi ), while the algebraic multiplicity is the sum of the sizes
of the blocks Jmi (λi ).
If all eigenvalues are semi-simple, then the columns of S are the right eigenvectors, while
the rows of S −1 are the left eigenvectors.
Let T ∈ Cm×m with Jordan decomposition T = S J S−1 , where J is the Jordan matrix of T . Then

    T^k = S J^k S−1 .

A square matrix A is semi-convergent if lim_{k→∞} A^k exists, and it is convergent if in addition
this limit is the matrix 0.
We have the following properties:

• A Jordan block is convergent iff |λ| < 1;

• A Jordan block of size 1 is semi-convergent iff |λ| < 1 or λ = 1.

From this we deduce the convergence of T^k :

Theorem 6.

• T is convergent iff ρ(T ) < 1;

• T is semi-convergent iff either ρ(T ) < 1, or 1 is a semi-simple eigenvalue and all other
eigenvalues have modulus less than 1.
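Theorem 6 can be illustrated numerically: compute ρ(T ) from the eigenvalues and compare with a high matrix power (Python/NumPy sketch; the example matrix is ours):

    import numpy as np

    T = np.array([[0.5, 0.4],
                  [0.1, 0.3]])

    rho = max(abs(np.linalg.eigvals(T)))       # spectral radius rho(T)
    print(rho < 1)                             # True: Theorem 6 predicts convergence
    print(np.allclose(np.linalg.matrix_power(T, 200), 0.0))   # T^k -> 0 indeed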
