Linear Algebra
Lecture Notes
Asst. Prof. Sümeyye BAKIM
2022-2023 Spring
MATRICES
Definition:
Example:
Matrix Operations: Matrix Addition
Definition:
Example:
Matrix Operations: Scalar Multiplication
Definition:
Example:
Matrix Operations: Subtraction
Definition:
Example:
Linear Combination:
Example:
Matrix Operations: Transpose
Definition:
Example:
Example 1.
Solution:
For Example 1 (try yourself):
Example 2.
Solution:
Example 3.
Example 4.
Matrix Multiplication
Definition: If 𝐴 = [a_ij] is an m×p matrix and 𝐵 = [b_ij] is a p×n matrix, then the product of 𝐴 and 𝐵, denoted 𝐴𝐵, is the m×n matrix 𝐶 = [c_ij] defined by

c_ij = a_i1 b_1j + a_i2 b_2j + ⋯ + a_ip b_pj = Σ_{k=1}^{p} a_ik b_kj,  1 ≤ i ≤ m, 1 ≤ j ≤ n
Observe that the product of 𝐴 and 𝐵 is defined only when the
number of rows of 𝐵 is exactly the same as the number of
columns of 𝐴 :
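As an illustration, the summation formula above can be coded directly (a minimal pure-Python sketch; the matrices are made-up examples):

```python
def matmul(A, B):
    """Compute C = AB via c_ij = sum over k of a_ik * b_kj."""
    m, p, n = len(A), len(B), len(B[0])
    # the product is defined only when A has p columns and B has p rows
    assert all(len(row) == p for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# C == [[19, 22], [43, 50]]
```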
Example 5.
Solution:
Remark: If 𝒖 and 𝒗 are n-vectors (n×1 matrices), then it is easy to show by matrix multiplication that

𝒖 · 𝒗 = 𝒖ᵀ𝒗
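This identity is easy to check numerically (a sketch assuming NumPy; the vectors are made-up):

```python
import numpy as np

# made-up 3x1 column vectors
u = np.array([[1], [2], [3]])
v = np.array([[4], [5], [6]])
dot = float(np.dot(u.ravel(), v.ravel()))  # the dot product u . v
uT_v = float((u.T @ v)[0, 0])              # the 1x1 matrix u^T v
assert dot == uT_v == 32.0
```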
Example 6.
LINEAR SYSTEMS
The entries in the product Ax
are merely the left sides of
the equation. Hence the
linear system can be written
in matrix form as:
𝐴𝒙 = 𝒃
The matrix 𝑨 is called the
coefficient matrix.
The matrix obtained by adjoining the column 𝒃 to 𝐴 is
called the augmented matrix, written as [𝐴 ⋮ 𝒃].
If
b₁ = b₂ = ⋯ = bₘ = 0
the linear system is called a
homogeneous system, written as:
𝐴𝒙 = 𝟎
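In code, the coefficient matrix, the augmented matrix, and the matrix form 𝐴𝒙 = 𝒃 look like this (a sketch assuming NumPy; the small system is made-up):

```python
import numpy as np

# a made-up system: x + 2y = 5, 3x + 4y = 6
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # coefficient matrix
b = np.array([[5.0], [6.0]])
aug = np.hstack([A, b])                  # augmented matrix [A : b]
x = np.linalg.solve(A, b)                # solve A x = b
assert np.allclose(A @ x, b)
```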
Example 7.
Example 8.
Remark: 𝐴𝑥 = 𝑏 is consistent if and only if b can be expressed as a
linear combination of the columns of the matrix 𝐴.
Example 9.
Answer: 𝑥 = ± 2, 𝑦 = ±3
Example 10.
Answer: (a) −5 (b) 𝐵𝐴ᵀ
Example 11. If 𝐴 = [a_ij] is an n×n matrix, then the trace of 𝐴, Tr(𝐴), is
defined as the sum of all elements on the main diagonal of 𝐴:
Tr(𝐴) = Σ_{i=1}^{n} a_ii. Compute the trace of each of the following matrices:

A = [1 0; 2 3],  B = [2 2 3; 2 4 4; 3 −2 −5],  C = [1 0 0; 0 1 0; 0 0 1]
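A short script confirms the three traces (a pure-Python sketch; the matrices are written out as Python lists):

```python
def trace(M):
    """Tr(M): sum of the main-diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 0], [2, 3]]
B = [[2, 2, 3], [2, 4, 4], [3, -2, -5]]
C = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Tr(A) = 4, Tr(B) = 1, Tr(C) = 3
```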
Example 12.
Algebraic Properties of Matrix Operations
Properties of Matrix Addition:
Properties of Matrix Multiplication:
Remark: AB need not always equal BA. This is the first significant
difference between multiplication of matrices and multiplication of real
numbers.
Properties of Scalar Multiplication:
Example 13.
Properties of Transpose:
Remark: If a and b are real numbers, then ab = 0 can hold only if a or b
is zero. However, this is not true for matrices.
Remark: If 𝑎, 𝑏 and 𝑐 are real numbers for which 𝑎𝑏 = 𝑎𝑐 and a ≠ 0, it
follows that 𝑏 = 𝑐. That is, we can cancel out the nonzero factor 𝑎.
However, this cancellation law does not hold for matrices.
Example 14.
Example 15.
Special Types of Matrices and Partitioned Matrices
[Examples of diagonal, scalar, and identity matrices]
Upper & Lower Triangular Matrices
Example 16.
Symmetric & Skew-Symmetric Matrices
Partitioned Matrices
Example 17.
Partitioned matrices can be used to great advantage when matrices exceed the
memory capacity of a computer
Nonsingular Matrices
Remark: Previously we stated that if 𝐴𝐵 = 𝐼ₙ, then 𝐵𝐴 = 𝐼ₙ. Thus,
to verify that 𝐵 is an inverse of 𝐴, we need only verify that 𝐴𝐵 = 𝐼ₙ.
Theorem: The inverse of a matrix, if it exists, is unique.
Example 18.
Example 19.
Linear Systems and Inverses
Example 20.
Example 21.
Example 22.
Example 23.
Example 24.
Consider the linear system 𝐴𝑥 = 𝑏,
Example 25.
SOLVING LINEAR SYSTEMS
Echelon Form vs. Reduced Echelon Form
The following matrices are in echelon form. The leading entries ∎ may
have any nonzero value; the starred entries ∗ may have any value
(including zero)
The following matrices are in reduced echelon form. The leading entries
are 1’s and there are 0’s below and above each leading 1.
Remark: Each matrix is row equivalent to one and only one reduced
echelon matrix.
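That unique reduced echelon form can be computed mechanically. Below is a minimal pure-Python sketch of the reduction (with partial pivoting, for illustration only; production code should rely on a library routine such as SymPy's `Matrix.rref`):

```python
def rref(M, tol=1e-12):
    """Return the reduced row echelon form of M (a list of row lists)."""
    M = [[float(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # pick the largest-magnitude entry in column c at or below row r
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]     # scale the pivot row so the pivot is 1
        for i in range(rows):             # create zeros above and below the pivot
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c]
                M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

R = rref([[1, 2, 3], [2, 4, 7], [3, 6, 10]])
# R is (up to rounding) [[1, 2, 0], [0, 0, 1], [0, 0, 0]]
```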
Example 1.
Pivot Positions
The squares ∎ identify the pivot positions
Elementary row (column) operations
Example 2. Reduce the matrix A below to echelon form and locate
the pivot columns of A.
Steps: interchange rows 1 and 4; create zeros below the pivot;
choose the next pivot: 2; create zeros below the pivot;
interchange rows 3 and 4.
Example 3. Apply elementary row operations to transform the
given matrix first into echelon form and then into reduced form:
The combination of steps 1-4 is called the forward phase of the row reduction algorithm. Step 5, which produces the unique reduced echelon form, is called the backward phase.
Example 4. Use elementary row operations to reduce the
given matrix to echelon form
Example 5. Find the reduced echelon form of the given
matrices
Solutions of Linear Systems
Example 6. Find the general solution of the linear system
whose augmented matrix:
Existence and Uniqueness of Solutions
Example 7.
Example 8.
Example 9.
Example 10. Example 11.
Gaussian Elimination & Gauss-Jordan Reduction
The method where [𝐶 ⋮ 𝒅] is in row echelon form
is called Gaussian elimination; the method
where [𝐶 ⋮ 𝒅] is in reduced row echelon form is
called Gauss-Jordan reduction.
We now solve a linear system both by Gaussian elimination and by Gauss-Jordan reduction:
Example 12.
Example 13. To balance given equation determine the values: 𝑥, 𝑦, 𝑧 and 𝑤
We obtain four linear equations:
The rank of a matrix A, written rank A, is equal to the
number of pivots in an echelon form of A.
Example 13.
Linear Independence
Example 14.
Solution:
Linear Independence & Consistency
[Diagram: linearly dependent vs. linearly independent columns; both cases labeled CONSISTENT]
Rank & Consistency
Consider the system 𝐴𝒙 = 𝒃, with coefficient matrix 𝐴 and augmented matrix
[𝐴 ⋮ 𝒃]. The sizes of 𝒃, 𝐴, and [𝐴 ⋮ 𝒃] are m×1, m×n, and m×(n+1), respectively; in
addition, the number of unknowns is n. Below, we summarize the possibilities for
solving the system.
i. 𝐴𝒙 = 𝒃 is inconsistent (no solution exists) if and only if rank 𝐴 < rank [𝐴 ⋮ 𝒃]
ii. 𝐴𝒙 = 𝒃 has a unique solution if and only if rank 𝐴 = rank [𝐴 ⋮ 𝒃] = n
iii. 𝐴𝒙 = 𝒃 has infinitely many solutions if and only if rank 𝐴 = rank [𝐴 ⋮ 𝒃] < n
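The three rank tests can be applied mechanically (a sketch assuming NumPy; `classify` is a hypothetical helper name and the matrix is made-up):

```python
import numpy as np

def classify(A, b):
    """Apply tests (i)-(iii): compare rank A, rank [A : b], and n."""
    aug = np.hstack([A, b.reshape(-1, 1)])
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    n = A.shape[1]
    if r_A < r_aug:
        return "inconsistent"
    return "unique solution" if r_A == n else "infinitely many solutions"

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # made-up coefficient matrix, rank 1
assert classify(A, np.array([1.0, 2.0])) == "infinitely many solutions"
assert classify(A, np.array([1.0, 3.0])) == "inconsistent"
```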
Gauss-Jordan Algorithm for Finding 𝐴⁻¹
An n×n matrix 𝐴 is invertible if and only if rank 𝐴 = n
Example 15.
𝑟𝑎𝑛𝑘𝐴 = 𝑛 = 3
Example 16.
Example 17.
LU FACTORIZATION AND
DETERMINANTS
ELEMENTARY MATRICES
An elementary matrix is one that is obtained by performing a single elementary
row operation on an identity matrix.
An n×n elementary matrix of type I, type II, or type III is a matrix obtained
from the identity matrix 𝐼ₙ by performing a single elementary row or
elementary column operation of type I, type II, or type III, respectively.
Example 1.
Example 2.
Solution:
Example 3.
Example 4.
rank 𝐴 = 1 < n = 2 → infinitely many solutions → 𝐴 is noninvertible
Since row operations are reversible, elementary matrices are invertible: if 𝐸 is produced by a
row operation on 𝐼, then there is another row operation of the same type that changes 𝐸 back
into 𝐼. Hence there is an elementary matrix 𝐹 such that 𝐹𝐸 = 𝐼. Since 𝐸 and 𝐹 correspond to
reverse operations, 𝐸𝐹 = 𝐼, too.
Example 5.
Example 6.
Example 7.
LU Factorization
Assume that 𝐴 is an m×n matrix that can be row reduced to echelon form, without row
interchanges. Then 𝐴 can be written in the form 𝐴 = 𝐿𝑈, where 𝐿 is an m×m lower
triangular matrix with 1's on the diagonal and 𝑈 is an m×n echelon form of 𝐴. Such a
factorization is called an 𝑳𝑼 factorization of 𝐴. The matrix 𝐿 is invertible and is called a
unit lower triangular matrix.
When 𝐴 = 𝐿𝑈, the equation 𝑨𝒙 = 𝒃 can be written as 𝑳(𝑼𝒙) = 𝒃. Writing 𝒚 for
𝑼𝒙, we can find 𝒙 by solving the pair of equations:
first solve 𝑳𝒚 = 𝒃 for 𝒚, then solve 𝑼𝒙 = 𝒚 for 𝒙. Each equation is easy to solve
because 𝐿 and 𝑈 are triangular.
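The two triangular solves can be sketched as follows (assuming NumPy; the factors 𝐿, 𝑈 and the right-hand side are made-up values):

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward-solve L y = b, then back-solve U x = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):              # forward substitution; L is unit lower triangular
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):    # back substitution
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# made-up factors: A = L U = [[2, 1], [6, 7]]
L = np.array([[1.0, 0.0], [3.0, 1.0]])
U = np.array([[2.0, 1.0], [0.0, 4.0]])
b = np.array([3.0, 13.0])
x = lu_solve(L, U, b)
assert np.allclose(L @ U @ x, b)
```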
Example 8. Let
Solution:
16
LU Factorization Algorithm
Example 9.
Example 10.
Solution:
Determinants
To every square matrix 𝐴 there corresponds a value called the determinant of 𝐴,
denoted det 𝐴 or |𝐴|.
Calculation of Determinants:
𝟏𝒙𝟏 →
𝟐𝒙𝟐 →
𝟑𝒙𝟑 →
Sarrus Method:
OR
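Both the 2×2 formula and Sarrus' rule amount to one line of code each (pure-Python sketches; the test matrices are made-up):

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

def det3_sarrus(M):
    """Sarrus' rule: the three 'down' diagonals minus the three 'up' diagonals."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * e * i + b * f * g + c * d * h - c * e * g - a * f * h - b * d * i

assert det2([[2, 3], [3, -6]]) == -21
assert det3_sarrus([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```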
Example 11.
Cofactor Expansions
Definition of a Minor
Definition of a Cofactor
Cofactor Expansion
Example 12.
Alternative Method to Calculate Determinants
Let us choose the third column; we then multiply each element by its corresponding minor. The respective signs of the elements of the third column tell us the operations to carry out between these values to obtain the determinant.
Let us choose the second column; we then multiply each element by its corresponding minor. The respective signs of the elements of the second column tell us the operations to carry out between these values to obtain the determinant:
det 𝐴 = −(−6) + 0 − 0 = 6
Example 13.
Example 14.
Example 15.
𝑑𝑒𝑡𝐴 = 16
Example 16. Evaluate
𝑑𝑒𝑡𝐴 = 10
PROPERTIES OF
DETERMINANTS
The secret of determinants lies in how they change when row
operations are performed.
Theorem
Example 1.
Example 2.
Suppose a square matrix 𝐴 has been reduced to echelon form 𝑈 by row replacements and row
interchanges. If there are r interchanges, then the above theorem shows that

det 𝐴 = (−1)ʳ det 𝑈

Since 𝑈 is in echelon form, it is triangular, and so det 𝑈 is the product of the diagonal entries
u₁₁, …, uₙₙ. If 𝐴 is invertible, the entries u_ii are all pivots (because 𝐴 ~ 𝐼ₙ and the u_ii have not been scaled
to 1's). Otherwise, at least uₙₙ is zero, and the product u₁₁ ⋯ uₙₙ is zero.
It is interesting to note that although the echelon form 𝑈 described above is not unique (because it is
not completely row reduced), and the pivots are not unique, the product of the pivots is unique,
except for a possible minus sign.
Invertible matrix & determinant
det 𝐴 = 0 ⟺ the columns (rows) of 𝐴 are linearly dependent ⟺ 𝐴 is singular
Linear dependence is obvious when two columns (rows) are the same or a column (row) is zero.
Example 3.
Example 3.
Column operations have the same effects on determinants as row
operations, since the rows of 𝐴 are the columns of 𝑨ᵀ.
Example 4. Verify 𝑑𝑒𝑡𝐴𝐵 = 𝑑𝑒𝑡𝐴 𝑑𝑒𝑡𝐵 for
Warning: in general, det(𝐴 + 𝐵) ≠ det 𝐴 + det 𝐵
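Both the product rule and the warning are easy to check numerically (assuming NumPy; 𝐴 and 𝐵 here are made-up matrices, not those of Example 4):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
# the product rule det(AB) = det(A) det(B) holds ...
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# ... but the analogous statement for sums fails for these matrices
assert not np.isclose(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))
```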
Example 5.
Example 6.
The matrix 𝒗𝟏 𝒗𝟐 𝒗𝟑 is not invertible. The columns are linearly dependent.
CRAMER’S RULE VOLUME AND
TRANSFORMATIONS
Cramer’s Rule
Example 1.
Example 2.
A Formula for 𝐴⁻¹ (Adjoint)

Let 𝐴 = [a b; c d]. Then

𝐴⁻¹ = (1/det 𝐴) [d −b; −c a]

The cofactor matrix of 𝐴 is 𝐶 = [d −c; −b a], so 𝐴⁻¹ = (1/det 𝐴) 𝐶ᵀ.
Example 3.
Span
Geometric Description of 𝑺𝒑𝒂𝒏{𝒗} and 𝑺𝒑𝒂𝒏{𝒖, 𝒗}
Let 𝒗 be a nonzero vector in ℝ³. Then 𝑺𝒑𝒂𝒏{𝒗} is
the set of all scalar multiples of 𝒗, which is the
set of points on the line in ℝ³ through 𝒗 and 𝟎.
If 𝒖 and 𝒗 are nonzero vectors in ℝ³, with 𝒗 not a
multiple of 𝒖, then 𝑺𝒑𝒂𝒏{𝒖, 𝒗} is the plane in ℝ³
that contains 𝒖, 𝒗, and 𝟎. In particular,
𝑺𝒑𝒂𝒏{𝒖, 𝒗} contains the line in ℝ³ through 𝒖 and
𝟎 and the line through 𝒗 and 𝟎.
Example 4.
Example 5.
Example 6. Find a spanning set for the
solution space
Example 7.
Remember:
The question “Is 𝒃 in 𝑺𝒑𝒂𝒏{𝒂𝟏, …, 𝒂𝒏}?” is equivalent to the question “Is 𝑨𝒙 = 𝒃
consistent?”
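That equivalence can be tested numerically (assuming NumPy; the vectors below are made-up so that 𝒃 is, by construction, in the span):

```python
import numpy as np

# made-up vectors: b = 2*a1 + 3*a2, so b lies in Span{a1, a2}
a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([a1, a2])            # columns are a1 and a2
b = np.array([2.0, 3.0, 5.0])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# A x reproduces b exactly, so A x = b is consistent and b is in the span
assert np.allclose(A @ x, b)
```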
Example 8.
Theorem
Example 9.
Determinants as Area or Volume
Example 10. Calculate the area of the parallelogram determined by the
points (−2, −2), (0, 3), (4, −1), and (6, 4).
First translate the parallelogram to one having the origin as a vertex: subtract
the vertex (−2, −2) from each of the four vertices. The new parallelogram has
the same area, and its vertices are (0, 0), (2, 5), (6, 1), and (8, 6).
Translating a parallelogram does not change its area.
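For this example, the area equals the absolute value of the determinant of the matrix whose columns are the two edge vectors from the origin (a NumPy check; the area comes out to 28):

```python
import numpy as np

verts = np.array([[-2.0, -2.0], [0.0, 3.0], [4.0, -1.0], [6.0, 4.0]])
moved = verts - verts[0]                 # subtract (-2, -2) from every vertex
# edge vectors (2, 5) and (6, 1) span the translated parallelogram
area = abs(np.linalg.det(np.column_stack([moved[1], moved[2]])))
assert np.isclose(area, 28.0)
```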
Linear Transformations
“Matrix 𝑨 acts on a vector 𝒙 by multiplication to produce a new vector called 𝑨𝒙”
The equations:
say that multiplication by 𝑨 transforms 𝒙 into 𝒃 and transforms 𝒖 into the zero vector.
Domain, codomain and range of 𝑇: ℝⁿ → ℝᵐ
Matrix Transformations (Mappings)
Example 11.
Linear Transformation (Mapping)
Every matrix transformation is a linear transformation
Superposition Principle
Example 12.
Example 13.
Note that 𝑇(𝒖 + 𝒗) = 𝑇(𝒖) + 𝑇(𝒗). 𝑇 rotates 𝒖, 𝒗, and 𝒖 + 𝒗 counterclockwise about the origin through 90°
(a rotation transformation).
EIGENVALUES, EIGENVECTORS
and DIAGONALIZATION
Example 1.
Example 2.
Example 3. Show that 7 is an eigenvalue of 𝐴 = [1 6; 5 2] and find the corresponding
eigenvectors.
The scalar 7 is an eigenvalue of 𝐴 if and only if the equation
𝐴𝒙 = 7𝒙
has a nontrivial solution. This equation is equivalent to 𝐴𝒙 − 7𝒙 = 𝟎, or
(𝐴 − 7𝐼)𝒙 = 𝟎
To solve this homogeneous equation, form the matrix
𝐴 − 7𝐼 = [1 6; 5 2] − [7 0; 0 7] = [−6 6; 5 −5]
Augmented form of the equation:
[−6 6 | 0; 5 −5 | 0] ~ [1 −1 | 0; 0 0 | 0]
The general solution has the form x₂ [1; 1]; each vector of this form with x₂ ≠ 0 is an
eigenvector corresponding to λ = 7
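The verification 𝐴𝒗 = 7𝒗 from this example takes two lines (assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 6.0], [5.0, 2.0]])
v = np.array([1.0, 1.0])          # candidate eigenvector for lambda = 7
assert np.allclose(A @ v, 7 * v)  # A v = 7 v, so 7 is an eigenvalue of A
```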
λ is an eigenvalue of an n×n matrix 𝐴 if and only if the equation
(𝐴 − λ𝐼)𝒙 = 𝟎
has a nontrivial solution. The solution set of (𝐴 − λ𝐼)𝒙 = 𝟎 is a subspace of ℝⁿ and is called the
eigenspace of 𝐴 corresponding to λ.
Example 4.
2x₁ − x₂ + 6x₃ = 0,
x₃ = r
x₂ = s
x₁ = −3r + s/2
[x₁; x₂; x₃] = r [−3; 0; 1] + s [1/2; 1; 0]
Let and
The eigenvalues of 𝐴 are 3, 0, 2. The eigenvalues of 𝐵 are 4 and 1.
What does it mean for a matrix 𝐴 to have an eigenvalue 0? This
happens if and only if the equation
𝑨𝒙 = 𝟎𝒙
has a nontrivial solution. This equation is equivalent to 𝑨𝒙 = 𝟎,
which has a nontrivial solution if and only if 𝐴 is noninvertible.
Thus 𝟎 is an eigenvalue of 𝑨 if and only if A is not invertible.
det 𝐴 = 0 → 𝐴 is not invertible → 0 is an eigenvalue of 𝐴
Example 5.
Example 6. Find the eigenvalues of 𝐴 = [2 3; 3 −6].
We must find all scalars λ such that the matrix equation
(𝐴 − λ𝐼)𝒙 = 𝟎
has a nontrivial solution. This problem is equivalent to finding all λ such that the matrix
𝐴 − λ𝐼 is not invertible:
𝐴 − λ𝐼 = [2 3; 3 −6] − [λ 0; 0 λ] = [2−λ 3; 3 −6−λ]
This matrix fails to be invertible precisely when its determinant is zero. So the
eigenvalues of 𝐴 are the solutions of the equation
det(𝐴 − λ𝐼) = det [2−λ 3; 3 −6−λ] = 0
det(𝐴 − λ𝐼) = (2 − λ)(−6 − λ) − 9
= −12 + 6λ − 2λ + λ² − 9
= λ² + 4λ − 21
= (λ − 3)(λ + 7)
If det(𝐴 − λ𝐼) = 0, then λ = 3 or λ = −7. So the eigenvalues of 𝐴 are 3 and −7.
Eigenvectors: for λ = 3, 𝒙 = [3; 1]; for λ = −7, 𝒙 = [1; −3]
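For comparison, a numerical eigenvalue routine recovers the same two roots (assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 3.0], [3.0, -6.0]])
lams = np.sort(np.linalg.eigvals(A))
assert np.allclose(lams, [-7.0, 3.0])  # matches the roots of (lambda-3)(lambda+7)
```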
The Characteristic Equation
Example 7.
𝜆=5
𝜆=3
𝜆=1
Example 8. The characteristic polynomial of a 6×6 matrix is λ⁶ − 4λ⁵ − 12λ⁴.
Find the eigenvalues and their multiplicities.
Example 9.
Diagonalization
In many cases, the eigenvalue-eigenvector information contained within a matrix
𝐴 can be displayed in a useful factorization of the form 𝑨 = 𝑷𝑫𝑷⁻¹, where 𝐷 is a
diagonal matrix. This factorization enables us to compute 𝑨ᵏ quickly for large values of k,
a fundamental idea in several applications of linear algebra.
The following example illustrates that powers of a diagonal matrix are easy to compute.
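The factorization and the fast power computation can be sketched with NumPy (the matrix is a made-up diagonalizable example with distinct eigenvalues):

```python
import numpy as np

# a made-up diagonalizable matrix with distinct eigenvalues 3 and 5
A = np.array([[7.0, 2.0], [-4.0, 1.0]])
lams, P = np.linalg.eig(A)            # columns of P are eigenvectors
D = np.diag(lams)
P_inv = np.linalg.inv(P)
assert np.allclose(A, P @ D @ P_inv)  # A = P D P^-1
# A^5 via the factorization: only the diagonal entries are powered
A5 = P @ np.diag(lams ** 5) @ P_inv
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```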
Example 10.
Example 11.
Example 12.
Step 1. Find the eigenvalues of 𝑨.
Step 2. Find 3 eigenvectors.
Step 3. Construct 𝑷 from the vectors.
Step 4. Construct 𝑫 with the eigenvalues (order is important).
1
𝑣 ≠ −1
0
Step 5. Check that 𝑨 = 𝑷𝑫𝑷⁻¹.
Example 13.
𝐴 is not diagonalizable
Example 14.
Since the matrix is triangular, its eigenvalues are obviously 5, 0, and −2.
Since 𝐴 is a 3×3 matrix with three distinct eigenvalues, 𝐴 is diagonalizable.
Remember:
Example 15.
Example 16.
Example 17.