Chapter 3: Determinants
Week 7
Section 3.2: Properties of Determinants
In this section we are going to see what algebraic properties
exist for determinants of matrices and how we can use them.
Theorem
Let A be a square matrix.
1. If a multiple of one row is added to another row to produce
matrix B, then det(A) = det(B).
2. If two rows are interchanged to obtain matrix B, then
det(A) = −det(B).
3. If one row of A is scaled by k to obtain B, then
det(B) = k det(A).
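As a quick illustration of properties 2 and 3 in the 2 × 2 case:
\[
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc, \qquad
\det\begin{pmatrix} c & d \\ a & b \end{pmatrix} = cb - da = -(ad - bc),
\]
and scaling the first row by k gives
\[
\det\begin{pmatrix} ka & kb \\ c & d \end{pmatrix} = kad - kbc = k(ad - bc).
\]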
Examples of Computing Determinant
Example
Compute the determinants of the following matrices:
\[
\begin{pmatrix} 1 & 5 & -4 \\ -1 & -4 & 5 \\ -2 & -8 & 7 \end{pmatrix}, \qquad
\begin{pmatrix} 2 & 6 & 0 \\ 1 & 3 & 2 \\ 3 & 9 & 2 \end{pmatrix}, \qquad
\begin{pmatrix} 1 & 3 & 0 & 2 \\ -2 & -5 & 7 & 4 \\ 3 & 5 & 2 & 1 \\ 1 & -1 & 2 & -3 \end{pmatrix}.
\]
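A sketch of one approach for the middle matrix: its third row is the sum of the first two rows, so subtracting row 1 and row 2 from row 3 (which does not change the determinant, by property 1 above) produces a row of zeros, and expanding along that zero row gives
\[
\det\begin{pmatrix} 2 & 6 & 0 \\ 1 & 3 & 2 \\ 3 & 9 & 2 \end{pmatrix}
= \det\begin{pmatrix} 2 & 6 & 0 \\ 1 & 3 & 2 \\ 0 & 0 & 0 \end{pmatrix} = 0.
\]
The other two determinants can be computed by cofactor expansion or by row reduction in the same spirit.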
Determinants and Invertibility
Theorem
A square matrix A is invertible if and only if det(A) ≠ 0.
Proof.
Row reduce A to an echelon form U using only row replacements and row
interchanges; by the theorem above, det(A) = ±det(U) = ±(product of the
diagonal entries of U). If A is invertible, then U has n pivots, so every
diagonal entry of U is nonzero and det(A) ≠ 0. If A is not invertible, then
U has fewer than n pivots, so some diagonal entry of U is zero and
det(A) = 0.
Other Properties of Determinant
Theorem
For a square matrix A, det(A^T) = det(A).
Theorem
If A and B are n × n matrices, then det(AB) = det(A)det(B).
Theorem
If A is an invertible matrix, then det(A^{-1}) = 1/det(A).
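The last statement follows from the product rule: since A A^{-1} = I,
\[
\det(A)\,\det(A^{-1}) = \det(AA^{-1}) = \det(I) = 1,
\]
so det(A^{-1}) = 1/det(A).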
Examples
Example
Determine whether the following set of vectors is linearly independent:
\[
\left\{ \begin{pmatrix} 4 \\ 6 \\ 2 \end{pmatrix},
\begin{pmatrix} -7 \\ 0 \\ 7 \end{pmatrix},
\begin{pmatrix} -3 \\ -5 \\ -2 \end{pmatrix} \right\}.
\]
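A sketch of one approach: the vectors are linearly independent if and only if the matrix having them as columns is invertible, i.e. has nonzero determinant. Expanding along the first row,
\[
\det\begin{pmatrix} 4 & -7 & -3 \\ 6 & 0 & -5 \\ 2 & 7 & -2 \end{pmatrix}
= 4\bigl(0 \cdot (-2) - (-5)(7)\bigr) + 7\bigl(6(-2) - (-5)(2)\bigr) - 3\bigl(6 \cdot 7 - 0 \cdot 2\bigr)
= 140 - 14 - 126 = 0,
\]
so the set is linearly dependent.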
Example
Prove that det(PAP^{-1}) = det(A) for any invertible n × n matrix P and any n × n matrix A.
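A sketch of the argument, using the product and inverse rules above:
\[
\det(PAP^{-1}) = \det(P)\,\det(A)\,\det(P^{-1}) = \det(P)\,\det(A)\,\frac{1}{\det(P)} = \det(A).
\]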
Section 3.3: Cramer’s Rule
Some of you may have seen Cramer’s rule before for solving
2 × 2 systems of linear equations back in high school. The rule
can be generalized to solve any n × n system.
Theorem
Let A be an invertible n × n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by
\[
x_i = \frac{\det A_i(b)}{\det A}, \qquad i = 1, 2, \dots, n,
\]
where A_i(b) denotes the matrix obtained from A by replacing column i with the vector b.
Example
Example
Use Cramer's rule to solve
\[
\begin{aligned}
3x_1 - 2x_2 &= 6 \\
-5x_1 + 4x_2 &= 8.
\end{aligned}
\]
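A sketch of the computation: here det A = (3)(4) − (−2)(−5) = 2, so
\[
x_1 = \frac{\det A_1(b)}{\det A} = \frac{1}{2}\det\begin{pmatrix} 6 & -2 \\ 8 & 4 \end{pmatrix} = \frac{40}{2} = 20, \qquad
x_2 = \frac{\det A_2(b)}{\det A} = \frac{1}{2}\det\begin{pmatrix} 3 & 6 \\ -5 & 8 \end{pmatrix} = \frac{54}{2} = 27.
\]
As a check, 3(20) − 2(27) = 6 and −5(20) + 4(27) = 8.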
An Application of Cramer’s Rule
Since the jth column of A^{-1} is the vector x satisfying Ax = e_j, Cramer's rule gives a formula for the entries of A^{-1}:
\[
(i, j)\text{-entry of } A^{-1} = x_i = \frac{\det A_i(e_j)}{\det A} = \frac{C_{ji}}{\det A},
\]
where C_{ji} denotes the (j, i) cofactor of A.
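Collecting these entries into a single matrix gives the adjoint (adjugate) formula used in the next example:
\[
A^{-1} = \frac{1}{\det A}\,\mathrm{adj}\,A,
\]
where adj A is the transpose of the matrix of cofactors of A.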
Example
Find the inverse of
\[
A = \begin{pmatrix} 3 & 5 & 4 \\ 1 & 0 & 1 \\ 2 & 1 & 1 \end{pmatrix}
\]
by using the adjoint method.
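A sketch of the computation: det A = 3(0 − 1) − 5(1 − 2) + 4(1 − 0) = 6, and computing all nine cofactors and transposing gives
\[
\mathrm{adj}\,A = \begin{pmatrix} -1 & -1 & 5 \\ 1 & -5 & 1 \\ 1 & 7 & -5 \end{pmatrix},
\qquad
A^{-1} = \frac{1}{6}\begin{pmatrix} -1 & -1 & 5 \\ 1 & -5 & 1 \\ 1 & 7 & -5 \end{pmatrix},
\]
which can be checked by multiplying back against A.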
Determinants as Area or Volume
Theorem
If A is a 2 × 2 matrix, the area of the parallelogram determined
by the columns of A is |det A|. If A is a 3 × 3 matrix, the volume
of the parallelepiped determined by the columns of A is |det A|.
Example
Find the volume of the parallelepiped determined by the vertices
(0, 0, 0), (1, 0, −3), (1, 2, 4), (5, 1, 0).
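A sketch: taking (0, 0, 0) as the base vertex, the edge vectors of the parallelepiped are the other three points, so the volume is
\[
\left| \det\begin{pmatrix} 1 & 1 & 5 \\ 0 & 2 & 1 \\ -3 & 4 & 0 \end{pmatrix} \right|
= \bigl| 1(2 \cdot 0 - 1 \cdot 4) - 1\bigl(0 \cdot 0 - 1 \cdot (-3)\bigr) + 5\bigl(0 \cdot 4 - 2 \cdot (-3)\bigr) \bigr|
= |-4 - 3 + 30| = 23.
\]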
Linear Transformations
Theorem
Let T : R^2 → R^2 be the linear transformation determined by a
2 × 2 matrix A. If S is a parallelogram in R^2, then
area of T(S) = |det A| · area of S.
Let T : R^3 → R^3 be the linear transformation determined by a
3 × 3 matrix A. If S is a parallelepiped in R^3, then
volume of T(S) = |det A| · volume of S.
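For instance (a quick illustration with made-up numbers): if
\[
A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix},
\]
so that det A = 6, and S is a parallelogram of area 5, then T(S) has area 6 · 5 = 30.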
Chapter 4: Vector Spaces
It turns out that our investigation, which started with expressing
solutions of systems of linear equations as ordered lists (vectors),
leads us to a more general notion capturing all the important
structural aspects of R^n, which we will call a vector space. In some
sense, R^n is really the only vector space in each finite dimension n,
but it is often to our advantage to consider more abstract vector spaces.
Section 4.1: Vector Spaces and Subspaces
Definition
A (real) vector space is a nonempty set V, whose elements are called
vectors, together with operations of addition and scalar multiplication
satisfying the following properties for all u, v, w ∈ V and c, d ∈ R:
a) Closed under addition
b) Commutativity of addition
c) Associativity of addition
d) Zero vector
e) Additive inverse
f) Closed under scaling
g) Scalar distribution
h) Vector distribution
i) Mixed associativity
j) Unit multiples
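Spelled out in symbols (one standard way to read the labels above), these require: u + v ∈ V; u + v = v + u; (u + v) + w = u + (v + w); there is a vector 0 ∈ V with u + 0 = u; each u has an additive inverse −u with u + (−u) = 0; cu ∈ V; c(u + v) = cu + cv; (c + d)u = cu + du; c(du) = (cd)u; and 1u = u.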
Examples
1. R^n
2. Directed line segments
3. Infinite sequences
4. Polynomials of degree at most n
5. m × n matrices
6. Power series
7. Smooth functions
8. The set of all vectors x such that Ax = 0
Subspaces
Usually, we have a framework where we know the larger space
is a vector space (for example R^n). If we have a subset of a
vector space, it is actually very easy to tell whether or not the
subset is a vector space in its own right because most of the
properties are automatically inherited. We need only check two
items.
Theorem
If U is a nonempty subset of a vector space V, then U is a vector space
(a subspace of V) if the following properties hold:
1. For all u, v ∈ U, u − v ∈ U.
2. For all r ∈ R and u ∈ U, ru ∈ U.
Span and Subspaces
Definition
Given a set S of vectors in a vector space, we say that V is the subspace
spanned by S if V = Span(S).
Example
Show that the spans of sets of size 0, 1, and 2 are all subspaces of
R^n, where n ≥ 2.
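A sketch of the geometry behind this example: the span of the empty set is {0} (by convention); Span{v} is either {0} (if v = 0) or the line through the origin in the direction of v; and Span{u, v} is {0}, a line through the origin, or a plane through the origin, depending on how many linearly independent vectors the set contains. Each of these sets is closed under differences and under scalar multiples, so each is a subspace of R^n.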
Section 4.2: Null Spaces, Column Spaces, and Linear
Transformations
The two most common ways subspaces of R^n arise are as the solution set
of Ax = 0 (i.e. the homogeneous solutions) and as Span(S) (i.e. the space
spanned by a given set of vectors). We have been working with these
subspaces since the beginning of the course without giving them a name.
Since we have already done the hard work of showing how to identify these
spaces, all that is left to do is put forward the technical terminology,
and then we can investigate some more of their properties.
Null Space
Definition
Given an m × n matrix A, we define Nul(A) as the set of all
x ∈ R^n such that Ax = 0.
Theorem
Nul(A) is a subspace of R^n.
Proof.
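A sketch of the argument: Nul(A) is nonempty since A0 = 0. If x, y ∈ Nul(A) and r ∈ R, then
\[
A(x - y) = Ax - Ay = 0 - 0 = 0 \qquad \text{and} \qquad A(rx) = r(Ax) = r0 = 0,
\]
so Nul(A) is closed under differences and scalar multiples, and is therefore a subspace by the criterion of Section 4.1.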
Example
Example
Find a set of vectors which spans Nul(A), where
\[
A = \begin{pmatrix} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \end{pmatrix}.
\]
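A sketch of the computation: row reducing A to reduced echelon form gives
\[
\begin{pmatrix} 1 & -2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},
\]
so x_1 = 2x_2 + x_4 - 3x_5 and x_3 = -2x_4 + 2x_5, with x_2, x_4, x_5 free. Writing the general solution of Ax = 0 in terms of the free variables,
\[
\mathrm{Nul}(A) = \mathrm{Span}\left\{
\begin{pmatrix} 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 0 \\ -2 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} -3 \\ 0 \\ 2 \\ 0 \\ 1 \end{pmatrix}
\right\}.
\]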
The Column Space of a Matrix
Definition
The column space of an m × n matrix A is the set of all linear
combinations of the columns of A. In other words,
Col A = Span{a_1, a_2, ..., a_n}, where the a_i are the columns of A.
Theorem
Col A is a subspace of R^m.
Proof.
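A sketch of the argument: Col A = Span{a_1, ..., a_n}, and the span of any set of vectors in R^m is a subspace of R^m: a difference of two linear combinations of the a_i is again a linear combination of the a_i, as is any scalar multiple of one.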
Example
Example
Find both Col A and Nul A for
\[
A = \begin{pmatrix} 2 & 4 & -2 & 1 \\ -2 & -5 & 7 & 3 \\ 3 & 7 & -8 & 6 \end{pmatrix}.
\]
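A sketch of the computation: row reducing A gives the reduced echelon form
\[
\begin{pmatrix} 1 & 0 & 9 & 0 \\ 0 & 1 & -5 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
\]
so the pivots lie in columns 1, 2, and 4. The corresponding columns of A itself span Col A, and solving Ax = 0 (with x_3 free) describes Nul A:
\[
\mathrm{Col}\,A = \mathrm{Span}\left\{
\begin{pmatrix} 2 \\ -2 \\ 3 \end{pmatrix},
\begin{pmatrix} 4 \\ -5 \\ 7 \end{pmatrix},
\begin{pmatrix} 1 \\ 3 \\ 6 \end{pmatrix}
\right\},
\qquad
\mathrm{Nul}\,A = \mathrm{Span}\left\{
\begin{pmatrix} -9 \\ 5 \\ 1 \\ 0 \end{pmatrix}
\right\}.
\]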
Connection to Linear Transformations
We already defined linear transformations on R^n. Linear
transformations on arbitrary vector spaces V are defined in
exactly the same way: they preserve addition and scalar
multiples.
Definition
For a linear transformation T : V → W, we define the kernel of T to be
ker T = {v ∈ V : T(v) = 0}. We also define the image of T,
Im(T) = {w ∈ W : T(v) = w for some v ∈ V}. How does this
relate to matrix transformations?
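For a matrix transformation T(x) = Ax from R^n to R^m, the kernel of T is exactly Nul(A), since T(x) = 0 means Ax = 0, and the image of T is exactly Col(A), since the vectors of the form Ax are precisely the linear combinations of the columns of A.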