A Gentle Introduction To Abstract Algebra
by
B.A. Sethuraman
California State University Northridge
Copyright 2012 B.A. Sethuraman.
Permission is granted to copy, distribute and/or modify this document under
the terms of the GNU Free Documentation License, Version 1.3 or any later
version published by the Free Software Foundation; with no Invariant Sec-
tions, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled GNU Free Documentation License.
Source files for this book are available at
<http://www.csun.edu/~asethura/giaa/>
Contents
Preface v
To the Student: How to Read a Mathematics Book ix
1 Divisibility in the Integers 1
2 Rings and Fields 23
2.1 Rings: Definition and Examples . . . . . . . . . . . . . . . . . 23
2.2 Subrings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3 Integral Domains and Fields . . . . . . . . . . . . . . . . . . . 45
2.4 Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.5 Quotient Rings . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.6 Ring Homomorphisms and Isomorphisms . . . . . . . . . . . 63
2.7 Further Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 77
3 Vector Spaces 95
3.1 Vector Spaces: Definition and Examples . . . . . . . . . . . . 95
3.2 Linear Independence, Bases, Dimension . . . . . . . . . . . . 103
3.3 Subspaces and Quotient Spaces . . . . . . . . . . . . . . . . . 125
3.4 Vector Space Homomorphisms: Linear Transformations . . . 133
3.5 Further Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 148
4 Groups 157
4.1 Groups: Definition and Examples . . . . . . . . . . . . . . . . 157
4.2 Subgroups, Cosets, Lagrange's Theorem . . . . . . . . . . . . 180
4.3 Normal Subgroups, Quotient Groups . . . . . . . . . . . . . . 192
4.4 Group Homomorphisms and Isomorphisms . . . . . . . . . . . 197
4.5 Further Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 204
A Sets, Functions, and Relations 215
B Partially Ordered Sets, Zorn's Lemma 219
C GNU Free Documentation License 227
1. APPLICABILITY AND DEFINITIONS . . . . . . . . . . . . . 228
2. VERBATIM COPYING . . . . . . . . . . . . . . . . . . . . . . 230
3. COPYING IN QUANTITY . . . . . . . . . . . . . . . . . . . . 230
4. MODIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . 231
5. COMBINING DOCUMENTS . . . . . . . . . . . . . . . . . . . 233
6. COLLECTIONS OF DOCUMENTS . . . . . . . . . . . . . . . 234
7. AGGREGATION WITH INDEPENDENT WORKS . . . . . . 234
8. TRANSLATION . . . . . . . . . . . . . . . . . . . . . . . . . . 235
9. TERMINATION . . . . . . . . . . . . . . . . . . . . . . . . . . 235
10. FUTURE REVISIONS OF THIS LICENSE . . . . . . . . . . 236
11. RELICENSING . . . . . . . . . . . . . . . . . . . . . . . . . . 236
ADDENDUM: How to use this License for your documents . . . . 237
Preface
This book is a gentle introduction to abstract algebra. It is ideal as a text
for a one-semester course designed to provide a first exposure of the subject
to students in mathematics, science, or engineering. Such a course would
teach students the basic objects of algebra, providing plentiful examples
and enough theory to allow interested students to transition easily to more
advanced abstract algebra. At the same time, this course would allow future
users of the subject, including students interested in various other subfields
of mathematics and students of science and engineering, to gain enough
familiarity with the objects of algebra to be able to study them further
within the manifold contexts in which they are needed.
Thus, this book deals with groups, rings and fields, and vector spaces.
The approach to these objects is elementary, with a focus on examples and
on computation with these examples. The book starts with rings, reflect-
ing my experience that students find rings easier to grasp as an abstraction
since they are already familiar with the integers, the rationals, the reals, the
complexes, 2×2 matrices with real entries, and polynomials with real coef-
ficients. Vector spaces are treated next, followed by groups. It is expected
that students have had some exposure to proof-based mathematics, such as
can be obtained in basic proofs courses common in many American uni-
versities. Such students are likely to be familiar with the properties of the
integers already, but for completeness, a preliminary chapter on divisibility
in the integers has been included. Material on sets, functions, and relations,
that belongs more commonly to a proofs course, has also been provided as
an appendix.
The style of the book is conversational (a style that mirrors my own
approach to teaching), with a stress on exposition. I have attempted to
show that there are some common themes to the study of the three objects:
rings, vector spaces, and groups. For each, I introduce the object using a
large number of examples. For each, I introduce their various subobjects
(subrings, ideals, subspaces, subgroups, normal subgroups), again with nu-
merous examples. I introduce quotient objects, and then for each object I
introduce the appropriate notion of homomorphism and isomorphism. I end
with the fundamental homomorphism theorem for each object. I find that
when students see the same concept three different times in mildly different
guises, such as the notion of a structure preserving map, the notion of a
kernel, or the notion of an appropriate quotient object, they become quite
comfortable with these concepts by the end of the semester. For example, I
find that they have no trouble with quotient groups (a traditionally difficult
idea to convey if abstract algebra is introduced first through groups) since
they have already computed with quotient rings in more intuitive settings
such as the integers mod n or the polynomials over a field mod a linear or
quadratic polynomial.
The entire material in the book can be covered in a traditional sixteen
week semester, judiciously speeding up here and there. Besides copious ex-
amples and exercises (most of a computational kind, based on the examples,
and some that extend the theory developed in the text), each chapter comes
with end notes: remarks about various aspects of the theory, occasional hints
to some exercise, and several glimpses into material beyond the course. The
book shares some material with an earlier text I wrote called Rings, Fields
and Vector Spaces, but the focus and end goal of the two books are quite
different.
I am grateful to the various faculty members at California State Univer-
sity Northridge who have taught the introductory abstract algebra course,
Math 360, for several years now from this book. I am also grateful to the
students in the course; together, both the faculty and students have pro-
vided valuable feedback. The National Science Foundation has supported
me professionally through two research grants during much of the time when
this book was being developed, and I am grateful to them.
I owe a special debt of gratitude to the most extraordinary student I have
ever worked with, one whom I have never met. He will remain unnamed
here. He is currently in prison, but rather than succumb to circumstances,
he chose the positive route, and enrolled in mathematics courses at Califor-
nia State University Northridge as an extension student. Faculty would send
him course material by U.S. mail (the only form of interaction he is allowed
under incarceration), and he would complete his assignments under super-
vision and mail them back. He offered to read through this book and give
suggestions, an offer I readily accepted. I was amazed when I received his
edit suggestions! I have yet to see such meticulousness in any student, such
attention to the right word, such alertness for the clumsy phrase. But more
importantly, he proved to be a brilliant student, and made several powerful
suggestions that went beyond the writing and into the mathematics. There
are many explanations here and many additional remarks that owe their
existence to him. (All errors that remain, of course, are to be blamed on
me.) I was privileged that he learned abstract algebra from this book, and
to him I would like to say: Thank you, my friend! I hope to meet you some
day.
B.A. Sethuraman
California State University Northridge
To the Student: How to
Read a Mathematics Book
How should you read a mathematics book? The answer, which applies to
every book on mathematics, and in particular to this one, can be given in
one word: actively. You may have heard this before, but it can never be
overstressed: you can only learn mathematics by doing mathematics. This
means much more than attempting all the problems assigned to you (al-
though attempting every problem assigned to you is a must). What it
means is that you should take time out to think through every sentence
and confirm every assertion made. You should accept nothing on trust; in-
stead, not only should you check every statement, you should also attempt
to go beyond what is stated, searching for patterns, looking for connections
with other material that you may have studied, and probing for possible
generalizations.
Let us consider an example. On page 29 in Chapter 2, you will find the
following sentence:
Yet, even in this extremely familiar number system, multi-
plication is not commutative; for instance,
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
(The number system referred to is the set of 2×2 matrices whose entries
are real numbers.) When you read a sentence such as this, the first thing
that you should do is verify the computation yourselves. Mathematical in-
sight comes from mathematical experience, and you cannot expect to gain
mathematical experience if you merely accept somebody else's word that
the product on the left side of the equation does not equal the product on
the right side.
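If you enjoy experimenting with a computer, you can also let it do some of the bookkeeping. Here is one way to carry out this check in Python (a small optional sketch, using nothing but lists of rows):

# Multiply two 2x2 matrices, each given as a list of rows.
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E11 = [[1, 0], [0, 0]]   # the matrix with 1 in the (1, 1) slot
E12 = [[0, 1], [0, 0]]   # the matrix with 1 in the (1, 2) slot

print(mat_mult(E11, E12))   # [[0, 1], [0, 0]]
print(mat_mult(E12, E11))   # [[0, 0], [0, 0]] -- the zero matrix!

But do the multiplication by hand at least once: the point is the experience, not the answer.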
The very process of multiplying out these matrices will make the set
of 2×2 matrices a more familiar system of objects, but as you do the
calculations, more things can happen if you keep your eyes and ears open.
Some or all of the following may occur:
1. You may notice that not only are the two products not the same,
but that the product on the right side gives you the zero matrix. This
should make you realize that although it may seem impossible that two
nonzero numbers can multiply out to zero, this is only because you
are confining your thinking to the real or complex numbers. Already,
the set of 2×2 matrices (with which you have at least some familiarity)
contains nonzero elements whose product is zero.
2. Intrigued by this, you may want to discover other pairs of nonzero
matrices that multiply out to zero. You will do this by taking arbitrary
pairs of matrices and determining their product. It is quite probable
that you will not find an appropriate pair. At this point you may be
tempted to give up. However, you should not. You should try to be
creative, and study how the entries in the various pairs of matrices
you have selected affect the product. It may be possible for you to
change one or two entries in such a way that the product comes out
to be zero. For instance, suppose you consider the product
$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 4 & 0 \\ 2 & 0 \end{pmatrix} = \begin{pmatrix} 6 & 0 \\ 6 & 0 \end{pmatrix}$$
You should observe that no matter what the entries of the first matrix
are, the product will always have zeros in the (1, 2) and the (2, 2) slots.
This gives you some freedom to try to adjust the entries of the first
matrix so that the (1, 1) and the (2, 1) slots also come out to be zero.
After some experimentation, you should be able to do this.
3. You may notice a pattern in the two matrices that appear in our
inequality on page ix. Both matrices have only one nonzero entry, and
that entry is a 1. Of course, the 1 occurs in different slots in the two
matrices. You may wonder what sorts of products occur if you take
similar pairs of matrices, but with the nonzero 1 occurring at other
locations. To settle your curiosity, you will multiply out pairs of such
matrices, such as
$$\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \text{or} \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$
You will try to discern a pattern behind how such matrices multiply.
To help you describe this pattern, you will let e_{i,j} stand for the matrix
with 1 in the (i, j)-th slot and zeros everywhere else, and you will try
to discover a formula for the product of e_{i,j} and e_{k,l}, where i, j, k, and
l can each be any element of the set {1, 2}.
4. You may wonder whether the fact that we considered only 2×2 ma-
trices is significant when considering noncommutative multiplication
or when considering the phenomenon of two nonzero elements that
multiply out to zero. You will ask yourselves whether the same phe-
nomena occur in the set of 3×3 matrices or 4×4 matrices. You will
next ask yourselves whether they occur in the set of n×n matrices,
where n is arbitrary. But you will caution yourselves about letting n
be too arbitrary. Clearly n needs to be a positive integer, since n×n
matrices is meaningless otherwise, but you will wonder whether n can
be allowed to equal 1 if you want such phenomena to occur.
5. You may combine 3 and 4 above, and try to define the matrices e_{i,j}
analogously in the general context of n×n matrices. You will study
the product of such matrices in this general context and try to discover
a formula for their product.
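If, after working steps 3 and 5 by hand, you would like a computational companion, the following Python sketch builds the matrices e_{i,j} for any n and tabulates all their products, so that you can hunt for the general pattern:

def e(i, j, n):
    # The n-by-n matrix with 1 in the (i, j) slot (1-indexed), zeros elsewhere.
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(n)]
            for r in range(n)]

def mult(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

n = 2   # change n to explore the general case of step 5
for i in range(1, n + 1):
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            for l in range(1, n + 1):
                print(f"e({i},{j}) * e({k},{l}) =", mult(e(i, j, n), e(k, l, n)))

Staring at the output should suggest a one-line formula for the product of e_{i,j} and e_{k,l}; proving that formula is then a pleasant exercise.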
Notice that a single sentence can lead to an enormous amount of mathe-
matical activity! Every step requires you to be alert and actively involved
in what you are doing. You observe patterns for yourselves, you ask your-
selves questions, and you try to answer these questions on your own. In
the process, you discover most of the mathematics yourselves. This is re-
ally the only way to learn mathematics (and in particular, it is the way
every professional mathematician has learned the subject). Mathematical
concepts are developed precisely because mathematicians observe patterns
in various mathematical objects (such as the 2×2 matrices), and to have a
good understanding of these concepts you must try to notice these patterns
for yourselves.
May you spend many many hours happily playing in the rich and beau-
tiful world of mathematics!
Exercises
1. Carry out the program in steps (1) through (5) above.
Chapter 1
Divisibility in the Integers
We will begin our study with a very concrete set of objects, the integers, that
is, the set {0, 1, −1, 2, −2, . . .}. This set is traditionally denoted Z and is very
familiar to us; in fact, we were introduced to this set so early in our lives that
we think of ourselves as having grown up with the integers. Moreover, we
view ourselves as having completely absorbed the process of integer division;
we unhesitatingly say that 3 divides 99 and equally unhesitatingly say that
5 does not divide 101.
As it turns out, this very familiar set of objects has an immense amount of
structure to it. It turns out, for instance, that there are certain distinguished
integers (the primes) that serve as building blocks for all other integers.
These primes are rather beguiling objects; their existence has been known
for over two thousand years, yet there are still several unanswered questions
about them. They serve as building blocks in the following sense: every
positive integer greater than 1 can be expressed uniquely as a product of
primes. (Negative integers less than −1 also factor into a product of primes,
except that they have a minus sign in front of the product.)
The fact that nearly every integer breaks up uniquely into building blocks
is an amazing one; this is a property that holds in very few number systems,
and our goal in this chapter is to establish this fact. (In the exercises to
Chapter 2 we will see an example of a number system whose elements do not
factor uniquely into building blocks. Chapter 2 will also contain a discussion
of what a number system is; see Remark 2.8.)
We will begin by examining the notion of divisibility and defining divisors
and multiples. We will study the division algorithm and how it follows from
the Well-Ordering Principle. We will explore greatest common divisors and
the notion of relative primeness. We will then introduce primes and prove
our factorization theorem. Finally, we will look at what is widely considered
as the ultimate illustration of the elegance of pure mathematics: Euclid's
proof that there are infinitely many primes.
Let us start with something that seems very innocuous, but is actually
rather profound. Write N for the set of nonnegative integers, that is, N =
{0, 1, 2, 3, . . .}. (N stands for natural numbers, as the nonnegative integers
are sometimes referred to.) Let S be any nonempty subset of N. For example,
S could be the set {0, 5, 10, 15, . . .}, or the set {1, 4, 9, 16, . . .}, or else the
set {100, 1000}. The following is rather obvious: there is an element in S
that is smaller than every other element in S, that is, S has a smallest or
least element. This fact, namely that every nonempty subset of N has a least
element, turns out to be a crucial reason why the integers possess all the
other beautiful properties (such as a notion of divisibility, and the existence
of prime factorizations) that make them so interesting.
Contrast the integers with another very familiar number system, the
rationals, that is, the set {a/b | a and b are integers, with b ≠ 0}. (This set
is traditionally denoted by Q.) Can you think of a nonempty subset of the
positive rationals that fails to have a least element?
We will take this property of the integers as a fundamental axiom, that
is, we will merely accept it as given and not try to prove it from more
fundamental principles. Also, we will give it a name:
Well-Ordering Principle: Every nonempty subset of the nonnegative
integers has a least element.
Now let us look at divisibility. Why do we say that 2 divides 6? It is
because there is another integer, namely 3, such that the product 2 times 3
exactly gives us 6. On the other hand, why do we say that 2 does not divide
7? This is because no matter how hard we search, we will not be able to
find an integer b such that 2 times b equals 7. This idea will be the basis of
our definition:
Definition 1.1. A (nonzero) integer d is said to divide an integer a (denoted
d|a) if there exists an integer b such that a = db. If d divides a, then d is
referred to as a divisor of a or a factor of a, and a is referred to as a multiple
of d.
Observe that this is a slightly more general definition than most of us
are used to: according to this definition, −2 divides 6 as well, since there
exists an integer, namely −3, such that −2 times −3 equals 6. Similarly, 2
divides −6, since 2 times −3 equals −6. More generally, if d divides a, then
all of the following are also true: d | −a, −d | a, −d | −a. (Take a minute to
prove this formally!) It is quite reasonable to include negative integers in
our concept of divisibility, but for convenience, we will often focus on the
case where the divisor is positive.
The following easy result will be very useful:
Lemma 1.2. If d is a nonzero integer such that d|a and d|b for two integers
a and b, then for any integers x and y, d|(xa + yb). (In particular, d|(a + b)
and d|(a − b).)
Proof. Since d|a, a = dm for some integer m. Similarly, b = dn for some
integer n. Hence xa + yb = xdm + ydn = d(xm + yn). Since we have
succeeded in writing xa + yb as d times the integer xm + yn, we find that
d|(xa + yb). As for the statement in the parentheses, taking x = 1 and y = 1,
we find that d|(a + b), and taking x = 1 and y = −1, we find that d|(a − b). □
Question 1.3. If a nonzero integer d divides both a and a + b,
must d divide b as well?
The following lemma holds the key to the division process. Its statement
is often referred to as the division algorithm. The Well-Ordering Principle
plays a central role in its proof.
Lemma 1.4. (Division Algorithm) Given integers a and b with b > 0, there
exist unique integers q and r, with 0 ≤ r < b, such that a = bq + r.
Remark 1.5. First, observe the range that r lies in. It is constrained to lie
between 0 and b − 1 (with both 0 and b − 1 included as possible values for
r). Next, observe that the lemma does not just state that integers q and r
exist with 0 ≤ r < b and a = bq + r, it goes further: it states that these
integers q and r are unique. This means that if somehow one were to have
a = bq_1 + r_1 and a = bq_2 + r_2 for integers q_1, r_1, q_2, and r_2 with 0 ≤ r_1 < b
and 0 ≤ r_2 < b, then q_1 must equal q_2 and r_1 must equal r_2. The integer q is
referred to as the quotient and the integer r is referred to as the remainder.
Proof of Lemma 1.4. Let S be the set {a − bn | n ∈ Z}. Thus, S contains
the following integers: a (= a − b · 0), a − b, a + b, a − 2b, a + 2b, a − 3b,
a + 3b, etc. Let S′ be the set of those elements of S that are nonnegative.
S′ is not empty: if a is nonnegative, then a itself belongs to S′, and if
a is negative, then a − ba is nonnegative (check! remember that b itself is
positive, by hypothesis), so a − ba ∈ S′. Since S′
is a nonempty subset of N, S′ has a least element by the Well-Ordering
Principle; call it r. Being an element of S, r = a − bq for some integer q,
that is, a = bq + r. Since all elements of S′ are nonnegative,
r must be nonnegative, that is 0 ≤ r. Now suppose r ≥ b. We will arrive
at a contradiction: Write r = b + x, where x ≥ 0 (why is x ≥ 0?). Writing
b + x for r in a = bq + r, we find a = bq + b + x, or a = b(q + 1) + x, or
x = a − b(q + 1). This form of x shows that x belongs to the set S (why?).
Since we have already seen that x ≥ 0, we find further that x ∈ S′. But
more is true: since x = r − b and b > 0, x must be less than r (why?). Thus,
x is an element of S′ that is smaller than r, contradicting the fact that r
is the least element of S′. Hence 0 ≤ r < b, and we have proved existence.
As for uniqueness, suppose a = bq + r and a = bq′ + r′, for integers q, r, q′,
and r′, with 0 ≤ r < b and 0 ≤ r′ < b.
Then b(q − q′) = r′ − r, so b divides r′ − r. But since both r and r′ lie
between 0 and b − 1, their difference r′ − r lies strictly between −b and b,
and the only multiple of b in that range is 0. Thus, r′ = r. It follows
that b(q − q′) = 0, and since b ≠ 0, q = q′.
□
Observe that to test whether a given (positive) integer d divides a given
integer a, it is enough to write a as dq + r (0 ≤ r < d) as in Lemma 1.4
and examine whether the remainder r is zero or not. For d|a if and only if
there exists an integer x such that a = dx. View this as a = dx + 0. By the
uniqueness part of Lemma 1.4, we find that a = dx + 0 if and only if q = x
and r = 0.
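If you compute with Python, the built-in divmod produces exactly the quotient and remainder of Lemma 1.4 when the divisor is positive (even for negative a), so this test is one line; a quick sketch:

a, d = -7, 3
q, r = divmod(a, d)            # a = d*q + r with 0 <= r < d, since d > 0
print(q, r)                    # -3 2, because -7 = 3*(-3) + 2
print(r == 0)                  # False: 3 does not divide -7
print(divmod(99, 3)[1] == 0)   # True: 3 divides 99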
Now, given two nonzero integers a and b, it is natural to wonder whether
they have any divisors in common. Notice that 1 is automatically a common
divisor of a and b, no matter what a and b are. Recall that |a| denotes the
absolute value of a, and notice that every divisor d of a is less than or equal
to |a|. (Why? Notice, too, that |a| is a divisor of a.) Also, for every divisor
d of a, we must have d ≥ −|a|. (Why? Notice, too, that −|a| is a divisor
of a.) Similarly, every divisor d of b must be less than or equal to |b| and
greater than or equal to −|b| (and both |b| and −|b| are divisors of b). It
follows that every common divisor of a and b must be less than or equal to
the lesser of |a| and |b|, and must be greater than or equal to the greater of
−|a| and −|b|. Thus, there are only finitely many common divisors of a and
b, and they all lie in the range max(−|a|, −|b|) to min(|a|, |b|).
We will now focus on a very special common divisor of a and b.
Definition 1.6. Given two (nonzero) integers a and b, the greatest common
divisor of a and b (written as gcd(a, b)) is the largest of the common divisors
of a and b.
Note that since there are only finitely many common divisors of a and
b, it makes sense to talk about the largest of the common divisors.
Question 1.7. By contrast, must an infinite set of integers nec-
essarily have a largest element? Must an infinite set of integers
necessarily fail to have a largest element? What would your answers
to these two questions be if we restricted our attention to an infinite
set of positive integers? How about if we restricted our attention to
an infinite set of negative integers?
Notice that since 1 is already a common divisor, the greatest common
divisor of a and b must be at least as large as 1. We can conclude from this
that the greatest common divisor of two nonzero integers a and b must be
positive.
Question 1.8. If p and q are two positive integers and if q divides
p, what must gcd(p, q) be?
See the notes on Page 20 for a discussion on the restriction that both a
and b be nonzero in Denition 1.6 above.
Let us derive an alternative formulation for the greatest common divisor
that will be very useful. Given two nonzero integers a and b, any integer
that can be expressed in the form xa + yb for some integers x and y is called
a linear combination of a and b. (For example, a = 1 · a + 0 · b is a linear
combination of a and b; so are 3a − 5b, −6a + 10b, −b = 0 · a + (−1) · b, etc.)
Write P for the set of linear combinations of a and b that are positive. (For
instance, if a = 2 and b = 3, then −2 = (−1) · 2 + 0 · 3 would not be in
P as −2 is negative, but 7 = 2 · 2 + 1 · 3 would be in P as 7 is positive.) Now
here is something remarkable: the smallest element in P turns out to be the
greatest common divisor of a and b! We will prove this below.
Theorem 1.9. Given two nonzero integers a and b, let P be the set {xa +
yb | x, y ∈ Z, xa + yb > 0}. Let d be the least element in P. Then d =
gcd(a, b). Moreover, every element of P is divisible by d.
Proof. First observe that P is not empty. For if a > 0, then a ∈ P, and if
a < 0, then −a ∈ P. Thus, since P is a nonempty subset of N (actually, of
the positive integers as well), the Well-Ordering Principle guarantees that
there is a least element d in P, as claimed in the statement of the theorem.
To show that d = gcd(a, b), we need to show that d is a common divisor
of a and b, and that d is the largest of all the common divisors of a and b.
First, since d ∈ P, and since every element in P is a linear combination
of a and b, d itself can be written as a linear combination of a and b. Thus,
there exist integers x and y such that d = xa +yb. (Note: These integers x
and y need not be unique. For instance, if a = 4 and b = 6, we can express
2 as both (1) 4 + 1 6 and (4) 4 + 3 6. However, this will not be a
problem; we will simply pick one pair x, y for which d = xa + yb and stick
to it.)
Let us show that d is a common divisor of a and b. Write a = dq + r for
integers q and r with 0 ≤ r < d (division algorithm). We need to show that
r = 0. Suppose to the contrary that r > 0. Write r = a − dq. Substituting
xa + yb for d, we find that r = (1 − xq)a + (−yq)b. Thus, r is a positive
linear combination of a and b that is less than d, a contradiction, since d is
the smallest positive linear combination of a and b. Hence r must be zero,
that is, d must divide a. Similarly, one can prove that d divides b as well,
so that d is indeed a common divisor of a and b.
Now let us show that d is the largest of the common divisors of a and
b. This is the same as showing that if c is any common divisor of a and b,
then c must be no larger than d. So let c be any common divisor of a and b.
Then, by Lemma 1.2 and the fact that d = xa + yb, we find that c|d. Thus,
c ≤ |d| (why?). But since d is positive, |d| is the same as d. Thus, c ≤ d, as
desired.
To prove the last statement of the theorem, note that we have already
proved that d|a and d|b. By Lemma 1.2, d must divide all linear combina-
tions of a and b, and must hence divide every element of P.
We have thus proved our theorem. □
In the course of proving Theorem 1.9 above, we have actually proved
something else as well, which we will state as a separate result:
Proposition 1.10. Every common divisor of two nonzero integers a and b
divides their greatest common divisor.
Proof. As remarked above, the ideas behind the proof of this proposition are
already contained in the proof of Theorem 1.9 above. We saw there that if
c is any common divisor of a and b, then c must divide d, where d is the
minimum of the set P defined in the statement of the theorem. But this,
along with the other arguments in the proof of the theorem, showed that d
must be the greatest common divisor of a and b. Thus, to say that c divides
d is really to say that c divides the greatest common divisor of a and b, thus
proving the proposition. □
Exercise 1.37 will yield yet another description of the greatest common
divisor.
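Theorem 1.9 is also easy to probe numerically. The following Python sketch searches a window of linear combinations xa + yb, takes the least positive one, and compares it with the greatest common divisor computed independently; the window of coefficients from −50 to 50 is an arbitrary choice made here for illustration:

from math import gcd

a, b = 48, 30
P = {x * a + y * b
     for x in range(-50, 51)
     for y in range(-50, 51)
     if x * a + y * b > 0}
d = min(P)
print(d, gcd(a, b))                  # both are 6
print(all(p % d == 0 for p in P))    # True: d divides every element of P

Of course the computer only samples finitely many combinations; the theorem (via the Well-Ordering Principle) is what guarantees the pattern in general.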
Question 1.11. Given two nonzero integers a and b for which one
can find integers x and y such that xa + yb = 2, can you conclude
from Theorem 1.9 that gcd(a, b) = 2? If not, why not? What,
then, are the possible values of gcd(a, b)? Now suppose there exist
integers x′ and y′ such that x′a + y′b = 1. What can you conclude
about gcd(a, b)?

Write a′ = a/p_1 = a/q_1. If a′ = 1, this
means that a = p_1 = q_1, and there is nothing to prove: the factorization of
a is already unique. So assume that a′ > 1. Then a′ is a positive integer
greater than 1 and less than a, so by our assumption about a, any prime
factorization of a′ is unique up to rearrangement. Since a′ is obtained by
dividing a by p_1 (= q_1), we find
that a′ = p_1^{n_1 − 1} p_2^{n_2} ⋯ p_s^{n_s} = q_1^{m_1 − 1} q_2^{m_2} ⋯ q_t^{m_t}.
So, by the uniqueness of the prime factorization of a′, we find that n_1 − 1 = m_1 − 1
(so n_1 = m_1), s = t, and, after relabeling the primes if necessary, p_i = q_i,
and similarly, n_i = m_i, for i = 2, . . . , s (= t). This establishes that the two
prime factorizations of a we began with are indeed the same, except perhaps
for rearrangement.
□
Remark 1.21. While Theorem 1.19 only talks about integers greater than
1, a similar result holds for integers less than −1 as well: every integer less
than −1 can be factored as −1 times a product of primes, and these primes
are unique, except perhaps for order. This is clear, since, if a is a negative
integer less than −1, then a = −1 · |a|, and of course, |a| > 1 and therefore
admits unique prime factorization.
The following result follows easily from studying prime factorizations
and will be useful in the exercises:
Proposition 1.22. Let a and b be integers greater than 1. Then b divides
a if and only if the prime factors of b are a subset of the prime factors of a,
and if a prime p occurs in the factorization of b with exponent y and in the
factorization of a with exponent x, then y ≤ x.
Proof. Let us assume that b|a, so a = bc for some integer c. If c = 1,
then a = b, and there is nothing to prove: the assertion is obvious. So
suppose c > 1. Then c also has a factorization into primes, and multiplying
together the prime factorizations of b and c, we get a factorization of bc into
a product of primes. On the other hand, bc is just a, and a has its own
prime factorization as well. By the uniqueness of prime factorizations, the
prime factorization of bc that we get from multiplying together the prime
factorizations of b and c must be the prime factorization of a. In particular,
the prime factors of b (and c) must be a subset of the prime factors of a. Now
suppose that a prime p occurs to the power x in the factorization of a, to the
power y in the factorization of b, and to the power z in the factorization of c.
Multiplying together the factorizations of b and c, we find that p occurs to
the power y + z in the factorization of bc. Since the factorization of bc is just
the factorization of a and since p occurs to the power x in the factorization
of a, we find that x = y + z. In particular, y ≤ x. This proves one half of
the proposition.
As for the converse, assume that b has the prime factorization b =
p_1^{n_1} ⋯ p_s^{n_s}. Then, by the hypothesis, the primes p_1, . . . , p_s must all appear in
the prime factorization of a with exponents at least n_1, . . . , n_s (respectively).
Thus, the prime factorization of a must look like a = p_1^{m_1} ⋯ p_s^{m_s} p_{s+1}^{m_{s+1}} ⋯ p_t^{m_t},
where m_i ≥ n_i for i = 1, . . . , s, and where p_{s+1}, . . . , p_t are other primes.
Writing c for p_1^{m_1 − n_1} ⋯ p_s^{m_s − n_s} p_{s+1}^{m_{s+1}} ⋯ p_t^{m_t} and noting that m_i − n_i ≥ 0
for i = 1, . . . , s by hypothesis, we find that c is an integer, and clearly,
a = (p_1^{n_1} ⋯ p_s^{n_s})c, i.e., a = bc. This proves the converse.
□
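Proposition 1.22 turns a divisibility question into a comparison of exponents, and this is easy to try out on examples. Here is a naive Python sketch (trial division, fine for small numbers; the function names are simply labels chosen here):

def factor(n):
    # Prime factorization of n > 1, returned as a dict {prime: exponent}.
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def divides(b, a):
    # Test b | a by comparing exponents, as in Proposition 1.22.
    fa, fb = factor(a), factor(b)
    return all(fa.get(p, 0) >= e for p, e in fb.items())

print(factor(84), factor(375))    # {2: 2, 3: 1, 7: 1} {3: 1, 5: 3}
print(divides(12, 84), 84 % 12)   # True 0
print(divides(8, 84))             # False: 2 occurs only squared in 84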
We have proved the Fundamental Theorem of Arithmetic, but there
remains the question of showing that there are infinitely many primes. The
proof that we provide is due to Euclid, and is justly celebrated for its beauty.
Theorem 1.23. (Euclid) There exist infinitely many prime numbers.
Proof. Assume to the contrary that there are only finitely many primes.
Label them p_1, p_2, . . . , p_n. (Thus, we assume that there are n primes.)
Consider the integer a = p_1 p_2 ⋯ p_n + 1. Since a > 1, a admits a prime
factorization by Theorem 1.19. Let q be any prime factor of a. Since the
set {p_1, p_2, . . . , p_n} contains all the primes, q must be in this set, so q
must equal, say, p_i. But then, a = q(p_1 p_2 ⋯ p_{i−1} p_{i+1} ⋯ p_n) + 1, so we get
a remainder of 1 when we divide a by q. In other words, q cannot divide a.
This is a contradiction. Hence there must be infinitely many primes! □
Question 1.24. What is wrong with the following "proof" of The-
orem 1.23? "There are infinitely many positive integers. Each of
them factors into primes by Theorem 1.19. Hence there must be
infinitely many primes."
Further Exercises
Exercise 1.25. In this exercise, we will formally prove the validity of various
quick tests for divisibility that we learn in high school!
1. Prove that an integer is divisible by 2 if and only if the digit in the units
place is divisible by 2. (Hint: Look at a couple of examples: 58 = 5 · 10 + 8,
while 57 = 5 · 10 + 7. What does Lemma 1.2 suggest in the context of
these examples?)
2. Prove that an integer (with two or more digits) is divisible by 4 if and only
if the integer represented by the tens digit and the units digit is divisible
by 4. (To give you an example, the integer represented by the tens digit
and the units digit of 1024 is 24, and the assertion is that 1024 is divisible
by 4 if and only if 24 is divisible by 4, which it is!)
3. Prove that an integer (with three or more digits) is divisible by 8 if and
only if the integer represented by the hundreds digit and the tens digit
and the units digit is divisible by 8.
4. Prove that an integer is divisible by 3 if and only if the sum of its digits is
divisible by 3. (For instance, the sum of the digits of 1024 is 1 + 0 + 2 + 4 =
7, and the assertion is that 1024 is divisible by 3 if and only if 7 is divisible
by 3, and therefore, since 7 is not divisible by 3, we can conclude that
1024 is not divisible by 3 either! Here is a hint in the context of this
example: 1024 = 1 · 1000 + 0 · 100 + 2 · 10 + 4 = 1 · (999 + 1) + 0 · (99 +
1) + 2 · (9 + 1) + 4. What can you say about the terms containing 9, 99,
and 999 as far as divisibility by 3 is concerned? Then, what does Lemma
1.2 suggest?)
5. Prove that an integer is divisible by 9 if and only if the sum of its digits
is divisible by 9.
6. Prove that an integer is divisible by 11 if and only if the difference between
the sum of the digits in the units place, the hundreds place, the ten
thousands place, . . . (the places corresponding to the even powers of
10) and the sum of the digits in the tens place, the thousands place,
the hundred thousands place, . . . (the places corresponding to the odd
powers of 10) is divisible by 11. (Hint: 10 = 11 − 1, 100 = 99 + 1,
1000 = 1001 − 1, 10000 = 9999 + 1, etc. What can you say about the
integers 11, 99, 1001, 9999, etc. as far as divisibility by 11 is concerned?)
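Before proving these rules, it is fun (and reassuring) to check them against honest division on many numbers at once; a Python sketch for the rules for 3, 9, and 11:

def digit_sum(n):
    return sum(int(d) for d in str(n))

def alternating_digit_sum(n):
    # Digits weighted +1, -1, +1, ... starting from the units place.
    digits = [int(d) for d in str(n)][::-1]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))

for n in range(1, 100000):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
    assert (n % 9 == 0) == (digit_sum(n) % 9 == 0)
    assert (n % 11 == 0) == (alternating_digit_sum(n) % 11 == 0)
print("all divisibility rules confirmed up to 99999")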
Exercise 1.26. Given nonzero integers a and b, with b > 0, write a = bq + r
(division algorithm). Show that gcd(a, b) = gcd(b, r).
(This exercise forms the basis for the Euclidean algorithm for finding the
greatest common divisor of two nonzero integers. For instance, how do we
find the greatest common divisor of, say, 48 and 30 using this algorithm? We
divide 48 by 30 and find a remainder of 18, then we divide 30 by 18 and
find a remainder of 12, then we divide 18 by 12 and find a remainder of 6,
and finally, we divide 12 by 6 and find a remainder of 0. Since 6 divides 12
evenly, we claim that gcd(48, 30) = 6. What is the justification for this claim?
Well, applying the statement of this exercise to the first division, we find that
gcd(48, 30) = gcd(30, 18). Applying the statement to the second division,
we find that gcd(30, 18) = gcd(18, 12). Applying the statement to the third
division, we find that gcd(18, 12) = gcd(12, 6). Since the fourth division shows
that 6 divides 12 evenly, gcd(12, 6) = 6. Working our way backwards, we obtain
gcd(48, 30) = gcd(30, 18) = gcd(18, 12) = gcd(12, 6) = 6.)
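Written as a short Python function, the Euclidean algorithm of this exercise looks like this (with the 48 and 30 example):

def euclid_gcd(a, b):
    # Repeatedly replace (a, b) by (b, r), where r is the remainder of a
    # divided by b; by this exercise, gcd(a, b) = gcd(b, r) at every step.
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(48, 30))   # 6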
Exercise 1.27. Given nonzero integers a and b, let h = a/gcd(a, b) and
k = b/gcd(a, b). Show that gcd(h, k) = 1.
Exercise 1.28. Show that if a and b are nonzero integers with gcd(a, b) = 1,
and if c is an arbitrary integer, then a|c and b|c together imply ab|c. Give a
counterexample to show that this result is false if gcd(a, b) ≠ 1. (Hint: Just as
in the proof of Lemma 1.14, use the fact that gcd(a, b) = 1 to write 1 = xa+yb
for suitable integers x and y, and then multiply both sides by c. Now stare hard
at your equation!)
Exercise 1.29. The Fibonacci Sequence, 1, 1, 2, 3, 5, 8, 13, . . . , is defined as
follows: If a_i stands for the ith term of this sequence, then a_1 = 1, a_2 = 1,
and for n ≥ 3, a_n is given by the formula a_n = a_{n−1} + a_{n−2}. Prove that for all
n ≥ 2, gcd(a_n, a_{n−1}) = 1.
Exercise 1.30. Given an integer n ≥ 1, recall that n! is the product 1 · 2 ·
3 ⋯ (n − 1) · n. Show that the integers (n + 1)! + 2, (n + 1)! + 3, . . . , (n + 1)! +
(n + 1) are all composite.
Exercise 1.31. Use Exercise 1.30 to prove that given any positive integer n,
one can always find consecutive primes p and q such that q − p ≥ n.
Exercise 1.32. If m and n are odd integers, show that 8 divides m^2 − n^2.
Exercise 1.33. Show that 3 divides n^3 − n for any integer n. (Hint: Factor
n^3 − n as n(n^2 − 1) = n(n − 1)(n + 1). Write n as 3q + r, where r is one of 0,
1, or 2, and examine, for each value of r, the divisibility of each of these factors
by 3. This result is a special case of Fermat's Little Theorem, which you will
encounter as Theorem 4.42 in Chapter 4 ahead.)
Exercise 1.34. Here is another instance of Fermat's Little Theorem: show
that 5 divides n^5 − n for any integer n. (Hint: As in the previous exercise, factor
n^5 − n appropriately, and write n = 5q + r for 0 ≤ r < 5.)
Exercise 1.35. Show that 7 divides n^7 − n for any integer n.
Exercise 1.36. Let n be an integer greater than 1, with prime factorization
n = p_1^{n_1} p_2^{n_2} ⋯ p_k^{n_k}. Use Proposition 1.22 to show that the number of positive
divisors of n is (n_1 + 1)(n_2 + 1) ⋯ (n_k + 1).
Exercise 1.37. Let m and n be positive integers. By allowing the exponents
in the prime factorizations of m and n to equal 0 if necessary, we may assume
that m = p_1^{m_1} p_2^{m_2} ⋯ p_k^{m_k} and n = p_1^{n_1} p_2^{n_2} ⋯ p_k^{n_k}, where for i = 1, . . . , k, p_i
is prime, m_i ≥ 0, and n_i ≥ 0. (For instance, we can rewrite the factorizations
84 = 2^2 · 3 · 7 and 375 = 3 · 5^3 as 84 = 2^2 · 3 · 5^0 · 7 and 375 = 2^0 · 3 · 5^3 · 7^0.)
For each i, let d_i = min(m_i, n_i). Prove that gcd(m, n) = p_1^{d_1} p_2^{d_2} ⋯ p_k^{d_k}.
Exercise 1.38. Given two (nonzero) integers a and b, the least common mul-
tiple of a and b (written as lcm(a, b)) is defined to be the smallest of all the
positive common multiples of a and b.
1. Show that this definition makes sense, that is, show that the set of positive
common multiples of a and b has a smallest element.
2. Retaining the notation of Exercise 1.37 above, let l_i = max(m_i, n_i) (i =
1, . . . , k). Show that lcm(m, n) = p_1^{l_1} p_2^{l_2} ⋯ p_k^{l_k}.
3. Use Exercise 1.37 and Part 2 above to show that lcm(a, b) = ab/gcd(a, b).
4. Conclude that if gcd(a, b) = 1, then lcm(a, b) = ab.
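Parts 2 and 3 are easy to test numerically. The following Python sketch computes the least common multiple from the prime factorizations (using a trial-division factorizer, rewritten here so the block is self-contained) and checks it against ab/gcd(a, b):

from math import gcd

def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def lcm_via_exponents(m, n):
    # Take each prime to the *larger* of its two exponents.
    fm, fn = factor(m), factor(n)
    l = 1
    for p in set(fm) | set(fn):
        l *= p ** max(fm.get(p, 0), fn.get(p, 0))
    return l

m, n = 84, 375
print(lcm_via_exponents(m, n))   # 10500
print(m * n // gcd(m, n))        # 10500 as well: lcm(m, n) = mn/gcd(m, n)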
Exercise 1.39. Let a = p^n, where p is a prime and n is a positive integer.
Prove that the number of integers x such that 1 ≤ x ≤ a and gcd(x, a) = 1 is
p^n − p^{n−1}.
(More generally, if a is any integer greater than 1, one can ask for the number
of integers x such that 1 ≤ x ≤ a and gcd(x, a) = 1. This number is denoted
by φ(a), and is referred to as Euler's φ-function. It turns out that if a has the
prime factorization p_1^{m_1} p_2^{m_2} ⋯ p_k^{m_k}, then φ(a) = φ(p_1^{m_1}) · φ(p_2^{m_2}) ⋯ φ(p_k^{m_k})!
Delightful as this statement is, we will not delve deeper into it in this book, but
you are encouraged to read about it in any introductory textbook on number
theory.)
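Counting directly is a good way to convince yourself of the formula for prime powers before proving it; a brute-force Python sketch:

from math import gcd

def phi(a):
    # Brute-force count of the x with 1 <= x <= a and gcd(x, a) = 1.
    return sum(1 for x in range(1, a + 1) if gcd(x, a) == 1)

for p in (2, 3, 5, 7):
    for n in (1, 2, 3):
        assert phi(p**n) == p**n - p**(n - 1)
print(phi(8), phi(9), phi(84))   # 4 6 24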
Exercise 1.40. The series 1 + 1/2 + 1/3 + ⋯ is known as the harmonic series.
This exercise concerns the partial sums (see below) of this series.
1. Fix an integer n ≥ 1, and let S_n denote the set {1, 2, . . . , n}. Let 2^t be
the highest power of 2 that appears in S_n. Show that 2^t does not divide
any element of S_n other than itself.
2. For any integer n ≥ 1, the nth partial sum of the harmonic series is the
sum of the first n terms of the series, that is, it is the number 1 + 1/2 +
1/3 + ⋯ + 1/n. Show that if n ≥ 2, the nth partial sum is not an integer
as follows:
(a) Clearing denominators, show that the nth partial sum may be written
as a/b, where b = n! and a = n!/1 + n!/2 + ⋯ + n!/n = (2 · 3 ⋯ n) +
(1 · 3 · 4 ⋯ n) + (1 · 2 · 4 · 5 ⋯ n) + ⋯ + (1 · 2 · 3 ⋯ (n − 1)).
(b) Let S_n and 2^t be as in part 1 above. Also, let 2^m be the highest
power of 2 that divides n!. Show that m ≥ t ≥ 1 and that m ≥
m − t + 1 ≥ 1.
(c) Conclude from part 2b above that 2^{m−t+1} divides b.
(d) Use part 1 to show that 2^{m−t+1} divides all the summands in the
expression in part 2a above for a except the term (2 · 3 ⋯ (2^t −
1) · (2^t + 1) ⋯ n).
(e) Conclude that 2^{m−t+1} does not divide a.
(f) Conclude that the nth partial sum is not an integer.
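Exact rational arithmetic makes the phenomenon of this exercise easy to observe; a Python sketch using the fractions module:

from fractions import Fraction

s = Fraction(0)
for n in range(1, 30):
    s += Fraction(1, n)
    if n >= 2:
        assert s.denominator != 1   # the nth partial sum is not an integer
print(s)   # the 29th partial sum, as an exact fraction

Printing s.denominator for successive n, and counting the powers of 2 in it, is a good way to get a feel for the argument outlined in part 2.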
Exercise 1.41. Fix an integer n ≥ 1, and let S_n denote the set {1, 3, 5, . . . ,
2n − 1}. Let 3^t be the highest power of 3 that appears in S_n. Show that 3^t does
not divide any element of S_n other than itself. Can you use this result to show
that the nth partial sums (n ≥ 2) of a series analogous to the harmonic series
(see Exercise 1.40 above) are not integers?
Exercise 1.42. Prove using the unique prime factorization theorem that √2
is not a rational number. Using essentially the same ideas, show that √p is not
a rational number for any prime p. (Hint: Suppose that √2 = a/b for integers
a and b; then a^2 = 2b^2. Now compare the exponent of 2 on the two sides of
this equation.)

Notes

Remarks on Question 1.11: It would be a different matter if you could find
integers x′ and y′ such that x′a + y′b = 1!
Notice though that if you know that there exist integers x′ and y′ such that
x′a + y′b = 1, you can conclude that gcd(a, b) = 1. For 1 has to be the least positive
linear combination of a and b, since there is no positive integer smaller than 1.
Remarks on the definition of the greatest common divisor. We have
defined the greatest common divisor of two nonzero integers a and b to be the largest
of their common divisors (Definition 1.6), and we have noted that gcd(a, b) must
be positive. On the other hand, Proposition 1.10 showed that every common divisor
of a and b must divide gcd(a, b). Putting these together, we find that gcd(a, b) has
the following specific properties:
1. gcd(a, b) is a positive integer.
2. gcd(a, b) is a common divisor of a and b.
3. Every common divisor of a and b must divide gcd(a, b).
You will find that many textbooks have turned these properties around and have
used these properties to define the greatest common divisor! Thus, these textbooks
define the greatest common divisor of a and b to be that integer d which has the
following properties:
1. d is a positive integer.
2. d is a common divisor of a and b.
3. Every common divisor of a and b must divide d.
Of course, it is not immediately clear that such an integer d must exist, nor is it
clear that it must be unique, and these books then give a proof of the existence and
uniqueness of such a d. Furthermore, it is not immediately clear that the integer
d yielded by this definition is the same as the greatest common divisor as we have
defined it (although it will be clear if one takes a moment to think about it). The
reason why many books prefer to define the greatest common divisor as above is
that this definition applies (with a tiny modification) to other number systems
where the concept of a largest common divisor may not exist.
In the case of the integers, however, we prefer our Definition 1.6, since the
largest of the common divisors of a and b is exactly what we would intuitively
expect gcd(a, b) to be!
Chapter 2
Rings and Fields
2.1 Rings: Definition and Examples
Abstract algebra begins with the observation that several sets that occur
naturally in mathematics, such as the set of integers, the set of rationals,
the set of 2×2 matrices with entries in the reals, the set of functions from the
reals to the reals, all come equipped with certain operations that allow one
to combine any two elements of the set and come up with a third element.
These operations go by different names, such as addition, multiplication, or
composition (you would have seen the notion of composing two functions in
calculus). Abstract algebra studies mathematics from the point of view of
these operations, asking, for instance, what properties of a given mathemat-
ical set can be deduced just from the existence of a given operation on the
set with a given list of properties. We will be dealing with some of the more
rudimentary aspects of this approach to mathematics in this book.
However, do not let the abstract nature of the subject fool you into
thinking that mathematics no longer deals with concrete objects! Abstrac-
tion grows only from extensive studies of the concrete; it is merely a device
(albeit an extremely effective one) for codifying phenomena that simultane-
ously occur in several concrete mathematical sets. In particular, to under-
23
24 CHAPTER 2. RINGS AND FIELDS
stand an abstract concept well, you must work with the specific examples
from which the abstract concept grew (remember the advice on active learn-
ing).
Let us look at Z, focusing on the operations of addition and multiplica-
tion.
Given a set S, recall that a binary operation on S is a process that takes
an ordered pair of elements from S and gives us a third member of the set.
It is helpful to think of this in more abstract terms: a binary operation
on S is just a function f : S × S → S, that is, a rule that assigns to each
ordered pair (a, b) a third element f(a, b). Given an arbitrary set S, it is
quite easy to define binary operations on it, but it is much harder to define
binary operations that satisfy additional properties.
Question 2.1. How many different binary operations can be de-
fined on the set {0, 1}? Now select some of these binary operations
and check whether they are associative or commutative. How many
binary operations can be constructed on a set T that has n elements?
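A binary operation on {0, 1} is determined by its value on each of the four ordered pairs, so a computer can enumerate every one of them and test associativity and commutativity mechanically. Work out the count by hand first; then the following Python sketch can confirm it:

from itertools import product

S = (0, 1)
pairs = [(a, b) for a in S for b in S]
count = 0
for values in product(S, repeat=len(pairs)):
    op = dict(zip(pairs, values))    # one binary operation on {0, 1}
    count += 1
    assoc = all(op[(op[(a, b)], c)] == op[(a, op[(b, c)])]
                for a in S for b in S for c in S)
    comm = all(op[(a, b)] == op[(b, a)] for a in S for b in S)
    # print(op, assoc, comm)         # uncomment to inspect each operation
print(count)   # the number of binary operations on a 2-element set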
What will be crucial to us is that addition and multiplication are special
binary operations on Z that satisfy certain extra properties.
First, why are addition and multiplication binary operations? The pro-
cess of adding two integers is of course familiar to us, but suppose we view
addition abstractly as a rule that assigns to each ordered pair of integers
(m, n) the integer m + n. (For instance, addition assigns to the ordered pair
(2, 3) the integer 5, to the ordered pair (−3, 4) the integer 1, to the ordered
pair (1, 0) the integer 1, etc.) It is clear then that addition is indeed a binary
operation: it takes an ordered pair of integers, namely (m, n), and gives us
a third uniquely determined integer, namely m + n. Similarly, multiplication
too is a binary operation: it is a rule that assigns to every ordered pair of
integers (m, n) the uniquely determined integer m · n.
What are the properties of these binary operations? Let us consider
addition first. It is customary to write (Z, +) to emphasize the fact that we
are considering Z not just as a set of objects, but as a set with the binary
operation of addition. (We will temporarily ignore the fact that Z has a
second binary operation, namely multiplication, defined on it.) The first
property that (Z, +) has is that + is associative. That is, for all integers a,
b, and c, (a + b) + c = a + (b + c). The second property that (Z, +) has is the
existence of an identity element with respect to +. This is the integer 0; it
satisfies the condition a + 0 = 0 + a = a for all integers a. The third property
of (Z, +) is the existence of inverses with respect to +. For every integer a,
there is an integer b (depending on a) such that a + b = b + a = 0. (It is
clear what this integer b is: it is just the integer −a.)
What these observations show is that the integers form a group with
respect to addition. We will study groups in detail in Chapter 4 ahead,
but let us introduce the concept here. It turns out that the situation we
have encountered above (namely, a set equipped with a binary operation
with certain properties) arises in several different areas of mathematics.
Precisely because the same situation appears in so many different contexts,
it has been given a name and has been studied extensively as a subject in
its own right.
Definition 2.2. A group is a set S with a binary operation ∗ : S × S → S
such that
1. ∗ is associative, i.e., a ∗ (b ∗ c) = (a ∗ b) ∗ c for all a, b, and c in S,
2. S has an identity element with respect to ∗, i.e., an element id such
that a ∗ id = id ∗ a = a for all a in S, and
3. every element of S has an inverse with respect to ∗, i.e., for every element
a in S there exists an element a^{−1} such that a ∗ a^{−1} = a^{−1} ∗ a = id.
To emphasize that there are two ingredients in this definition, the set S and
the operation ∗ with these special properties, the group is sometimes written
as (S, ∗), and S is often referred to as a group with respect to the operation ∗.
The reason that the integers form a group with respect to addition is
that if we take the set S of this definition to be Z, and if we take the
binary operation ∗ to be +, then the three conditions of the definition are
met. There is a vast and beautiful theory about groups, the beginnings of
which we will pursue in Chapter 4 ahead.
Observe that there is one more property of addition that we have not
listed yet, namely commutativity. This is the property that for all integers
a and b, a + b = b + a. In the language of group theory, this makes (Z, +)
an abelian group:
Definition 2.3. An abelian group is one in which the function ∗ in Definition
2.2 above satisfies the additional condition a ∗ b = b ∗ a for all a and b in S.
Commutativity of addition is a crucial property of the integers; the only
reason we delayed introducing it was to allow us first to introduce the notion
of a group.
Now let us consider multiplication. As with addition, we write (Z, ·)
to emphasize the fact that we are considering Z as a set with the binary
operation of multiplication, temporarily ignoring the operation addition.
As with addition, we find that multiplication is associative, that is, for all
integers a, b, and c, (a · b) · c = a · (b · c). Also, Z has an identity with respect
to multiplication. This is the integer 1; it satisfies a · 1 = 1 · a = a for all
integers a.
Question 2.4. Is (Z, ·) a group? In other words, do the integers
form a group with respect to multiplication? To answer this ques-
tion, you would check whether the three group axioms above hold
for (Z, ·). What is the inverse with respect to multiplication of 1?
What is the inverse of 2? What is the inverse of 0?
There are two more properties of multiplication of integers we wish to
consider. The first is that multiplication is commutative, that is, a · b =
b · a for all integers a and b. The second, which is not a property of just
multiplication alone, but rather a property that connects multiplication and
addition together, is the distributivity of multiplication over addition, that
is, for all integers a, b, and c, a · (b + c) = a · b + a · c, and (a + b) · c =
a · c + b · c. (Notice that since multiplication of integers is commutative, the
second relation in the previous sentence follows from the first!)
There are other properties of these operations of course (for instance
a · b = 0 implies that either a = 0 or b = 0), but we will study these
later. Let us meanwhile reflect on the properties that we have considered
so far. Studying them closely, one gets the sense that these properties are
somehow rather natural. For instance, if one were to think of the integers
as (intellectual) counting tools, then it is clear that addition must necessarily
be commutative, since commutativity of addition corresponds to the fact
that if you have a certain number of objects in one pile and a certain number
in another, then the total number of objects can be obtained either by
counting all the objects in the rst pile and then all the objects in the
second pile, or by counting all the objects in the second pile and then all
the objects in the rst pile.
This sense of these properties being natural is further reinforced when
we consider other number systems that we encounter in mathematics. For
instance, consider the set of all polynomials in one variable whose coefficients
are real numbers, a set with which you are already very familiar. (The real
numbers are traditionally denoted by R, and the set of all polynomials in
one variable whose coefficients are real numbers is traditionally denoted by
R[x].) This set, too, is more than just a collection of objects. Just as
with the integers, R[x] has two binary operations, also called addition and
multiplication. Recall that given two polynomials g(x) = Σ_{i=0}^{n} g_i x^i
and h(x) = Σ_{j=0}^{m} h_j x^j, we add g and h by adding together the
coefficients of the same powers of x, and we multiply g and h by multiplying
each monomial g_i x^i of g by each monomial h_j x^j of h and adding the results
together. (For instance, (1 + x + x^2) + (x + √3 x^3) is 1 + 2x + x^2 + √3 x^3,
and (1 + x + x^2) · (x + √3 x^3) is x + x^2 + (1 + √3)x^3 + √3 x^4 + √3 x^5.)
Furthermore, it is our experience
that these binary operations on R[x] satisfy the very same properties above
that the corresponding operations on Z satisfied.
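Representing a polynomial by its list of coefficients makes these two operations quite concrete. The following Python sketch reproduces the computation with (1 + x + x^2) and (x + √3 x^3) above, using floating-point arithmetic for √3:

import math

def poly_add(g, h):
    n = max(len(g), len(h))
    g = g + [0.0] * (n - len(g))
    h = h + [0.0] * (n - len(h))
    return [gi + hi for gi, hi in zip(g, h)]

def poly_mult(g, h):
    out = [0.0] * (len(g) + len(h) - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            out[i + j] += gi * hj   # g_i x^i times h_j x^j contributes to x^(i+j)
    return out

g = [1, 1, 1]                  # 1 + x + x^2
h = [0, 1, 0, math.sqrt(3)]    # x + sqrt(3) x^3
print(poly_add(g, h))          # coefficients of 1 + 2x + x^2 + sqrt(3) x^3
print(poly_mult(g, h))         # coefficients of x + x^2 + (1 + sqrt(3))x^3 + ...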
It turns out that these properties of addition and multiplication are
shared not just by Z and R[x], but by a whole host of number systems
in mathematics. Because of the importance of such sets with two binary
operations with these special properties, there is a special term for them:
they are called rings.
Definition 2.5. A ring is a set R with two binary operations + and · such
that
1. a + (b + c) = (a + b) + c for all a, b, c in R.
2. There exists an element in R, denoted 0, such that a + 0 = 0 + a = a
for all a in R.
3. For each a in R there exists an element in R, denoted −a, such that
a + (−a) = (−a) + a = 0.
4. a + b = b + a for all elements a, b in R.
5. a · (b · c) = (a · b) · c for all elements a, b, c in R.
6. There exists an element in R, denoted 1, such that a · 1 = 1 · a = a for
all a in R.
7. a · (b + c) = a · b + a · c and (a + b) · c = a · c + b · c for all elements a, b,
c in R.
Remark 2.6. The binary operation + is usually referred to as addition and
the binary operation · is usually referred to as multiplication, in keeping
with the terminology for the integers and other familiar rings. As is the
usual practice in high school algebra, one often suppresses the multiplication
symbol, that is, one often writes ab for a · b.
Remark 2.7. Just as we did earlier with the integers, if we temporarily ignore
the operation · on R and write (R, +) to indicate that we are focusing on
just the operation +, then the first four conditions in the definition of the
ring R show that (R, +) is an abelian group.
Remark 2.8. We have used the term "number system" at several places in
the book without really being explicit about what a number system is. We
did not have the language before this point to make our meaning precise,
but what we had intended to convey loosely by this term is the concept
of a set with two binary operations with properties much like those of the
integers. But now that we have the language, let us be precise: a number
system is just a ring as defined above!
It must be borne in mind however that "number system" is a nonstan-
dard term: it is not used very widely, and when used at all, different authors
mean different things by the term! So it is better to stick to "rings," which
is standard.
Observe that we left out one important property of the integers in our
definition of a ring, namely the commutativity of multiplication. And corre-
spondingly, we have included both left distributivity (a · (b + c) = a · b + a · c)
and right distributivity ((a + b) · c = a · c + b · c) of multiplication over addition.
While this may seem strange at first, think about the set of 2×2 matrices
with entries in R. Convince yourselves that this is a ring with respect to the
usual definitions of matrix addition and multiplication; see Example 2.16
ahead. Yet, even in this extremely familiar number system, multiplication
is not commutative; for instance,
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
Rings in which multiplication is not commutative are fairly common in
mathematics, and hence requiring commutativity of multiplication in the
definition of a ring would be too restrictive. On the other hand, there is
no denying that a significant proportion of the rings that we come across
indeed have multiplication that is commutative. Thus, it is reasonable to
single them out as special cases of rings, and we have the following:
Definition 2.9. A commutative ring is a ring R in which a · b = b · a for all
a and b in R.
(Rings in which the multiplication is not commutative are referred to as
noncommutative rings.)
The following are various examples of rings. (Once again, recall the ad-
vice in the preliminary chapter "To the Student," page ix, on reading actively.)
Example 2.10. The set of rational numbers, Q, with the usual operations of
addition and multiplication forms a ring. We know how to add and multiply
two rational numbers very well, and we know that all the ring axioms hold
for the rationals. (One can take a more advanced perspective and prove that
the ring axioms hold for the rationals, starting from the fact that they hold
for the integers. Although sound, such an approach is unduly technical for a first course.) Q is, in fact, a commutative ring.
Question 2.10.1. Q has one crucial property (with respect to
multiplication) that Z does not have. Can you discover what that
might be? (See the remarks on page 86 in the notes, but only after
you have thought about this question on your own!)
Example 2.11. In a like manner, both the reals, R, and the complexes,
usually denoted C, are rings under the usual operations of addition and
multiplication. Again, we will not try to prove that the ring axioms hold;
we will just invoke our intimate knowledge of R and C to recognize that
they are rings.
Example 2.12. Let Q[√2] denote the set of all real numbers of the form a + b√2, where a and b are rational numbers; it contains elements like 1 + √2, 1/7 + (1/5)√2, and so on. The sum and product of any two elements a + b√2 and c + d√2 of Q[√2] are again of this same form (see the remarks on page 87 in the notes), so the usual addition and multiplication of real numbers make Q[√2] a ring, in fact a commutative ring.

Example 2.13. Now let us generalize Example 2.12 above. Let m be any rational number. Note that if m is negative, √m will not be a real number but a complex (non-real) number. Let Q[√m] denote the set of all complex numbers of the form a + b√m, where a and b are rational numbers.

Question 2.13.1. What does Q[√m] reduce to if m is the square of a rational number?

Question 2.13.2. More generally, compare the sets Q[√m] and Q[√m′] when m and m′ are related by m′ = q²m for some rational number q. Are these the same sets?

Question 2.13.3. Under the usual addition and multiplication of complex numbers, does Q[√m] form a ring?

Example 2.14. An important special case of Example 2.13 is m = −1. Writing i for √−1, Q[i] denotes the set of all complex numbers of the form a + bi, where a and b are rational numbers; it forms a ring under the usual addition and multiplication of complex numbers.

Exercise 2.14.1. Show that if a and b are real numbers, then a + bi = 0 if and only if both a and b are zero. (See the notes on page 87 for a clue.)
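To get a feel for why Q[√2] is closed under addition and multiplication, here is a small Python sketch (an aside of mine, not the book's); it represents a + b√2 as the pair (a, b) of rational numbers:

    # Sketch (not from the text): arithmetic in Q[sqrt(2)], representing
    # a + b*sqrt(2) as the pair (a, b) of rationals.
    from fractions import Fraction

    def add(x, y):
        (a, b), (c, d) = x, y
        return (a + c, b + d)

    def mul(x, y):
        # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, since r*r = 2
        (a, b), (c, d) = x, y
        return (a * c + 2 * b * d, a * d + b * c)

    u = (Fraction(1, 7), Fraction(1, 5))   # 1/7 + (1/5)*sqrt(2)
    v = (Fraction(1, 2), Fraction(2, 1))   # 1/2 + 2*sqrt(2)
    print(*add(u, v))   # 9/14 11/5 -- coefficients are rational again
    print(*mul(u, v))   # likewise a pair of rationals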
Example 2.15. Consider the set of rational numbers q that have the property that when q is written in the reduced form a/b with a, b integers and gcd(a, b) = 1, the denominator b is odd. This set is usually denoted by Z_(2), and contains elements like 1/3, 5/7, −6/19, etc., but does not contain 1/4 or 5/62.

Question 2.15.1. Does Z_(2) contain 2/6?

Notice that every element of Z_(2) is just a fraction (albeit of a particular kind). We know how to add and multiply two fractions together, so we can use this knowledge to add and multiply any two elements of Z_(2). Here is the punch line: Z_(2) forms a ring under the usual operations of addition and multiplication of fractions! Strange as this ring may seem at first, it plays an important role in number theory.

Question 2.15.2. Check that if you add (or multiply) two fractions in Z_(2) you get a fraction that is not an arbitrary rational number but one that also lives in Z_(2). What role does the fact that the denominators are odd play in ensuring this? (The role of the odd denominators is rather crucial; make sure that you understand it!)

Question 2.15.3. Why do associativity and distributivity follow from the fact that Z_(2) ⊆ Q?

Question 2.15.4. Do the other ring axioms hold? Check!

Question 2.15.5. Can you generalize this construction to other subsets of Q where the denominators have analogous properties? (See the notes on page 87 for some comments.)
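As a quick check on Question 2.15.2, the following Python sketch (mine, not the book's) tests membership in Z_(2) and closure under addition and multiplication; note that Fraction always stores fractions in reduced form:

    # Sketch (not from the text): Z_(2) = fractions with odd reduced denominator.
    from fractions import Fraction

    def in_Z2(q):
        return q.denominator % 2 == 1   # denominator of the reduced form

    u, v = Fraction(1, 3), Fraction(5, 7)
    assert in_Z2(u) and in_Z2(v)
    assert in_Z2(u + v) and in_Z2(u * v)   # closure: odd times odd is odd
    print(u + v, u * v)                    # 22/21 5/21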
Example 2.16. The set of n × n matrices with entries in R, denoted M_n(R), where n is a positive integer, forms a ring with respect to the usual operations of matrix addition and multiplication. For almost all values of n, matrix multiplication is not commutative.

Question 2.16.1. What is the exception?
Checking associativity of addition and multiplication and the distribu-
tivity of multiplication over addition is tedious, but you should check at
least one of them so as to be familiar with the process.
Exercise 2.16.1. For example, prove that for any three matrices
A, B, and C, (A+B) +C = A+ (B +C).
What is important is that you get a feel for how associativity and distributivity in M_n(R) derive from the fact that associativity and distributivity hold for R.

Question 2.16.2. What about the ring axioms other than associativity and distributivity: do they hold?

Question 2.16.3. What are the additive and multiplicative identities?
Question 2.16.4. Let e_{i,j} denote the matrix with 1 in the (i, j)-th slot and 0 everywhere else. Study the case of 2 × 2 matrices and guess at a formula for the product e_{i,j} · e_{k,l}. (You need not try to prove formally that your formula is correct, but after you have made your guess, substitute various values for i, j, k, and l and test your guess.)
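If you would like to test your guess in Question 2.16.4 mechanically, a tiny Python sketch (mine) can print every product e_{i,j} · e_{k,l} in the 2 × 2 case:

    # Sketch (not from the text): all products e_ij * e_kl for 2 x 2 matrices.
    from itertools import product

    def e(i, j):
        return [[1 if (r, c) == (i, j) else 0 for c in range(2)] for r in range(2)]

    def mat_mul(A, B):
        return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
                for r in range(2)]

    for i, j, k, l in product(range(2), repeat=4):
        print((i, j), (k, l), mat_mul(e(i, j), e(k, l)))
    # Look for the pattern: the product is nonzero exactly when j == k.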
Question 2.16.5. Would the ring axioms still be satisfied if we only considered the set of n × n matrices whose entries came from Q? From Z?

Question 2.16.6. Now suppose R is any ring. Let us consider the set M_n(R) of n × n matrices with entries in R, with the usual definitions of matrix addition and multiplication. Is M_n(R) with these operations a ring? What if R is not commutative? Does this affect whether M_n(R) is a ring or not?

(See the notes on page 88 for some hints.)
Example 2.17. R[x], the set of polynomials in one variable with coefficients from R, forms a ring with respect to the usual operations of polynomial addition and multiplication. (We have considered this before.) Here, x denotes the variable. Of course, one could use any letter to represent the variable. For instance, one could refer to the variable as t, in which case the set of polynomials with coefficients in R would be denoted by R[t]. Sometimes, to emphasize our choice of notation for the variable, we refer to R[x] as the set of polynomials in the variable x with coefficients in R, and we refer to R[t] as the set of polynomials in the variable t with coefficients in R. Both R[x] and R[t], of course, refer to the same set of objects. Likewise, we often write f(x) (or f(t)) for a polynomial, rather than just f, to emphasize that the variable is x (or t).
If f(x) = a_0 + a_1 x + a_2 x^2 + ··· is a nonzero polynomial in R[x], the degree of f(x) is the largest value of n for which a_n ≠ 0, a_n x^n is known as the highest term, and a_n is known as the highest coefficient. Thus, the polynomials of degree 0 are precisely the nonzero constants. Polynomials of degree 1 are called linear, polynomials of degree 2 are called quadratic, polynomials of degree 3 are called cubic, and so on. Note that we have not defined the degree of the zero polynomial. This is on purpose: it will be convenient for the formulation of certain theorems if the zero polynomial does not have a degree!

It is worth recalling an elementary property of polynomials that we will use frequently (in fact, in a more formal treatment of polynomials, this fact is built into the definitions of polynomials): two polynomials are equal if and only if their coefficients are equal. That is, Σ f_i x^i = Σ g_i x^i if and only if f_i = g_i (i = 0, 1, . . . ). In particular, a polynomial Σ f_i x^i equals 0 if and only if each f_i = 0.
Exercise 2.17.1. Now just as with Example 2.16, prove that if
f, g, and h are any three polynomials in R[x], then (f + g) + h =
f + (g + h). Your proof should invoke the fact that associativity
holds in R.
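For experimentation, here is a minimal Python sketch (mine, not the book's) of polynomial addition and multiplication on coefficient lists; note how the associativity of poly_add reduces visibly to associativity of the coefficientwise sums in R:

    # Sketch (not from the text): polynomials as coefficient lists [a0, a1, ...].
    from itertools import zip_longest

    def poly_add(f, g):
        # coefficientwise addition; associativity comes from associativity in R
        return [a + b for a, b in zip_longest(f, g, fillvalue=0)]

    def poly_mul(f, g):
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] += a * b
        return out

    f, g, h = [1, 2], [0, 0, 3], [5]
    assert poly_add(poly_add(f, g), h) == poly_add(f, poly_add(g, h))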
Example 2.18. Instead of polynomials with coefficients from R, we can consider polynomials in the variable x with coefficients from an arbitrary ring R, with the usual definitions of addition and multiplication of polynomials. We get a ring, denoted R[x]. Thus, if we were to consider polynomials in the variable x whose coefficients are all integers, we get the ring Z[x].

Question 2.18.1. As always, convince yourself that for a general ring R, the set of polynomials R[x] forms a ring. For arbitrary R, is R[x] commutative?

(See the notes on page 88 for some hints and more remarks.)
Example 2.19. Generalizing Example 2.17, the set R[x, y] of polynomials in two variables x and y forms a ring. A polynomial in x and y is of the form Σ_{i,j} f_{i,j} x^i y^j. (For example, consider the polynomial 4 + 2x + 3y + x^2 y + 5xy^3; here, f_{0,0} is the coefficient of x^0 y^0, i.e., the coefficient of 1, so f_{0,0} = 4. Similarly, f_{1,3} is the coefficient of x^1 y^3, so it equals 5. On the other hand, f_{1,1} is zero, since there is no xy term.) Two polynomials Σ_{i,j} f_{i,j} x^i y^j and Σ_{i,j} g_{i,j} x^i y^j are equal if and only if for each pair (i, j), f_{i,j} = g_{i,j}.

In the same manner, we can consider R[x_1, . . . , x_n], the set of polynomials in n variables x_1, . . . , x_n with coefficients in R. These too form a ring. More generally, if R is any ring we may consider R[x_1, . . . , x_n], the set of polynomials in n variables x_1, . . . , x_n with coefficients in R. Once again, we get a ring.
Example 2.20. Here is a ring with only two elements! Divide the integers into two sets, the even integers and the odd integers. Let [0]_2 denote the set of even integers, and let [1]_2 denote the set of odd integers. (Notice that [0]_2 and [1]_2 are precisely the equivalence classes of Z under the equivalence relation defined by a ∼ b iff a − b is even.) Denote by Z/2Z the set {[0]_2, [1]_2}. Each element of {[0]_2, [1]_2} is itself a set containing an infinite number of integers, but we will ignore this fact. Instead, we will view all the even integers together as one "number" of Z/2Z, and we will view all the odd integers together as another "number" of Z/2Z. How should we add and multiply these new numbers? Recall that if we add two even integers we get an even integer, if we add an even and an odd integer we get an odd integer, and if we add two odd integers we get an even integer. This suggests the following addition rules in Z/2Z:

      +    [0]_2  [1]_2
    [0]_2  [0]_2  [1]_2
    [1]_2  [1]_2  [0]_2

(There is an obvious way to interpret this table: if you want to know what a + b is, you go to the cell corresponding to row a and column b.) Similarly, we know that the product of two even integers is even, the product of an even integer and an odd integer is even, and the product of two odd integers is odd. This gives us the following multiplication rules:

      ·    [0]_2  [1]_2
    [0]_2  [0]_2  [0]_2
    [1]_2  [0]_2  [1]_2

Later in this chapter (see Example 2.83 and the discussions preceding that example), we will interpret the ring Z/2Z differently: as a quotient ring of Z. This interpretation, in particular, will prove that Z/2Z is indeed a ring under the given operations. Just accept for now the fact that we get a ring, and play with it to develop a feel for it.
Question 2.20.1. How would you get a ring with three elements
in it? With four?
Example 2.21. Here is the answer to the previous two questions! We have observed that [0]_2 and [1]_2 are just the equivalence classes of Z under the equivalence relation a ∼ b iff a − b is even. Analogously, let us consider the equivalence classes of Z under the equivalence relation a R b iff a − b is divisible by 3. Since a − b is divisible by 3 exactly when a and b each leaves the same remainder when divided by 3, there are three equivalence classes: (i) [0]_3, the set of all those integers that yield a remainder of 0 when you divide them by 3. In other words, [0]_3 consists of all multiples of 3, that is, all integers of the form 3k, k ∈ Z. (ii) [1]_3 for the set of all those integers that yield a remainder of 1, so [1]_3 consists of all integers of the form 3k + 1, k ∈ Z. (iii) [2]_3 for the set of all those integers that yield a remainder of 2, so [2]_3 consists of all integers of the form 3k + 2, k ∈ Z. Write Z/3Z for the set {[0]_3, [1]_3, [2]_3}. Just as in the case of Z/2Z, every element of this set is itself a set consisting of an infinite number of integers, but we will ignore this fact. How would you add two elements of this set? In Z/2Z, we defined addition using observations like "an odd integer plus an odd integer gives you an even integer." The corresponding observations here are "an integer of the form 3k + 1 plus another integer of the form 3k + 1 gives you an integer of the form 3k + 2," "an integer of the form 3k + 1 plus another integer of the form 3k + 2 gives you an integer of the form 3k," "an integer of the form 3k + 2 plus another integer of the form 3k + 2 gives you an integer of the form 3k + 1," etc. We thus get the following addition table:

      +    [0]_3  [1]_3  [2]_3
    [0]_3  [0]_3  [1]_3  [2]_3
    [1]_3  [1]_3  [2]_3  [0]_3
    [2]_3  [2]_3  [0]_3  [1]_3

Exercise 2.21.1. Similarly, study how the remainders work out when we multiply two integers. (For instance, we find that an integer of the form 3k + 2 times an integer of the form 3k + 2 gives you an integer of the form 3k + 1, etc.) Derive the following multiplication table:

      ·    [0]_3  [1]_3  [2]_3
    [0]_3  [0]_3  [0]_3  [0]_3
    [1]_3  [0]_3  [1]_3  [2]_3
    [2]_3  [0]_3  [2]_3  [1]_3

This process can easily be generalized to yield a ring with n elements (Z/nZ) for any n ≥ 2.

Exercise 2.21.2. Construct the addition and multiplication tables for the ring Z/4Z.
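A short Python sketch (mine, not the book's) that prints the addition and multiplication tables of Z/nZ may help with Exercise 2.21.2:

    # Sketch (not from the text): addition and multiplication tables for Z/nZ.
    def tables(n):
        elems = range(n)
        add = [[(a + b) % n for b in elems] for a in elems]
        mul = [[(a * b) % n for b in elems] for a in elems]
        return add, mul

    add4, mul4 = tables(4)
    for row in add4:
        print(row)   # e.g. first row: [0, 1, 2, 3]
    for row in mul4:
        print(row)   # e.g. row for 2: [0, 2, 0, 2] -- note 2*2 = 0 in Z/4Z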
Example 2.22. Suppose R and S are two rings. (For example, take R = Z/2Z, and take S = Z/3Z.) Consider the Cartesian product T = R × S, which is the set of ordered pairs (r, s) with r ∈ R and s ∈ S. Define addition in T by (r, s) + (r′, s′) = (r + r′, s + s′). Here, r + r′ is computed using the addition in R, and s + s′ using the addition in S. Similarly, define multiplication in T by (r, s) · (r′, s′) = (rr′, ss′). Once again, rr′ is computed using the multiplication in R, and ss′ using the multiplication in S. With these operations, T forms a ring.

Example 2.31. The set of all real numbers of the form a + b√2, where a and b are integers, is a subring of Q[√2]. It is denoted by Z[√2].

Example 2.32. The set of all complex numbers of the form a + bi, where a and b are integers, is a subring of Q[i]. It is denoted by Z[i]. (It is often called the ring of Gaussian integers.)
Example 2.33. Let Z[1/2] denote the set of all rational numbers that are such that when written in the reduced form a/b with gcd(a, b) = 1, the denominator b is a power of 2. (Contrast this set with Z_(2).) This is a subring of Q.

Question 2.33.1. What are the rational numbers that this ring has in common with Z_(2)?
(See the notes on page 90 for clues.)
Example 2.34. Let Q[√2, √3] denote the set of all real numbers of the form a + b√2 + c√3 + d√6, where a, b, c, and d are rational numbers. This is a subring of R. (See Exercise 2.118 ahead.)
Question 2.48. If F is a field, is F an integral domain?

Question 2.51.1. Can the inverse of a nonzero element a + b√2 of Q[√2] be written as c + d√2 for suitable rational numbers c and d? Is Q[√2] a field?

Example 2.52. The complex numbers, C.

Question 2.52.1. What is the inverse of the nonzero number a + bi? (Give the inverse as c + di for suitable real numbers c and d: think in terms of "real-izing" denominators.)

Example 2.53. Q[i].

Question 2.53.1. Why is Q[i] a field?

Question 2.53.2. Is Z[i] a field?
Example 2.54. Here is a new example: the set of rational functions with coefficients from the reals, R(x). (Note the parentheses around the x.) This is the set of all quotients f(x)/g(x) of polynomials with coefficients from the reals, where f(x) and g(x) are elements of R[x] and g(x) ≠ 0. (Of course, we take f(x)/g(x) = f′(x)/g′(x) if f(x)g′(x) = g(x)f′(x).)

Exercise 2.59.1. (Here p is a prime, and we work with the classes [a]_p in Z/pZ.) Show that if a ≡ a′ (mod p) and b ≡ b′ (mod p), then a + b ≡ a′ + b′ (mod p) and a · b ≡ a′ · b′ (mod p).
Exercise 2.59.2. Show that the zero in this ring is [0]_p, and the 1 in this ring is [1]_p. (In particular, [a]_p is nonzero in Z/pZ precisely when a is not divisible by p.)

Exercise 2.59.3. Now let [a]_p be a nonzero element in Z/pZ. Show that [a]_p is invertible. (Hint: Invoking the fact that a and p are relatively prime, we find that there must exist integers x and y such that xa + yp = 1. So?)

Exercise 2.59.4. Now conclude using Exercise 2.47 and Exercise 2.59.3 above that Z/pZ is a field.
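The hint in Exercise 2.59.3 is, in effect, the extended Euclidean algorithm. Here is a Python sketch (mine, not the book's) that produces the inverse of a nonzero class [a]_p this way:

    # Sketch (not from the text): inverting a nonzero class [a]_p in Z/pZ
    # via the extended Euclidean algorithm (find x, y with x*a + y*p = 1).
    def ext_gcd(a, b):
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def inverse_mod(a, p):
        g, x, _ = ext_gcd(a % p, p)
        assert g == 1, "a must not be divisible by the prime p"
        return x % p

    print(inverse_mod(3, 7))   # 5, since 3*5 = 15 = 2*7 + 1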
We end this section with the concept of a subfield. The idea is very simple (compare with Definition 2.27 above):

Definition 2.60. A subset F of a field K is called a subfield of K if F is a subring of K and is itself a field. In this situation, we also describe K as a field extension of F, and refer to F and K jointly as the field extension K/F.

The difference between being a subring of K and a subfield of K is as follows: Suppose R is a subring of K. Given a nonzero element a in R, its multiplicative inverse 1/a certainly exists in K (why?). However, 1/a may not live inside R. If 1/a happens to live inside R, we say that a has a multiplicative inverse in R itself. Now, if every nonzero a in R has a multiplicative inverse in R itself, then by Definition 2.46 (why is R an integral domain?), R is a field. Therefore, by Definition 2.60 above, R is then a subfield of K.

Thus, Q is a subfield of R, but Z is only a subring of R; it is not a subfield of R. Similarly, R is a subfield of C. (Is R a subring of C?) Q[√2] is a subfield of C. In fact, more is true: Q is a subfield of Q[√2], which in turn is a subfield of R, which in turn is a subfield of C.

Question 2.61. By contrast, is Q[i] a subfield of R? Of C?

Question 2.62. Is Z[i] a subfield of R? Of C?

(See Exercise 2.128 at the end of the chapter for a situation in which we can conclude that a subring of a field must actually be a subfield.)
2.4 Ideals
Consider the ring Z, and consider the subset of even integers, denoted (suggestively) 2Z. The set 2Z is closed under addition (the sum of two even integers is again an even integer), and in fact, (2Z, +) is even an abelian group (this is because (i) 0 is an even integer and hence in 2Z, (ii) for any even integer a, −a is also an even integer and hence in 2Z, and (iii) of course, addition of integers, restricted to 2Z, is both an associative and commutative operation). Moreover, the set 2Z has one extra property that will be of interest: for any integer a ∈ 2Z and for any arbitrary integer m, am is also an even integer and hence in 2Z. Subsets such as these play a crucial role in the structure of rings, and are given a special name: they are referred to as ideals.

Definition 2.63. Let R be a ring. A subset I of R is called an ideal of R if I is closed under the addition operation of R and under this induced binary operation (I, +) is an abelian group, and if for any i ∈ I and arbitrary r ∈ R, both ri ∈ I and ir ∈ I. An ideal I is called proper if I ≠ R.
Remark 2.64. Of course, if R is commutative, as in the example of Z and 2Z above, ri ∈ I if and only if ir ∈ I, but in an arbitrary ring, one must specify in the definition that both ri and ir be in I.

Remark 2.65. Notice in the definition of ideals above that if ir ∈ I for all r ∈ R, then in particular, taking r to come from I, we find that I must be closed under multiplication as well, that is, for any i and j in I, ij must also be in I. Once we find that ideals are closed under multiplication, the associative and distributive laws will then be inherited from R, so ideals seem like they should be the same as subrings. However, they differ from subrings in one crucial aspect: ideals do not have to contain the multiplicative identity 1. (Recall the definition of subrings, and see the example of 2Z above; it certainly does not contain 1.)

Exercise 2.66. Show that if I is an ideal of a ring R, then 1 ∈ I implies I = R.
Here is an alternative characterization of ideals:

Lemma 2.67. Let I be a subset of a ring R. Then I is an ideal of R if and only if

1. I is nonempty,

2. I is closed under addition, and

3. for all i ∈ I and r ∈ R, both ir and ri are in I.

Proof. If I is an ideal of R, then by definition, I is closed under addition, and for all i ∈ I and r ∈ R, both ir and ri are in I. Moreover, by definition of being an ideal, (I, +) is an abelian group, so it has at least one element (the identity element). This shows that I is nonempty.

Now assume that I is nonempty, closed under addition, and for all i ∈ I and r ∈ R, both ir and ri are in I. Since I is nonempty, there exists at least one element in I, call it a. Then, by the hypotheses, a · 0 = 0 must be in I. Also, for any i ∈ I, i · (−1) = −i ∈ I. Since commutativity and associativity of addition in I follow from that in R, we find that indeed (I, +) is an abelian group. □
Exercise 2.68. If I is an ideal of R, then by definition, (I, +) is an abelian group. Consequently, it has an identity element, call it 0_I, that satisfies the property that i + 0_I = 0_I + i = i for all i ∈ I. On the other hand, the element 0 in R is the identity element for the group (R, +). Prove that the element 0_I must be the same as the element 0.

(See Exercise 4.22 in Chapter 4 ahead.)
The significance of ideals will become clear when we study quotient rings and ring homomorphisms a little ahead, but first let us consider several examples of ideals in rings:

Example 2.69. Convince yourselves that if R is any ring, then R and the set {0} are both ideals of R. The ideal {0} is often referred to informally as the zero ideal.
Example 2.70. Just as with the set 2Z, we may consider, for any integer
m, the set of all multiples of m, denoted mZ.
Exercise 2.70.1. Prove that mZ is an ideal of Z.
Question 2.70.1. What does mZ look like when m = 1?
Question 2.70.2. What does mZ look like when m = 0?
Example 2.71. In the ring R[x], let ⟨x⟩ denote the set of all polynomials that are a multiple of x, i.e., the set {xg(x) | g(x) ∈ R[x]}.

Exercise 2.71.1. Prove that ⟨x⟩ is an ideal of R[x].

Exercise 2.71.2. More generally, let f(x) be an arbitrary polynomial, and let ⟨f(x)⟩ denote the set of all polynomials that are a multiple of f(x), i.e., the set {f(x)g(x) | g(x) ∈ R[x]}. Show that ⟨f(x)⟩ is an ideal of R[x].
Example 2.72. In the ring R[x, y], let ⟨x, y⟩ denote the set of all polynomials that can be expressed as xf(x, y) + yg(x, y) for suitable polynomials f(x, y) and g(x, y). For example, the polynomial x + 2y + x^2 y + xy^3 is in ⟨x, y⟩ because it can be rewritten as x(1 + xy) + y(2 + xy^2). (Note that this rewrite is not unique: it can also be written as x(1 + xy + y^3) + 2y, but this will not be an issue.)

Exercise 2.72.1. Show that ⟨x, y⟩ is an ideal of R[x, y].

Exercise 2.72.2. More generally, given two arbitrary polynomials p(x, y) and q(x, y), let ⟨p(x, y), q(x, y)⟩ denote the set of all polynomials that can be expressed as p(x, y)f(x, y) + q(x, y)g(x, y) for suitable polynomials f(x, y) and g(x, y). Show that ⟨p(x, y), q(x, y)⟩ is an ideal of R[x, y].
Example 2.73. Fix an integer n ≥ 1. In the ring M_n(Z) (see Exercise 2.16.5), the subset M_n(2Z) consisting of all matrices all of whose entries are even is an ideal.

Exercise 2.73.1. Prove this.

Question 2.73.1. Given an arbitrary integer m, is the subset M_n(mZ) consisting of all matrices all of whose entries are a multiple of m an ideal of M_n(Z)?

Example 2.74. Let R be an arbitrary ring, and let I be an ideal of R. Fix an integer n ≥ 1. In M_n(R), let M_n(I) denote the subset of all matrices all of whose entries come from I.

Exercise 2.74.1. Prove that M_n(I) is an ideal of M_n(R).

Example 2.75. In the ring Z_(2), denote by 2_(2) the set of all fractions of the (reduced) form a/b where b is odd and a is even.

Question 2.75.1. Study Example 2.15 carefully. Is 2_(2) a proper subset of Z_(2)?

Exercise 2.75.1. Prove that 2_(2) is an ideal of Z_(2).

Example 2.76. Let R and S be rings, and let I_1 be an ideal of R and I_2 an ideal of S. Let I_1 × I_2 denote the set {(a, b) | a ∈ I_1, b ∈ I_2}.

Exercise 2.76.1. Prove that I_1 × I_2 is an ideal of R × S.

(See Exercise 2.129 ahead.)
Example 2.77. For simplicity, we will restrict ourselves in this example to commutative rings. First, just to point out terminology that we have already introduced in Example 2.71: by a multiple of r in a general commutative ring R, we mean an element of the set {ra | a ∈ R}. (This obviously generalizes the notion of multiple that we use in Z.) In Examples 2.70 and 2.71, we considered the set of all multiples of a given element of our ring (multiples of m in the case of Z, multiples of f(x) in the case of R[x]), and observed that these formed an ideal. In Example 2.72, we considered something more general: the set ⟨p(x, y), q(x, y)⟩ is the set of sums of multiples of p(x, y) and q(x, y). This process can be generalized even further. If a_1, . . . , a_n are elements of a commutative ring R, we denote by ⟨a_1, . . . , a_n⟩ the set of all elements of R that are expressible as a_1 r_1 + ··· + a_n r_n for suitable elements r_1, . . . , r_n in R. Thus, the elements of ⟨a_1, . . . , a_n⟩ are sums of multiples of the a_i. (As in Example 2.72, the r_i may not be uniquely determined, but this will not be an issue.)

Exercise 2.77.1. Show that ⟨a_1, . . . , a_n⟩ is an ideal of R.

The ideal ⟨a_1, . . . , a_n⟩ is known as the ideal generated by a_1, . . . , a_n. An ideal generated by a single element is known as a principal ideal. Thus, the ideal 2Z is a principal ideal in Z. (Of course, the ideal 2Z could just as easily have been denoted by ⟨2⟩.) See Exercise 2.130 ahead.

Exercise 2.77.2. Show that ⟨a_1, . . . , a_n⟩ is the smallest ideal containing a_1, . . . , a_n, in the sense that if J is any ideal of R that contains a_1, . . . , a_n, then ⟨a_1, . . . , a_n⟩ ⊆ J.

Question 2.77.1. Convince yourselves that ⟨1⟩ = R and ⟨0⟩ is just the zero ideal {0}.

Exercise 2.77.3. Suppose that R is a field, and let a be a nonzero element of R. Show that ⟨a⟩ = R. (Hint: play with the fact that a^{-1} exists in R and that ⟨a⟩ is an ideal.)

Exercise 2.77.4. Conclude that the only ideals in a field F are the set {0} and F.
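In Z, for instance, the ideal ⟨a, b⟩ is the set of all combinations ax + by, and these are exactly the multiples of gcd(a, b), since gcd(a, b) is itself such a combination (Chapter 1). A small Python sketch (mine, not the book's) to experiment with this:

    # Sketch (not from the text): in Z, the ideal <a, b> = {a*x + b*y}
    # coincides with the set of multiples of gcd(a, b).
    from math import gcd

    def combinations(a, b, bound=20):
        return {a * x + b * y
                for x in range(-bound, bound + 1)
                for y in range(-bound, bound + 1)}

    a, b = 12, 18
    g = gcd(a, b)                      # 6
    small = {n for n in combinations(a, b) if abs(n) <= 30}
    print(small == {k * g for k in range(-5, 6)})   # True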
2.5 Quotient Rings

We now come to a fundamental method of constructing a new ring from a given ring and an ideal in the ring, namely, the quotient ring construction. Let R be a ring (not necessarily commutative) and let I be an ideal in R. We define a relation ∼ on R by declaring a ∼ b if and only if a − b ∈ I.

It is immediate that ∼ is an equivalence relation: (i) certainly, for any a, a − a = 0 ∈ I; (ii) if a ∼ b, then by definition a − b ∈ I, but since I is an ideal, (−1)(a − b) = b − a ∈ I, so b ∼ a as well; (iii) finally, if a ∼ b and b ∼ c, then by definition, a − b ∈ I and b − c ∈ I, so again because I is an ideal, (a − b) + (b − c) = a − c ∈ I, so a ∼ c as well.

Let us denote the equivalence class of an element a as [a]. (Recall what this means: it is the set of all elements in R that are related to a under this equivalence relation.) Let us also denote by a + I the set of all elements of the ring of the form a + i as i varies in I. The set denoted a + I is called the coset of I with respect to a. We have the following:

Lemma 2.78. The equivalence class [a] is precisely the coset a + I.

Proof. Take b ∈ [a]. Then b ∼ a, so by definition, b − a ∈ I. Thus, b − a = i for some i ∈ I, or written differently, b = a + i. Thus, b ∈ a + I, and since b was arbitrary, we find [a] ⊆ a + I. Conversely, take any element b ∈ a + I. Then by definition of the set a + I, we find b = a + i for some i ∈ I. But this just means b − a ∈ I, that is, b ∼ a. Thus, b ∈ [a] and since b was arbitrary, we find a + I ⊆ [a]. This proves that the two sets are equal. □
Let us write R/I (read "R mod I") for the set of equivalence classes of R under the relation ∼ above. Thanks to Lemma 2.78 we know that the equivalence class of r ∈ R is the same as the coset r + I, so we will use the notation [r] and r + I interchangeably for the equivalence class of r. The key observation we make is that the set R/I can be endowed with two binary operations + (addition) and · (multiplication) by the following rather natural definitions:

Definition 2.79. [a] + [b] = [a + b] and [a] · [b] = [a · b] for all [a] and [b] in R/I. (In coset notation, this would read (a + I) + (b + I) = (a + b) + I, and (a + I)(b + I) = ab + I.) As always, if the context is clear, we will often omit the · sign and write [a][b] for [a] · [b].
Before proceeding any further, we need to settle the issue of whether these definitions make sense, in other words, whether these operations are well-defined. Observe that the definition of addition, for instance, depends on which representative we use for the equivalence classes. Now recall that if a′ ∼ a, then [a] = [a′]. Similarly, if b′ ∼ b, then [b] = [b′]. If we use a and b as representatives for the equivalence classes to which they belong, our definition of the sum of the two classes is the class to which a + b belongs. However, if we use a′ and b′ as representatives, our definition of the sum is the class to which a′ + b′ belongs. Can we be certain that the class to which a + b belongs is the same as the class to which a′ + b′ belongs? Indeed we can: suppose a′ ∼ a and b′ ∼ b. Then, by definition, a′ − a ∈ I and b′ − b ∈ I, so, I being closed under addition, (a′ + b′) − (a + b) = (a′ − a) + (b′ − b) is in I; that is, a′ + b′ ∼ a + b, so [a′ + b′] = [a + b]. Similarly, a′b′ − ab = a′(b′ − b) + (a′ − a)b, and both summands lie in I since I is an ideal; thus a′b′ − ab ∈ I, or put differently, [a′b′] = [ab], so multiplication is well-defined too.

Example 2.99. Consider the function f : Q[x] → Q[√2] that sends x to √2, so f sends p(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_k x^k to the element a_0 + a_1 √2 + a_2 (√2)^2 + ··· + a_k (√2)^k. Of course, this horrible expression simplifies into one of the form a + b√2 with a and b rational. (Use the fact that (√2)^2 = 2, (√2)^3 = 2√2, etc.)
Exercise 2.99.1. Prove that f is a ring homomorphism.
Exercise 2.99.2. Prove that f is surjective.
(Hint: Given rationals a and b, what is the image of a + bx?)
Let us determine the kernel of this homomorphism. Since x^2 goes to 2, x^2 − 2 is certainly in the kernel. Since the kernel is an ideal of Q[x], the set of multiples of x^2 − 2 (which is the principal ideal denoted ⟨x^2 − 2⟩; see Example 2.77) will also be in the kernel. We will show that there are no other elements in the kernel, that is, ker(f) = ⟨x^2 − 2⟩. To this end, let us
invoke polynomial long division as taught in high school (and which we will revisit in Exercise 2.131 at the end of this chapter). So, suppose we are given an arbitrary polynomial p(x) that is in ker(f). We wish to show that p(x) is a multiple of x^2 − 2. Dividing p(x) by x^2 − 2 using long division, we can write p(x) = q(x)(x^2 − 2) + r(x) for some quotient polynomial q(x) and some remainder r(x) that is at most of degree 1. We wish to show that r(x) is actually zero. Since r(x) is at most of degree 1, we may write it as a + bx for some a and b in Q. Since f is a ring homomorphism, f(p(x)) = f(q(x))f(x^2 − 2) + a + b√2. Now p(x) is in ker(f), so f(p(x)) = 0, and f(x^2 − 2) = (√2)^2 − 2 = 0, so 0 = q(√2) · 0 + a + b√2. Thus, we find a + b√2 = 0. But we have seen in Exercise 2.12.4 that this is impossible unless a = b = 0. Thus, r(x) must be zero, thereby showing that ker(f) = ⟨x^2 − 2⟩.
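An optional Python sketch (mine, not the book's), using the sympy library, lets you watch this argument in action: divide a polynomial p(x) by x^2 − 2 and compare p(√2) with r(√2):

    # Optional sketch (not from the text), using the sympy library:
    # the remainder of p(x) mod (x^2 - 2) carries all of the value p(sqrt(2)).
    import sympy as sp

    x = sp.symbols('x')
    p = x**4 + 3*x**3 - x + 5                 # an arbitrary test polynomial
    q, r = sp.div(p, x**2 - 2, x)             # p = q*(x^2 - 2) + r, deg r <= 1
    print(r)                                  # a linear polynomial a + b*x
    print(sp.simplify(p.subs(x, sp.sqrt(2)) - r.subs(x, sp.sqrt(2))))  # 0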
Question 2.99.1. After all, x goes to √2 under f, so why is x − √2 not an element of ker(f)?

Recall the function f̄ : R[x]/⟨x⟩ → R defined by f̄(p(x) + ⟨x⟩) = p(0). Suppose p(x) + ⟨x⟩ = q(x) + ⟨x⟩. Using q(x) as the representative, we would define f̄(p(x) + ⟨x⟩) = f̄(q(x) + ⟨x⟩) = q(0). Earlier, we had defined f̄(p(x) + ⟨x⟩) to be p(0): are these definitions the same? In other words, is p(0) = q(0)? The answer is yes! For, the fact that p(x) + ⟨x⟩ = q(x) + ⟨x⟩ means that p(x) − q(x) ∈ ⟨x⟩ (why?), or alternatively, p(x) − q(x) is a multiple of x. Hence, the constant term of p(x) − q(x), which is p(0) − q(0), must be zero, i.e., p(0) = q(0). It follows that f̄ is indeed well-defined.
Now that we know f̄ is well-defined, it is easy to check that f̄ is a ring homomorphism (do it!). What is the kernel of f̄? It consists of all equivalence classes p(x) + ⟨x⟩ such that the constant term p(0) is zero. But to say that p(0) is zero is to say that p(x) is divisible by x (why?), or in other words, that p(x) is already in ⟨x⟩. Thus, the kernel of f̄ consists of just the equivalence class ⟨x⟩, but this is the zero element in the ring R[x]/⟨x⟩. Thus, the kernel of f̄ is just the zero ideal, so by Lemma 2.102, f̄ is injective. Moreover, f̄ is clearly surjective, since every real number r arises as the constant term of some polynomial in R[x] (for example, the polynomial r + 0x + 0x^2 + ···).

The function f̄ quantifies why R[x]/⟨x⟩ and R are "really" equal to each other. There are two ingredients to this: the function f̄, being injective and surjective, provides a one-to-one correspondence between R[x]/⟨x⟩ and R as sets, and the fact that f̄ is a ring homomorphism tells us that the addition and multiplication in R is essentially the same as that in R[x]/⟨x⟩. Moreover, since f̄ has kernel zero, we do not even have to divide out by any ideal in R[x]/⟨x⟩ to realize this sameness of ring operations. Thus, R[x]/⟨x⟩ and R are really the same rings, even though they look different. We say that R[x]/⟨x⟩ is isomorphic to R via the map f̄.
Definition 2.105. Let f : R → S be a ring homomorphism. If f is both injective and surjective, then f is said to be an isomorphism between R and S. Two rings R and S are said to be isomorphic (written R ≅ S) if there is some function f : R → S that is an isomorphism between R and S.
Let us look at some examples of ring isomorphisms:
Example 2.106. Let us revisit Example 2.38. Denote the function that sends r ∈ R to diag(r) by f.

Exercise 2.106.1. Check that f is bijective as a function from R to the subring of M_n(R) consisting of matrices of the form diag(r).

Exercise 2.106.2. Also, check that f(r + s) = f(r) + f(s), and f(rs) = f(r)f(s).

Moreover, f(1) is clearly the identity matrix. Thus, the function f is indeed a ring homomorphism from R to the subring of M_n(R) consisting of matrices of the form diag(r) that is both injective and surjective, or described alternatively, f is an isomorphism between these two rings. Intuitively, these two rings are "the same," even though one appears as a set of ordinary numbers, while the other appears in the form of special matrices.
Example 2.107. Define a function f̄ from the quotient ring Q[x]/⟨x^2 − 2⟩ to Q[√2] by the rule f̄(p(x) + ⟨x^2 − 2⟩) = p(√2).

Exercise 2.107.1. Show that f̄ is well defined. (Hint: If p(x) + ⟨x^2 − 2⟩ = q(x) + ⟨x^2 − 2⟩, then p(x) − q(x) ∈ ⟨x^2 − 2⟩, so p(x) − q(x) = g(x)(x^2 − 2) for some polynomial g(x) ∈ Q[x]. What happens if you set x = √2 in this?)

Exercise 2.107.2. Show that f̄ is a ring homomorphism.

Exercise 2.107.3. Show that f̄ is surjective.

Exercise 2.107.4. Show that f̄ is injective. (Hint: Recall that we have proved in Example 2.99 that p(√2) = 0 if and only if p(x) ∈ ⟨x^2 − 2⟩.)

Intuitively, the two rings are "the same," even though one appears as a quotient ring of polynomials, while the other appears as a subring of the reals.
Example 2.108. The following examples show that well-known fields can show up as subrings of matrices!

Exercise 2.108.1. Let S denote the subset of M_2(Q) consisting of all matrices of the form

    [ a  2b ]
    [ b   a ]

where a and b are arbitrary rational numbers.

1. Show that S is a subring of M_2(Q).

2. Prove that the map f : Q[√2] → S that sends a + b√2 to the matrix above is an isomorphism between Q[√2] and S.

Exercise 2.108.2. Let S denote the subset of M_2(R) consisting of all matrices of the form

    [ a  −b ]
    [ b   a ]

where a and b are arbitrary real numbers.

1. Show that S is a subring of M_2(R).

2. Prove that the map f : C → S that sends a + bi to the matrix above is an isomorphism between C and S.
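As a quick numerical check on Exercise 2.108.2, here is a Python sketch (mine, not the book's) comparing f(zw) with f(z)f(w):

    # Sketch (not from the text): the map a+bi -> [[a,-b],[b,a]] turns
    # complex multiplication into matrix multiplication.
    def f(z):
        return [[z.real, -z.imag], [z.imag, z.real]]

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    z, w = 1 + 2j, 3 - 1j
    print(f(z * w))             # [[5.0, -5.0], [5.0, 5.0]]
    print(mat_mul(f(z), f(w)))  # the same matrix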
The two examples in the exercises above are referred to as the regular representation of Q[√2] in M_2(Q) and of C in M_2(R) (respectively). More generally, let K/F be any field extension (see Definition 2.60). Then K can be considered as a vector space over F (we will study this in Example 3.7 in Chapter 3 ahead). When the dimension of K over F is finite, say n, then one can always find a subring of M_n(F) that is isomorphic to K: this is considered in Exercise 3.105 in Chapter 3.
Example 2.109. It is not necessary that the rings R and S in the definition of a ring isomorphism be different rings. A ring isomorphism f : R → R is to be thought of as a one-to-one onto map from R to R that preserves the ring structure. (Such a map is also known as an automorphism of R.) Here are some examples:

Exercise 2.109.1. Prove that the map f : Q[√2] → Q[√2] that sends a + b√2 to a − b√2 is a ring isomorphism. What are the elements of Q[√2] on which f acts as the identity map?

Exercise 2.109.2. Let F be a field, and let a be a nonzero element of F. Let b be an arbitrary element of F. Prove that the map f : F[x] → F[x] that sends x to ax + b and, more generally, a polynomial p_0 + p_1 x + ··· + p_n x^n to the polynomial p_0 + p_1(ax + b) + ··· + p_n(ax + b)^n is an automorphism of F[x].

Exercise 2.109.3. Prove that the complex conjugation map f : C → C that sends a + bi (for given real numbers a and b) to the complex number a − bi is an automorphism of C. Determine the set of complex numbers on which f acts as the identity map.
We now come to a fundamental result that connects homomorphisms and isomorphisms. To motivate this, compare Examples 2.96 and 2.104. In the first example, we defined a function f : R[x] → R that sends p(x) to p(0) and observed that it was a ring homomorphism whose image was all of R and whose kernel was the ideal ⟨x⟩, while in the second example, we defined a function f̄ : R[x]/⟨x⟩ → R by f̄(p(x) + ⟨x⟩) = p(0), and observed that it was well-defined and that it gave us an isomorphism between R[x]/⟨x⟩ and R. Observe the close connection between how the functions f and f̄ are defined in the two examples, and observe that the ring R[x]/⟨x⟩ is obtained by modding R[x] by the kernel of f. Now as another instance, compare Examples 2.99 and 2.107. Here too, in the first example, we defined a function f : Q[x] → Q[√2] that sends p(x) to p(√2), whose kernel was ⟨x^2 − 2⟩, while in the second example, we defined a function f̄ : Q[x]/⟨x^2 − 2⟩ → Q[√2] by f̄(p(x) + ⟨x^2 − 2⟩) = p(√2), and observed that it gave us an isomorphism.
It follows that f̄(r + ker(f)) = f̄(s + ker(f)), i.e., f̄ is well-defined. Now let us go through the three ingredients in Definition 2.88 and check that f̄ is a ring homomorphism. We have f̄((r + ker(f)) + (s + ker(f))) = f̄((r + s) + ker(f)) = f(r + s) = f(r) + f(s) = f̄(r + ker(f)) + f̄(s + ker(f)).

2.7 Further Exercises

Exercise. Let R be a ring, and let R* denote the set of invertible elements of R. Prove that R* is closed under the multiplication of R.

Exercise 2.117. Let m be a rational number such that √m is not rational. Show that a + b√m = 0 (for a and b in Q) if and only if a = b = 0. Show that Q[√m] is a field.
Exercise 2.118. The following concerns the ring Q[√2, √3] of Example 2.34, and is designed to show that if a, b, c, and d are rational numbers, then a + b√2 + c√3 + d√6 = 0 if and only if a = b = c = d = 0.

1. Show first that √3 ∉ Q[√2]. (Hint: if √3 ∈ Q[√2], write √3 = x + y√2 for suitable rationals x and y. Square both sides and arrive at a contradiction. You will need to invoke a fact about Q[√2]; what is it?)

2. Now suppose a + b√2 + c√3 + d√6 = 0. Rewrite this as (a + b√2) + √3(c + d√2) = 0.

3. Prove that c + d√2 must be zero. (Hint: if c + d√2 ≠ 0, then √3 = −(a + b√2)/(c + d√2). Why is this last equality a contradiction?)

4. Conclude that this forces a = b = c = d = 0.

5. Observe that if a = b = c = d = 0 then a + b√2 + c√3 + d√6 = 0 trivially. This proves the assertion stated at the beginning.
Exercise 2.119. We will prove in this exercise that Q[√2, √3] is actually a field.

1. You know that if a and b are rational numbers, then (a + b√2)(a − b√2) is also rational. (Why?) Similarly, if c and d are rational numbers, then (c + d√3)(c − d√3) is also rational. Show likewise that the product

(a + b√2 + c√3 + d√6)(a + b√2 − c√3 − d√6)(a − b√2 + c√3 − d√6)(a − b√2 − c√3 + d√6)

is also rational. (This just involves multiplying out all the terms above; do it! However, you can save yourselves a lot of work by multiplying the first two terms together using the formula (x + y)(x − y) = x^2 − y^2, and then multiplying the remaining two terms together, and looking out for patterns.)

2. Now show using part (1) above that Q[√2, √3] is a field, that is, exhibit the inverse of a nonzero element a + b√2 + c√3 + d√6 in Q[√2, √3]. (Hint: the inverse will involve the terms (a + b√2 − c√3 − d√6), (a − b√2 + c√3 − d√6), and (a − b√2 − c√3 + d√6). Recall how you would find the inverse of x + y√2 in Q[√2]: you would multiply the numerator and denominator of 1/(x + y√2) by x − y√2, taking advantage of the fact that (x + y√2)(x − y√2) is rational. What ideas do you get from part (1) above?)
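If you want to confirm part (1) symbolically before grinding it out by hand, a sympy sketch along these lines (mine, not the book's) works:

    # Sketch (not from the text): the four-fold product in Exercise 2.119(1)
    # simplifies to an expression in a, b, c, d with no radicals remaining.
    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    r2, r3, r6 = sp.sqrt(2), sp.sqrt(3), sp.sqrt(6)
    p = ((a + b*r2 + c*r3 + d*r6) * (a + b*r2 - c*r3 - d*r6)
         * (a - b*r2 + c*r3 - d*r6) * (a - b*r2 - c*r3 + d*r6))
    print(sp.simplify(sp.expand(p)))   # a polynomial in a, b, c, d; no sqrt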
Exercise 2.120. Let R be an integral domain. Show that an element in R[x] is invertible if and only if it is the constant polynomial r (= r + 0x + 0x^2 + ···) for some invertible element r ∈ R. In particular, if R is a field, then a polynomial in R[x] is invertible if and only if it is a nonzero element of R. (See the notes on page 88 for a discussion on polynomials with coefficients from an arbitrary ring.)

By contrast, show that the (nonconstant) polynomial 1 + [2]_4 x in the polynomial ring (Z/4Z)[x] is invertible, by explicitly finding the inverse of 1 + [2]_4 x. Repeat the exercise by finding the inverse of 1 + [2]_8 x in the polynomial ring (Z/8Z)[x]. (Hint: Think in terms of the usual Binomial Series for 1/(1 + t) from your Calculus courses. Do not worry about convergence issues. Instead, think about what information you would glean from this series if, due to some miracle, t^n = 0 for some positive integer n.)
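Here is a Python sketch (mine) for checking a candidate inverse by multiplying polynomials with coefficients reduced mod n; the candidates below come from truncating the binomial series as the hint suggests:

    # Sketch (not from the text): multiply polynomials with coefficients in Z/nZ
    # to check a candidate inverse for 1 + 2x.
    def poly_mul_mod(f, g, n):
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % n
        return out

    # the series 1 - 2x + 4x^2 - 8x^3 + ... terminates mod 4 and mod 8,
    # because the powers of 2 eventually vanish there
    print(poly_mul_mod([1, 2], [1, -2 % 4], 4))        # [1, 0, 0]
    print(poly_mul_mod([1, 2], [1, -2 % 8, 4], 8))     # [1, 0, 0, 0]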
Exercise 2.121. We will revisit some familiar identities from high school in the context of rings! Let R be a ring:

1. Show that a^2 − b^2 = (a − b)(a + b) for all a and b in R if and only if R is commutative.

2. Show that (a + b)^2 = a^2 + 2ab + b^2 for all a and b in R if and only if R is commutative.

3. More generally, if R is a commutative ring, prove that the Binomial Theorem holds in R: for all a and b in R and for all positive integers n,

(a + b)^n = (n choose 0) a^n + (n choose 1) a^{n-1} b + (n choose 2) a^{n-2} b^2 + ··· + (n choose n-1) a b^{n-1} + (n choose n) b^n.
Exercise 2.122. An element a in a ring is said to be nilpotent if a^n = 0 for some positive integer n.

1. Show that if a is nilpotent, then 1 − a and 1 + a are both invertible. (Hint: Just as in Exercise 2.120 above, think in terms of the Binomial Series for 1/(1 − t) and 1/(1 + t). Do not worry about convergence, but ask yourself what you can learn from the series if t^n = 0 for some positive integer n.)

2. Let R be a commutative ring. Show that the set of all nilpotent elements in R forms an ideal in R. (Hint: Suppose that a^n = 0 and b^m = 0. What can you say about (a + b)^{n+m-1}, given your knowledge of the Binomial Theorem for commutative rings from Exercise 2.121 above?)
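For part (1), the series hint suggests that the inverse of 1 − a should be the finite sum 1 + a + a^2 + ··· + a^{n-1}. A Python sketch (mine) checking this in Z/16Z, where a = 2 satisfies a^4 = 0:

    # Sketch (not from the text): if a^n = 0, then (1 - a) has inverse
    # 1 + a + a^2 + ... + a^(n-1); checked here in Z/16Z with a = 2.
    n, mod = 4, 16
    a = 2
    assert pow(a, n, mod) == 0            # a is nilpotent in Z/16Z
    inv = sum(pow(a, k, mod) for k in range(n)) % mod   # 1 + 2 + 4 + 8 = 15
    print(((1 - a) * inv) % mod)          # 1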
Exercise 2.123. Let S denote the set of all functions f : R → R. Given f and g in S, define two binary operations + and · on S by the rules

(f + g)(x) = f(x) + g(x)
(f · g)(x) = f(x)g(x)

(These are referred to, respectively, as the pointwise addition and multiplication of functions.)

1. Convince yourselves that (S, +, ·) is a ring. What is the 0 of S? What is the 1 of S?

2. Show that S is not an integral domain. (Hint: Play with functions like f(x) = x + |x| or g(x) = x − |x|.)

3. More generally, show that every nonzero f ∈ S is either a unit or a zero-divisor by showing:

(a) f is a unit if and only if f(x) ≠ 0 for all x ∈ R.

(b) f is a zero-divisor if and only if f(x) = 0 for at least one x ∈ R.

4. Let s : R → S be the function that sends the real number r to the function s_r defined by s_r(x) = r for all x ∈ R. Show that s is an injective ring homomorphism from R to S. The image of s in S is therefore a subring of S that is isomorphic to R. It is known as the set of constant functions.
Exercise 2.124. Let R be a ring.

Definition 2.124.1. The center of R, written Z(R), is defined to be the set {r ∈ R | rx = xr for all x ∈ R}.

1. Show that Z(R) is a subring of R.

2. If R is commutative, what is Z(R)?

3. Determine Z(M_2(Z)). (Hint: Invoke the fact that a matrix in the center must commute with the four matrices e_{i,j}, where e_{i,j} is as defined in Exercise 2.16.4.)
Exercise 2.125. Let R be a ring.

1. If I and J are ideals of R, show that I ∩ J is an ideal of R. (Is I ∪ J an ideal of R?)

2. If S and T are subrings of R, show that S ∩ T is a subring of R. (Is S ∪ T a subring of R?)

3. If R is a field, and if S and T are subfields of R, show that S ∩ T is a subfield of R.
Exercise 2.126. Here is an example of a ring in which elements do not factor uniquely into a product of primes! Consider the subring of C generated by Z and √−5, namely, Z[√−5]. Define a function N : Z[√−5] → Z as follows: N(a + b√−5) = a^2 + 5b^2. (Notice that a^2 + 5b^2 is just (a + b√−5)(a − b√−5).)

1. Show that N is multiplicative, that is, N(xy) = N(x)N(y) for any two elements x and y of Z[√−5].

2. Show that if x is a unit in Z[√−5], then N(x) must be 1.

3. Show that the only elements x of Z[√−5] with N(x) = 1 are 1 and −1.

4. Use parts 2 and 3 above to show that if x is a unit in Z[√−5], then x can only be ±1.

5. If R is a commutative ring, an irreducible in R is a nonzero element x such that if x = bc for two elements b and c, then either b or c must be a unit. (It turns out that this is the correct generalization of the concept of primes that is needed to study unique factorization in arbitrary rings.) Also, just as in Z, we say an element b in an arbitrary commutative ring R divides an element a (or is a divisor of a) if there exists an element c in R such that a = bc. Using part 4, show that if x is an irreducible element in Z[√−5], then so is −x.

10. Study the various factors of N(1 + √−5) and of N(1 − √−5) and show that both 1 + √−5 and 1 − √−5 are irreducible.

11. Two irreducibles x and y in a commutative ring R are said to be associates if x = yu for some unit u. Part 4 shows that in the ring Z[√−5], two elements x and y are associates if and only if x = ±y. Now use the fact that every element in Z[√−5] is of the form a + b√−5 to show that neither 2 nor 3 is an associate of either 1 + √−5 or 1 − √−5.

12. A commutative ring R is said to possess unique prime factorization if every element a ∈ R that is not a unit factors into a product of irreducibles, and if a = x_1 x_2 ··· x_s and a = y_1 y_2 ··· y_t are two factorizations of a into irreducibles, then s must equal t, and after relabeling if necessary, each x_i must be an associate of the corresponding y_i. (Again, it turns out that this is the correct generalization of the concept of unique prime factorization in the integers to arbitrary commutative rings.) Prove that Z[√−5] does not possess unique prime factorization. (Consider the two factorizations 6 = 2 · 3 = (1 + √−5)(1 − √−5).)

Exercise 2.131. This exercise revisits polynomial long division: given polynomials p(x) and g(x) ≠ 0 with coefficients in a field F, there exist unique polynomials q(x) and r(x) such that p(x) = q(x)g(x) + r(x), and r(x) = 0 or deg(r(x)) < deg(g(x)).

1. For uniqueness: suppose p(x) = q(x)g(x) + r(x) = q′(x)g(x) + r′(x), where each of r(x) and r′(x) is either 0 or of degree less than deg(g(x)). Then (q(x) − q′(x))g(x) = r′(x) − r(x). Observe that if r(x) ≠ r′(x) then the degree of the right side must be less than the degree of the left side, and hence conclude that r(x) = r′(x). This establishes the uniqueness of q(x) and r(x).

2. Now for the existence of q(x) and r(x). First, let S be the set of all polynomials of the form p(x) − t(x)g(x) as t(x) ranges through F[x], and note that S is nonempty.

3. If S contains 0, show that we have proved the existence of q(x) and r(x) with the required properties.

4. So assume from now on that S does not contain 0.
Prove that each of the following maps from Q[√2, √3] to itself is a ring isomorphism:

1. The map that sends a + b√2 + c√3 + d√6 to a − b√2 + c√3 − d√6.

2. The map that sends a + b√2 + c√3 + d√6 to a + b√2 − c√3 − d√6.

3. The map that sends a + b√2 + c√3 + d√6 to a − b√2 − c√3 + d√6.

(Of course, the identity map that sends a + b√2 + c√3 + d√6 to a + b√2 + c√3 + d√6 is also a ring isomorphism. It can be shown that these four are all the ring isomorphisms from Q[√2, √3] to itself.)
Notes
Remarks on Example 2.10 Every nonzero element in Q has a multiplicative inverse, that is, given any q ∈ Q with q ≠ 0, we can find a rational number q′ such that qq′ = 1. The same cannot be said for the integers: not every nonzero integer has a multiplicative inverse within the integers. For example, there is no integer a such that 2a = 1, so 2 does not have a multiplicative inverse.
Remarks on Example 2.12 The sum and product of any two elements a + b√2 and c + d√2 of Q[√2] are (a + c) + (b + d)√2 and (ac + 2bd) + (ad + bc)√2 respectively, and these are again elements of Q[√2]. (We say that Q[√2] is closed under addition and multiplication.) Now suppose you were trying to prove that, say, addition in Q[√2] is associative, that is, for all u, v, and w in Q[√2], (u + v) + w = u + (v + w). Notice that in addition to being in Q[√2], u, v, and w are also real numbers. Since associativity holds in the reals, we find upon viewing u, v, and w as real numbers that (u + v) + w = u + (v + w). Now viewing u, v, and w in this equation back again as elements of Q[√2], we find that associativity holds in Q[√2].

To show that a + b√2 = 0 iff a = 0 and b = 0, proceed as follows: If b is not zero, a + b√2 = 0 yields √2 = −a/b. Since −a/b is a rational number, this contradicts Chapter 1, Exercise 1.42, so b must be zero. But if b = 0, then a + b√2 = a, so a must be zero as well.
i=0
f
i
x
i
and g =
m
j=0
g
j
x
j
and study fg and gf.) If R is not commutative, R[x] will also
not be commutative. To see this last assertion, suppose a and b in R are such that
ab ,= ba. Then viewing a and b as constant polynomials in R[x], we nd that we
get two dierent products of the polynomials a and b depending on the order in
which we multiply them!
Here is something strange that can happen with polynomials with coecients
in an arbitrary ring R. First, the degree and highest coecient of polynomials in
R[x] (where R is arbitrary) are dened exactly as for polynomials with coecients
in the reals. Now over R[x], if f(x) and g(x) are two nonzero polynomials, then
2.7. FURTHER EXERCISES 89
deg(f(x)g(x)) = deg(f(x)) + deg(g(x). But for an arbitrary ring R, the degree of
f(x)g(x) can be less than deg(f(x)) + deg(g(x))!
To see why this is, suppose f(x) = f
n
x
n
+ lower-degree terms (with f
n
,= 0),
and suppose g(x) = g
m
x
m
+lower-degree terms (with g
m
,= 0). On multiplying out
f(x) and g(x), the highest power of x that will show up in the product is x
n+m
,
and its coecient will be f
n
g
m
. If we are working in R, then f
n
,= 0 and g
m
,= 0
will force f
n
g
m
to be nonzero, so the degree of f(x)g(x) will be exactly n+m. But
over arbitrary rings, it is quite possible for f
n
g
m
to be zero even though f
n
and g
m
are themselves nonzero. (You have already seen examples of this in matrix rings.
Elements a and b in a ring R such that a ,= 0 and b ,= 0 but ab = 0 will be referred
to later in the chapter as zero-divisors.) When this happens, the highest nonzero
term in f(x)g(x) will be something lower than the x
n+m
term, so the degree of
f(x)g(x) will be less than n +m!
Clearly, this phenomenon will not occur if the coecient ring R does not have
any zero-divisors. As will be explained further along in the chapter, elds do
not have any zero-divisors (i.e., they are integral domains.) Hence if F is a eld
and f(x) and g(x) are two nonzero polynomials in F[x], then deg(f(x)g(x)) =
deg(f(x)) + deg(g(x)). (In particular, this shows that if F is any eld, F[x] also
does not have zero-divisorswhy?)
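Here is a concrete instance of this degree drop (my own example, in Python): in (Z/4Z)[x] the square of the degree-1 polynomial 1 + 2x has degree 0, because the would-be leading coefficient 2 · 2 vanishes mod 4.

    # Sketch (not from the text): degree can drop when the coefficient ring
    # has zero-divisors; here 2*2 = 0 in Z/4Z.
    def poly_mul_mod(f, g, n):
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % n
        return out

    print(poly_mul_mod([1, 2], [1, 2], 4))   # [1, 0, 0]: (1+2x)^2 = 1 in (Z/4Z)[x]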
Remarks on Example 2.22 The additive identity is (0, 0) and the multiplicative identity is (1, 1). What is the product of (1, 0) and (0, 1)? Of (2, 0) and (0, 2)?
Remarks on some properties of rings deducible from the axioms Here is a hint for some of these properties listed in Remark 2.24:

1. Uniqueness of additive identity: Suppose 0 and 0′ are both additive identities. Then 0 + 0′ equals 0 (viewing 0′ as an identity) and also equals 0′ (viewing 0 as an identity), so 0 = 0′.

In general, S ∪ {a} will not be a subring of R, since this new set may not be closed under addition and multiplication. (In our example, the square of 1 + √2, which is 3 + 2√2, is not in Q ∪ {1 + √2}, and the sum of 2 and 1 + √2, which is 3 + √2, is not in Q ∪ {1 + √2} either.)
Here is a related question: if two expressions s_0 + s_1 a + s_2 a^2 + ··· + s_n a^n and s′_0 + s′_1 a + s′_2 a^2 + ··· + s′_m a^m are equal (as elements of R), can you conclude that n = m and s_i = s′_i for i = 0, . . . , n? (Hint: See the examples below.)
Now let us consider some examples:
Example 2.141. What, according to our definition above, is the subring of the reals generated by Q and √2? It is the set of all "polynomial expressions" in √2 with coefficients in Q, that is, the set of all expressions of the form q_0 + q_1 √2 + q_2 (√2)^2 + ··· + q_n (√2)^n. Now let us look at these expressions more closely. Since (√2)^2 = 2, q_2 (√2)^2 is just 2q_2, q_4 (√2)^4 is just 4q_4, etc. Similarly, q_3 (√2)^3 is just 2q_3 √2, q_5 (√2)^5 is just 4q_5 √2, etc. Collecting terms, every such expression can be rewritten as a + b√2 for suitable rational numbers a and b. (For example, 1 + 2√2 + (1/2)(√2)^2 + (1/4)(√2)^3 can be rewritten as 2 + (5/2)√2.) Thus, the subring of the reals generated by Q and √2 is precisely the set of all real numbers of the form a + b√2 with a and b rational. It is for this reason that we denoted this ring Q[√2].

Example 2.142. Similarly, the subring of R generated by Z and √2 is the set of all real numbers of the form a + b√2, where a and b are integers. Hence the notation Z[√2] in Example 2.31.
Example 2.143. Using the fact that i^2 = −1, show that the subring of C generated by Q and i is the set of all complex numbers of the form a + bi, where a and b are rational numbers. This explains the notation Q[i] for the ring in Example 2.14.

Example 2.144. Similarly, the subring of Q[i] generated by Z and i is the set of all complex numbers of the form a + bi, where a and b are integers. Hence the notation Z[i] in Example 2.32.

Example 2.145. Show that the subring of Q generated by Z and 1/2 is the set of all rational numbers that have the property that, when written in the reduced form a/b with gcd(a, b) = 1, the denominator b is a power of 2. This explains the notation Z[1/2] in Example 2.33.

Example 2.146. Prove that the subring of R generated by Q[√2] and √3 is precisely the ring of Example 2.34. Thus, this ring should be denoted Q[√2][√3]. We will often avoid using the second pair of brackets and simply refer to this ring as Q[√2, √3].

Here is a quick exercise: In Lemma 2.140, suppose a is actually in S. Can you prove that the ring generated by S and a is just S?
Remarks on Definition 2.46 Most textbooks define a field to be a commutative ring in which every nonzero a is invertible. In other words, the extra condition that we have imposed, namely that the ring in question first be an integral domain, is omitted by most textbooks. This is because this extra condition is not really required: one can show easily that any commutative ring in which every nonzero element a is invertible must necessarily be an integral domain. (If there were to exist a pair of nonzero elements a and b such that ab = 0, then multiplying both sides by a^{-1}, which exists by hypothesis, we would find b = 0, a contradiction. Hence there can be no pair of nonzero elements that multiply out to zero.) The reason we have chosen to define a field as an integral domain in which every nonzero element is invertible is to highlight the hierarchical nature of the objects that we have been considering: rings are fairly general objects, commutative rings are special rings that are nicer to deal with, integral domains are special commutative rings that are even nicer, and finally, fields are special integral domains that are nicest of all!
Chapter 3

Vector Spaces

3.1 Vector Spaces: Definition and Examples

Recall from elementary linear algebra the notation R^2 for 2-dimensional xy space and R^3 for 3-dimensional xyz space. A vector in R^2 (respectively R^3) is an arrow with its base at the origin and its tip at some point in R^2 (respectively R^3). If v and w are vectors, then we add v and w using the parallelogram law. We know that this process of addition is commutative, that is, v + w = w + v for all vectors v and w. Vector addition is also associative, that is, v + (w + u) = (v + w) + u for all vectors v, w, and u. The vector whose base and tip are at the origin is denoted 0 (suggestively), and satisfies v + 0 = 0 + v = v for all vectors v. Finally, for every vector v, the vector we get by inverting v about the origin is denoted −v (also suggestively), and satisfies v + (−v) = (−v) + v = 0.

Focusing just on R^2 for convenience, let us stop thinking of R^2 as a geometric object. Instead, since every point of R^2 corresponds to a vector whose tip is at the given point, let us consider R^2 as a set consisting of abstract objects called vectors. This set has a binary operation defined on it: addition, where v + w is defined as the vector we get by temporarily reverting to the geometric interpretation of R^2 as a plane and considering
the vector obtained as the diagonal of the parallelogram formed by v and w. What do you notice about this set of vectors with this binary operation? The binary operation satisfies all the axioms for an abelian group! Thus, in addition to being a geometric object (the plane), R^2, when considered as a set with a binary operation, has an algebraic structure: it is an abelian group!

But there is more. Let us go back to the interpretation of R^2 as 2-dimensional xy space, and let us recall the notion of scalar multiplication. A scalar is any real number, and given a scalar r and a vector v, we multiply r and v according to the following definition: if r ≥ 0, then r · v is the vector in the same direction as v but whose length is r times the length of v, and if r < 0, then r · v is the vector in the opposite direction as v but whose length is |r| times the length of v. What are the properties of scalar multiplication? If r and s are any two scalars, and if v and w are any two vectors, we have the following: r · (v + w) = r · v + r · w, (r + s) · v = r · v + s · v, (rs) · v = r · (s · v), and 1 · v = v.
Observe that the set of scalars, namely the real numbers, is a field. Now, let us attempt to generalize all this. In the case of R^2 above, we have seen that the geometric interpretation of R^2 as 2-dimensional xy space furnishes us with the notion of vector addition and scalar multiplication, but once these definitions have been furnished, R^2 seems to have an algebraic life of its own. For instance, (R^2, +) is an abelian group, while scalar multiplication has the (algebraic) properties listed above. Could similar sets of objects called vectors and scalars not arise in different circumstances, with the same properties as the ones listed above, but with the vector addition and scalar multiplication perhaps defined by some process other than a geometric one? The answer is yes, and in fact, they arise in vastly different situations. As with the other concepts that we have seen (groups, rings, fields, etc.), it is worth isolating this phenomenon and studying it in its own right.
Definition 3.1. Let F be a field. A vector space over F (also called an F-vector space) is an abelian group V together with a function F × V → V called scalar multiplication and denoted · such that for all r and s in F and v and w in V,

1. r · (v + w) = r · v + r · w,

2. (r + s) · v = r · v + s · v,

3. (rs) · v = r · (s · v), and

4. 1 · v = v.

The elements of V are called vectors and the elements of F are called scalars.

Thus, R^2 and R^3 are both vector spaces over R. Let us look at several examples of vector spaces that arise from other than geometric considerations:
Example 3.2. We have looked at $\mathbb{R}^2$ and $\mathbb{R}^3$; why not generalize these
and consider $\mathbb{R}^4$, $\mathbb{R}^5$, etc.? These would of course correspond to higher-dimensional
worlds. It is certainly hard to visualize such spaces, but there
is no problem considering them in a purely algebraic manner. Recall that
every vector in $\mathbb{R}^2$ can be described uniquely by the pair $(a, b)$, consisting
of the $x$ and $y$ components of the vector. (Uniquely means that the
vector $(a, b)$ equals the vector $(a', b')$ if and only if $a = a'$ and $b = b'$.)
Similarly, every vector in $\mathbb{R}^3$ can be described uniquely by the triple $(a, b, c)$,
consisting of the $x$, $y$, and $z$ components of the vector. Thus, $\mathbb{R}^2$ and $\mathbb{R}^3$
can be described respectively as the set of all pairs $(a, b)$ and the set of all
triples $(a, b, c)$, where $a$, $b$, and $c$ are arbitrary real numbers. Proceeding
analogously, for any positive integer $n$, we will let $\mathbb{R}^n$ denote the set of $n$-tuples
$(a_1, a_2, \dots, a_n)$, where the $a_i$ are arbitrary real numbers. (As with
$\mathbb{R}^2$ and $\mathbb{R}^3$, the understanding here is that two $n$-tuples $(a_1, a_2, \dots, a_n)$ and
$(a'_1, a'_2, \dots, a'_n)$ are equal if and only if their respective components are equal,
that is, $a_1 = a'_1$, $a_2 = a'_2$, \dots, and $a_n = a'_n$.) These $n$-tuples will be our
vectors; how should we add them? Recall that in $\mathbb{R}^2$ we add the vectors
$(a, b)$ and $(a', b')$ by adding $a$ and $a'$ together and $b$ and $b'$ together; that is,
the sum of $(a, b)$ and $(a', b')$ is $(a + a', b + b')$.
We will do the same with $\mathbb{R}^n$: we will decree that
\[
(a_1, a_2, \dots, a_n) + (a'_1, a'_2, \dots, a'_n) = (a_1 + a'_1, a_2 + a'_2, \dots, a_n + a'_n).
\]

Exercise 3.2.2. Check that with this definition of addition,
$(\mathbb{R}^n, +)$ is an abelian group.
What should our scalars be? Just as in $\mathbb{R}^2$ and $\mathbb{R}^3$, let us take our scalars
to be the field $\mathbb{R}$. How about scalar multiplication? In $\mathbb{R}^2$, the product
of the scalar $r$ and the vector $(a, b)$ is $(ra, rb)$; that is, we multiply each
component of the vector $(a, b)$ by the real number $r$. (Is that so? Check!)
We will multiply scalars and vectors in $\mathbb{R}^n$ in the same way: we will decree
that the product of the real number $r$ and the $n$-tuple $(a_1, a_2, \dots, a_n)$ is
$(ra_1, ra_2, \dots, ra_n)$.

Exercise 3.2.3. Check that this definition satisfies the axioms of
scalar multiplication in Definition 3.1.

Thus, $\mathbb{R}^n$ is a vector space over $\mathbb{R}$.
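As a sketch of how the verification in Exercise 3.2.3 goes, here is the first axiom; it reduces to the distributive law of $\mathbb{R}$, applied in each component:
\[
r \cdot \big((a_1, \dots, a_n) + (b_1, \dots, b_n)\big) = (r(a_1 + b_1), \dots, r(a_n + b_n)) = (ra_1 + rb_1, \dots, ra_n + rb_n) = r \cdot (a_1, \dots, a_n) + r \cdot (b_1, \dots, b_n).
\]
The remaining three axioms reduce in the same way to field properties of $\mathbb{R}$.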
Example 3.3. Now, why restrict the examples above to $n$-tuples of real numbers?
For any field $F$, let $F^n$ stand for the set of $n$-tuples $(a_1, a_2, \dots, a_n)$, where
the $a_i$ are arbitrary elements of $F$. Add two such $n$-tuples componentwise,
that is, define addition via the rule $(a_1, a_2, \dots, a_n) + (a'_1, a'_2, \dots, a'_n) = (a_1 + a'_1, a_2 + a'_2, \dots, a_n + a'_n)$.
Take the field $F$ to be the field of scalars, and
define scalar multiplication just as in $\mathbb{R}^n$: given an arbitrary $f \in F$ and
an arbitrary $n$-tuple $(a_1, a_2, \dots, a_n)$, define their scalar product to be the
$n$-tuple $(fa_1, fa_2, \dots, fa_n)$.

Exercise 3.3.1. Check that these definitions of vector addition
and scalar multiplication make $F^n$ a vector space over $F$.

Taking $F = \mathbb{C}$ and $n = 2$ for instance, we get complex 2-space, which,
for example, is a natural arena in which to study plane curves.
Example 3.4. Similarly, for any field $F$, let $\prod_0^\infty F$ denote the set of all
infinite-tuples $(a_0, a_1, a_2, \dots)$, where the $a_i$ are in $F$. (It is convenient in
certain applications to index the components from 0 rather than 1, but if
this bothers you, it is harmless to think of the tuples as $(a_1, a_2, a_3, \dots)$.)
Addition and scalar multiplication are defined just as in $F^n$, except that
we now have infinitely many components. With these definitions, $\prod_0^\infty F$
becomes an $F$-vector space. (This example is known as the direct product
of (countably infinite) copies of $F$.)
Example 3.5. Consider the ring $M_n(\mathbb{R})$. Focusing just on the addition operation
on $M_n(\mathbb{R})$, recall that $(M_n(\mathbb{R}), +)$ is an abelian group. (Remember,
for any ring $R$, $(R, +)$ is always an abelian group.) We will treat the reals
as scalars. Given any real number $r$ and any matrix $(a_{i,j})$ in $M_n(\mathbb{R})$, we will
define their product to be the matrix $(ra_{i,j})$. (See the notes on page 153 for
a comment on this product.) Verify that with this definition, $M_n(\mathbb{R})$ is a
vector space over $\mathbb{R}$. In a similar manner, if $F$ is any field, $M_n(F)$ will be a
vector space over $F$.
Example 3.6. Consider the field $\mathbb{Q}[\sqrt{2}]$. We have seen that $(\mathbb{Q}[\sqrt{2}], +)$ is an abelian
group (why?). Think of the rationals as scalars. There is a very natural way
of multiplying a rational number $q$ with an element $a + b\sqrt{2}$ of $\mathbb{Q}[\sqrt{2}]$, namely,
$q \cdot (a + b\sqrt{2}) = qa + qb\sqrt{2}$. Of course, $\mathbb{Q} \subseteq \mathbb{Q}[\sqrt{2}]$, so every
rational number is also an element of $\mathbb{Q}[\sqrt{2}]$. The rationals thus play a dual role
here: when we see $q$ by itself as an element of $\mathbb{Q}[\sqrt{2}]$, or
in other words as a vector, we want to think of $q$ as a vector; however, when we see $q$
in an expression like $q(a + b\sqrt{2})$, we want to think of $q$ as a scalar. With this
scalar multiplication, $\mathbb{Q}[\sqrt{2}]$ becomes a $\mathbb{Q}$-vector space.

Example 3.7. Let us look closely at what was behind Example 3.6: $\mathbb{Q}[\sqrt{2}]$ is a field, so $(\mathbb{Q}[\sqrt{2}], +)$ is an
abelian group, and the multiplication in this field restricts to a map $\mathbb{Q} \times \mathbb{Q}[\sqrt{2}] \to \mathbb{Q}[\sqrt{2}]$.
These two facts together gave us a $\mathbb{Q}$-vector space structure on $\mathbb{Q}[\sqrt{2}]$. Now
let $K/F$ be any field extension. Since $K$ is a field, $(K, +)$ is an abelian
group. Next, let us consider multiplication. Given any two elements $k$ and
$l$ of $K$, we know we can multiply the two elements together. However,
let us ignore this fact temporarily, and just consider the fact that given
any element $f$ of $F$ and any element $k$ of $K$, we can multiply $f$ and $k$.
(Notice that we have restricted the first element to be from $F$. However,
we have placed no restriction on the second element; it can be any element
of $K$. This is just like considering the multiplication of any $q \in \mathbb{Q}$ and any
$a + b\sqrt{2} \in \mathbb{Q}[\sqrt{2}]$.) This multiplication serves as a scalar multiplication, and
with it, $K$ becomes a vector space over $F$.

Example 3.8. The polynomial ring $\mathbb{R}[x]$ is a vector space over $\mathbb{R}$: $(\mathbb{R}[x], +)$ is an
abelian group, and the product of a real number $r$ and a polynomial
$\sum_{i=0}^{n} a_i x^i$ (where the $a_i$ are real numbers and $n$ is some nonnegative
integer) is the polynomial $\sum_{i=0}^{n} r a_i x^i$. The real numbers have a dual role here:
when we see a real number $r$ by itself, we want to think of it as a vector,
and when we see it in an expression $r \cdot f$, we want to think of it as a scalar
multiplying the vector $f$.

In the same vein, $F[x]$ is an $F$-vector space for any field $F$.
Example 3.9. Here is an example related to $F[x]$. For any field $F$ and any
nonnegative integer $n$, write $F_n[x]$ for the set of all polynomials in $x$ with
coefficients in $F$ whose degrees are at most $n$. Then $F_n[x]$ is an $F$-vector
space.

Question 3.9.1. Why?
Example 3.10. Now think about this: Suppose $V$ is a vector space over a
field $K$. Suppose $F$ is a subfield of $K$. Then $V$ is also a vector space over
$F$!

Question 3.10.1. Why? What do you think the scalar multiplication
ought to be? (See the notes on page 153 for some remarks on
this.)

As an example of this phenomenon, $\mathbb{R}[x]$, besides being an $\mathbb{R}$-vector
space, is also a $\mathbb{Q}$-vector space. Vector addition is the usual addition of polynomials.
As for scalar multiplication, when we consider $\mathbb{R}[x]$ as a $\mathbb{Q}$-vector
space, we only allow multiplication of polynomials by rational numbers; we
ignore the fact that we can multiply polynomials by arbitrary real numbers.
Similarly, $M_2(\mathbb{Q}[\sqrt{2}])$, besides being a $\mathbb{Q}[\sqrt{2}]$-vector space, is also a
$\mathbb{Q}$-vector space.

Remark 3.12. The following properties hold in any $F$-vector space $V$ (Exercise 3.97
at the end of the chapter asks you to derive them from the axioms): (1) $f \cdot 0_V = 0_V$
for every $f \in F$; (2) $0_F \cdot v = 0_V$ for every $v \in V$; (3) $(-1) \cdot v = -v$ for every
$v \in V$; and (4) if $f \cdot v = 0_V$, then either $f = 0_F$ or $v = 0_V$.

Let us now turn to the notion of dimension. Our intuition from $\mathbb{R}^2$ and $\mathbb{R}^3$
is that the dimension ought to be the number of coordinate axes. Writing $i$ and $j$
for the vectors of $\mathbb{R}^2$ with tips at $(1, 0)$ and $(0, 1)$ respectively, every vector
with tip at $(a, b)$ is expressible as $a i + b j$; for instance, the vector with tip at
$(2, 3)$ is just $2i + 3j$. The algebraic abstraction of this idea comes in two steps:

Definition 3.13. Let $V$ be a vector space over a field $F$, and let $S$ be a subset
of $V$. A vector $v \in V$ is said to be a linear combination of elements of $S$ if
there exist finitely many vectors $s_1, \dots, s_k$ in $S$ and scalars $f_1, \dots, f_k$ in $F$
such that $v = f_1 s_1 + \cdots + f_k s_k$.

Definition 3.14. A subset $S$ of $V$ is said to be a spanning set for $V$ (or to
span $V$) if every vector $v \in V$ can be written as $v = \sum_{i=1}^{n} f_i v_i$
for some integer $n \geq 1$, some choice of vectors $v_1, \dots, v_n$ from $S$, and some
choice of scalars $f_1, \dots, f_n$. (In the language of
Definition 3.13 above, $S$ is a spanning set for $V$ if every vector in $V$ is expressible
as a linear combination of some elements of $S$.)
The discussion before Definition 3.13 showed that the set $S = \{i, j\}$ is a
spanning set for $\mathbb{R}^2$. Here are more examples:
Example 3.15. We have seen in Example 3.6 that $\mathbb{Q}[\sqrt{2}]$ is a $\mathbb{Q}$-vector
space. Note that every element of $\mathbb{Q}[\sqrt{2}]$ is of the form $a + b\sqrt{2}$ for suitable
$a$ and $b \in \mathbb{Q}$. Thinking of $a$ as $a \cdot 1$, this tells us that every element
of $\mathbb{Q}[\sqrt{2}]$ is a linear combination of $1$ and $\sqrt{2}$. (We are
thinking of $1$ as a vector in this last statement. Recall the discussion of the
dual role of $\mathbb{Q}$ in Example 3.6.) Hence, $S = \{1, \sqrt{2}\}$ is a spanning set for $\mathbb{Q}[\sqrt{2}]$.
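For a concrete instance (any element will do; we pick one):
\[
\tfrac{3}{4} - 2\sqrt{2} = \tfrac{3}{4} \cdot 1 + (-2) \cdot \sqrt{2},
\]
a linear combination of the vectors $1$ and $\sqrt{2}$ with scalars $\tfrac{3}{4}$ and $-2$ from $\mathbb{Q}$.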
Example 3.16. The set $\{1, x, x^2, \dots\}$ is a spanning set for the polynomial
ring $\mathbb{R}[x]$ considered as a vector space over $\mathbb{R}$ (see Example 3.8 above). This
is clear, since every polynomial in $\mathbb{R}[x]$ is of the form $r_0 + r_1 x + \cdots + r_n x^n$ for
some integer $n \geq 0$ and suitable real numbers $r_0, r_1, \dots, r_n$. Put differently,
every polynomial can be expressed as an $\mathbb{R}$-linear combination of $1, x, \dots, x^n$
for some integer $n \geq 0$. Since different polynomials have different degrees,
we need to use all powers $x^i$ ($i = 1, 2, \dots$) to get a spanning set for $\mathbb{R}[x]$.
Remark 3.17. By convention, the empty set is taken as a spanning set for
the zero vector space. Moreover, by convention, the trivial space is the only
space spanned by the empty set. This convention will be useful later, when
defining the dimension of the zero vector space.
So, returning to our study of dimension, should we take the algebraic
analog of coordinate axes to be any set $S$ of vectors that spans $V$? No, not
yet! There could be redundancy in this set! It may turn out, for example,
that the smaller set $S \setminus \{v\}$ obtained by deleting a particular vector $v$ from
the set already spans $V$! (If so, why bother using this vector $v$ as one of
the coordinate axes?!)
Let us formulate this as a definition:

Definition 3.18. Given a vector space $V$ over a field $F$, a vector $v$ in a
spanning set $S$ is said to be redundant if the subset $S \setminus \{v\}$ obtained by
removing $v$ is itself a spanning set for $V$. (Put differently, $v$ is redundant in
$S$ if every vector in $V$ can already be expressed as a linear combination of
elements in $S \setminus \{v\}$, so the vector $v$ is not needed at all.) We will say that
there is redundancy in the spanning set $S$ if any one of the vectors in this set is
redundant.
Example 3.19. For an example of a spanning set with redundancy in it,
we do not have to look very far: Going back to $\mathbb{R}^2$, let us write $w$ for the
vector with tip at $(1/\sqrt{2}, 1/\sqrt{2})$, and consider the set $\{i, j, w\}$. Every
vector with tip at $(a, b)$ is expressible as $a\,i + b\,j + 0\,w$, and, for that matter,
also as $(a - 1/\sqrt{2})\,i + (b - 1/\sqrt{2})\,j + w$.
Since $i$ and $j$ already span $\mathbb{R}^2$, there is clearly redundancy in the set
$\{i, j, w\}$.
To push this example a bit further, note that $i$ and $w$ also form a spanning
set for $\mathbb{R}^2$. To see this, note that $j = -i + \sqrt{2}\,w$, so every linear
combination of $i$ and $j$ can be rewritten as a linear combination of $i$ and $w$.
For another example, let $S = \{1, x, x^2, \dots\} \subseteq \mathbb{R}[x]$ and let $S' = S \cup \{1 + x\}$.
Since $1 + x$ is already a linear combination of $1$ and $x$, the vector $1 + x$ is
redundant in the spanning set $S'$.

A spanning set with no redundancy in it is what we will take as our algebraic
analog of coordinate axes: such a set is called a basis. Equivalently (this is part
of the content of Lemma 3.21), a basis is a spanning set that is linearly independent,
where a subset $S$ of $V$ is called linearly independent if whenever
$f_1 s_1 + \cdots + f_k s_k = 0$ for distinct vectors $s_i \in S$ and scalars $f_i$, all the
$f_i$ must equal $0$.

Example 3.28. The set $\{1, \sqrt{2}\}$ forms a basis for $\mathbb{Q}[\sqrt{2}]$ as a vector space
over $\mathbb{Q}$. (We have seen in Example 3.15 that $1$ and $\sqrt{2}$ span $\mathbb{Q}[\sqrt{2}]$. As for
the linear independence of $1$ and $\sqrt{2}$,
you were asked to prove this in Exercise 2.12.4 in Chapter 2!)
Example 3.29. The set $\{1, x, x^2, \dots\}$ forms a basis for $\mathbb{R}[x]$ as a vector
space over $\mathbb{R}$. We have seen in Example 3.16 that this set spans $\mathbb{R}[x]$. As
for the linear independence, see the argument in Example 3.23 above.
Exercise 3.29.1. Prove that the set $B = \{1,\ 1+x,\ 1+x+x^2,\ 1+x+x^2+x^3,\ \dots\}$ is also a basis for $\mathbb{R}[x]$ as a vector space over $\mathbb{R}$.
(Hint: Writing $v_0 = 1$, $v_1 = 1 + x$, $v_2 = 1 + x + x^2$, etc., note
that for $i = 1, 2, \dots$, $x^i = v_i - v_{i-1}$. It follows that all powers
of $x$ (including $x^0$) are expressible as linear combinations of the $v_i$.
Why does it follow from this that the $v_i$ span $\mathbb{R}[x]$? As for linear
independence, suppose that for some finite collection $v_{i_1}, \dots, v_{i_k}$
(with $i_1 < i_2 < \cdots < i_k$), there exist scalars $r_1, \dots, r_k$ such that
$r_1 v_{i_1} + \cdots + r_k v_{i_k} = 0$. What is the highest power of $x$ in this
expression? In how many of the elements $v_{i_1}, \dots, v_{i_k}$ does it show
up? What is its coefficient? So?)
Example 3.30. Consider $F_n[x]$ as an $F$-vector space (see Example 3.9
above). You should easily be able to describe a basis for this space and
prove that your candidate is indeed a basis.
Example 3.31. The set $\{1, \sqrt{2}, \sqrt{3}, \sqrt{6}\}$ forms a basis for $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$ as
a vector space over $\mathbb{Q}$. You have seen in Example 2.34 that, by our very
definition of the ring, every element of $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$ is of the form $a + b\sqrt{2} +
c\sqrt{3} + d\sqrt{6}$, so the set $\{1, \sqrt{2}, \sqrt{3}, \sqrt{6}\}$ spans $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$. (The linear
independence of this set over $\mathbb{Q}$ takes more work; accept it for now.)

Exercise 3.33.1. Show that the set $\{1, 1 + \sqrt{2}\}$ is also a basis for
$\mathbb{Q}[\sqrt{2}]$ as a vector space over $\mathbb{Q}$. (Hint: An arbitrary element $a + b\sqrt{2}$ can be
rewritten as $(a - b) + b(1 + \sqrt{2})$. So?)
Exercise 3.33.2. Now show that if $V$ is any vector space over any
field with basis $\{v_1, v_2\}$, then the vectors $v_1$, $v_1 + v_2$ also form a
basis. How would you generalize this pattern to a vector space that
has a basis consisting of $n$ elements $v_1, v_2, \dots, v_n$? Prove that
your candidate forms a basis.

Exercise 3.33.3. Let $V$ be a vector space with basis $\{v_1, \dots, v_n\}$.
Study Exercise 3.27.1 and come up with a linear combination of the
$v_i$, similar to that exhibited in that exercise, that also forms a basis
for $V$. Prove that your candidate forms a basis.
Example 3.34. Consider the vector space $\prod_0^\infty F$ of Example 3.4 above.
You may find it hard to describe explicitly a basis for this space. However,
let $e_i$ (for $i = 0, 1, \dots$) be the infinite-tuple with $1$ in the position indexed
by $i$ and zeros elsewhere. (Thus, $e_0 = (1, 0, 0, \dots)$, $e_1 = (0, 1, 0, \dots)$, etc.)

Exercise 3.34.1. Why is the set $S = \{e_0, e_1, e_2, \dots\}$ not a basis
for $\prod_0^\infty F$? Is $S$ at least linearly independent? (See the notes on
page 154 for some comments on this example.)
Example 3.35. The empty set is a basis for the trivial vector space. This
follows from Remark 3.17 (see also Remark 3.24), since the empty set spans
the trivial space, and since the empty set is vacuously linearly independent.

Here is a result that describes a useful property of bases and is very easy
to prove.
Proposition 3.36. Let $V$ be a vector space over a field $F$, and let $S$ be a
basis. Then in any expression of a vector $v \in V$ as $v = f_1 b_1 + \cdots + f_n b_n$ for
suitable vectors $b_i \in S$ and nonzero scalars $f_i$, the $b_i$ and the $f_i$ are uniquely
determined.

Proof. What we need to show is that if $v$ is expressible as $f_1 b_1 + \cdots + f_n b_n$
for suitable vectors $b_i \in S$ and nonzero scalars $f_i$, and is also expressible
as $g_1 c_1 + \cdots + g_m c_m$ for suitable vectors $c_i \in S$ and nonzero scalars $g_i$,
then $n = m$, and after relabelling if necessary, each $b_i = c_i$ and each $f_i = g_i$
($i = 1, \dots, n$). To do this, assume, after relabelling if necessary, that $b_1 = c_1$,
\dots, $b_t = c_t$ (for some $t \leq \min(m, n)$), and that the sets $\{b_{t+1}, \dots, b_n\}$ and
$\{c_{t+1}, \dots, c_m\}$ are disjoint. Then, bringing all terms to one side, we may
rewrite our equality as
\[
(f_1 - g_1)b_1 + \cdots + (f_t - g_t)b_t + f_{t+1}b_{t+1} + \cdots + f_n b_n - g_{t+1}c_{t+1} - \cdots - g_m c_m = 0.
\]
By the linear independence of the subset $\{b_1, \dots, b_t, b_{t+1}, \dots, b_n, c_{t+1}, \dots, c_m\}$
of $S$, we find that $f_1 = g_1$, \dots, $f_t = g_t$, $f_{t+1} = \cdots = f_n = 0$, and $g_{t+1} = \cdots =
g_m = 0$. But since the scalars were assumed to be nonzero, $f_{t+1} = 0$
and $g_{t+1} = 0$ are impossible, so, to begin with, there must have been no
$f_{t+1}$ or $g_{t+1}$ to speak of! Thus, $t$ must have equaled $n$, and similarly,
$t$ must have equaled $m$. From this, we get $n = m$ ($= t$), and then, by our
very definition of $t$, we find that $b_1 = c_1$, \dots, $b_n = c_n$. Coupled with our
derivation that $f_1 = g_1$, \dots, $f_t = g_t$, we have our desired result. $\Box$
Now that we have arrived at the algebraic analog of coordinate axes, we
turn our attention to the next step in our program: we need to show that
every vector space has a basis, and that different bases of the same vector
space have the same number of elements in them.

The first of these two tasks, namely, showing that every vector space has
a basis, is a little tricky: to do full justice to it, we need to
invoke Zorn's Lemma, an extremely useful tool of logic. (Zorn's Lemma, in
spite of its name, is really not a lemma, but an axiom of logic. See Chapter
B in the Appendix.) For a first introduction to abstract algebra, any usage
of Zorn's Lemma can seem dense and somewhat foreboding (what else will
the Gods of Logic hurl at us?), so we will relegate the full proof to the same
Chapter B in the Appendix (see Theorem B.7 there). However, to help build
a more concrete feel for the existence of bases, we will also give a proof of
the existence of a basis in the special case when we know that the vector
space in question has a finite spanning set.
We will assume that our vector space is not the trivial space, since we
already know that the trivial space has a basis (see Example 3.35 above).

Proposition 3.37. Let $V$ be a vector space over a field $F$. Let $S$ be a
spanning set for $V$, and assume that $S$ is a finite set. Then some subset of
$S$ is a basis of $V$. In particular, every vector space with a finite spanning
set has a basis.

Proof. Note that $S$ is nonempty, since $V$ has been assumed to not be the
trivial space (see Remark 3.17). If the zero vector appears in $S$, then the
set $S' = S \setminus \{0\}$ that we get by throwing out the zero vector will still span
$V$ (why?) and will still be finite. Any subset of $S'$ is of course also a subset
of $S$, so we may as well assume that the zero vector does not appear in $S$.
Now, among all subsets of $S$ that span $V$, choose one with the fewest elements,
say $T$. We claim that $T$ is linearly independent: if some vector $v \in T$ were
expressible as a linear combination of the other vectors of $T$, then $T \setminus \{v\}$
would still span $V$, contradicting the minimality of $T$. Hence $T$ is a linearly
independent spanning set, that is, a subset of $S$ that
forms a basis of $V$. $\Box$

Theorem 3.38. Every vector space has a basis.

Proof. See the notes on page 223, Chapter B, in the Appendix.
Having proved that every vector space has a basis, we now need to show
that different bases of a vector space have the same number of elements in
them. (Remember our original program. We wish to measure the size of
a vector space, and based on our examples of $\mathbb{R}^2$ and $\mathbb{R}^3$, we think that a
good measure of the size would be the number of coordinate axes, or basis
elements, that a vector space has. However, for this to make sense, we need
to be guaranteed that every vector space has a basis (we just convinced
ourselves of this) and that different bases of a vector space have the same
number of elements in them.) In preparation, we will prove an important
lemma. Our desired results will fall out as corollaries.

We continue to assume that our vector space is not the trivial space.
Lemma 3.39 (Exchange Lemma). Let $V$ be a vector space over a field
$F$, and let $B = \{v_1, \dots, v_n\}$ ($n \geq 1$) be a spanning set for $V$. Let $C =
\{w_1, \dots, w_m\}$ be a linearly independent set. Then $m \leq n$.

Proof. The basic idea behind the proof is to replace vectors in the spanning
set $B$ one after another with vectors in $C$, and to observe at the end that
if $m$ were greater than $n$, then there would not be enough replacements of
elements of $B$ to guarantee the linear independence of the set $C$.

We begin as follows: Since $B$ spans $V$, every vector in $V$ is expressible as
a linear combination of elements of $B$. In particular, we may write $w_1$ as a
linear combination of elements of $B$, that is, $w_1 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$ for
suitable scalars $c_i$, not all zero. Since one of these scalars is nonzero, we may
assume for convenience (by relabelling the vectors of $B$ if necessary) that
$c_1 \neq 0$. As usual, we may write $v_1 = (1/c_1)w_1 - (c_2/c_1)v_2 - (c_3/c_1)v_3 -
\cdots - (c_n/c_1)v_n$. Now go back and study how we proved (2) $\Rightarrow$ (3) in
Lemma 3.21. We are going to use the same sort of an argument here: we
will prove that the set $\{w_1, v_2, v_3, \dots, v_n\}$ spans $V$. For given any vector $v$ in
$V$, it can be written as a linear combination $v = f_1 v_1 + f_2 v_2 + \cdots + f_n v_n$ for
suitable scalars $f_i$ (why?). Now, in this expression, substitute $(1/c_1)w_1 -
(c_2/c_1)v_2 - (c_3/c_1)v_3 - \cdots - (c_n/c_1)v_n$ for $v_1$, and what do you find? $v$
is expressible as a linear combination of $w_1, v_2, v_3, \dots, v_n$! Thus, the set
$\{w_1, v_2, v_3, \dots, v_n\}$ spans $V$ as claimed.

Now observe what we have done: we have replaced $v_1$ with $w_1$. Let us
take this to the next step. Since the set $\{w_1, v_2, v_3, \dots, v_n\}$ spans $V$, we can
write $w_2$ as a linear combination of elements of this set. Thus, $w_2 = g_1 w_1 +
g_2 v_2 + g_3 v_3 + \cdots + g_n v_n$ for suitable scalars $g_i$, not all zero. Now the scalars
$g_2, g_3, \dots, g_n$ cannot all be zero, since $g_1$ would then have to be nonzero
(why?) and this relation would then read $w_2 = g_1 w_1$, a contradiction, as
the set $C$ is linearly independent. Hence, one of the scalars $g_2, g_3, \dots,
g_n$ must be nonzero. For convenience, we may assume (by relabelling the
vectors $v_2, v_3, \dots, v_n$ if necessary) that $g_2 \neq 0$. Dividing by $g_2$ and moving
all terms but $v_2$ to one side, we can write $v_2$ as a linear combination of the
vectors $w_1, w_2, v_3, \dots, v_n$. Exactly as in the last paragraph, we find that
since the set $\{w_1, v_2, v_3, \dots, v_n\}$ spans $V$, the set $\{w_1, w_2, v_3, \dots, v_n\}$ also
spans $V$.

So far, we have succeeded in replacing $v_1$ with $w_1$ and $v_2$ with $w_2$, and the
resultant set $\{w_1, w_2, v_3, \dots, v_n\}$ still spans $V$. Now continue this process,
and consider what would happen if we were to assume that $m$ is greater
than $n$. Well, we would replace $v_3$ by $w_3$, $v_4$ by $w_4$, etc., and then $v_n$ by $w_n$.
(We know that we would be able to replace all the $v$s with $w$s because by
assumption, there are more $w$s than $v$s.) At each stage of the replacement,
we would be left with a set that spans $V$. In particular, the set we would be
left with after replacing $v_n$ by $w_n$, namely $\{w_1, w_2, \dots, w_n\}$, would span $V$.
But since we assumed that $m$ is greater than $n$, there would be at least one
$w$ left, namely $w_{n+1}$. Since $\{w_1, w_2, \dots, w_n\}$ would span $V$, we would be
able to write $w_{n+1}$ as a linear combination of the vectors $w_1, w_2, \dots, w_n$.
This is a contradiction, since the set $C$ is linearly independent! Hence $m$
cannot be greater than $n$, that is, $m \leq n$! $\Box$
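To see the first exchange step in a concrete case (a small illustration of our own): in $\mathbb{R}^2$, take $B = \{v_1 = (1, 0),\ v_2 = (0, 1)\}$ and $w_1 = (2, 3)$. Then
\[
w_1 = 2v_1 + 3v_2 \quad\Longrightarrow\quad v_1 = \tfrac{1}{2}w_1 - \tfrac{3}{2}v_2,
\]
so any $v = f_1 v_1 + f_2 v_2$ rewrites as $v = \tfrac{f_1}{2}w_1 + \big(f_2 - \tfrac{3f_1}{2}\big)v_2$; hence $\{w_1, v_2\}$ spans $\mathbb{R}^2$, exactly as in the proof.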
We are now ready to prove that different bases of a given vector space
have the same number of elements. We will distinguish between two cases:
vector spaces having bases with finitely many elements, and those having
bases with infinitely many elements. We will take care of the infinite case
first.

Corollary 3.40. If a vector space $V$ has one basis with an infinite number
of elements, then every other basis of the vector space also has an infinite
number of elements.
Proof. Let $S$ be the basis of $V$ with an infinite number of elements (it
exists by hypothesis), and let $T$ be any other basis. Assume that $T$ has only
finitely many elements, say $m$. Since $S$ has infinitely many elements, we can
certainly pick $m+1$ vectors from it. So pick any $m+1$ vectors from $S$ and
denote this selected set of vectors by $S'$. Since the vectors in $S'$ are part of
the basis $S$, they are certainly linearly independent. We may think of the
set $T$ as the set $B$ of Lemma 3.39 (after all, $T$, being a basis, will span $V$),
and we may think of the set $S'$ as the set $C$ of that lemma. Lemma 3.39 then
gives $m + 1 \leq m$, a contradiction. Hence $T$ must also have infinitely many
elements. $\Box$

Corollary 3.41. If a vector space $V$ has one basis with a finite number of
elements, say $n$, then every other basis of $V$ also has exactly $n$ elements.
(Apply Lemma 3.39 twice: each basis is a spanning set and each is linearly
independent, so each has at most as many elements as the other.)

With this guarantee, we may finally define the dimension of a vector space:
if $V$ has a basis with a finite number of elements $n$, we say that $V$ is
finite-dimensional of dimension $n$; if $V$ has a basis with infinitely many
elements, we say that $V$ is infinite-dimensional. Thus, for instance, $\mathbb{R}^2$ has
dimension 2 and $\mathbb{R}^3$ has dimension 3 over $\mathbb{R}$, in accordance with our geometric
intuition. Here is a first consequence of this definition:

Corollary 3.47. Let $V$ be an $n$-dimensional vector space. Then any linearly
independent subset of $V$ has at most $n$ elements.

Proof. Suppose some linearly independent subset of $V$ had more than $n$
elements. We could then select from it a linearly independent subset $C$
consisting of exactly $n + 1$ elements. Let $B$ be any basis. By the very
definition of dimension, $B$ must have $n$ elements. Now apply Lemma 3.39
to the sets $B$ and $C$: we find that $n + 1 \leq n$, which is a contradiction.
Hence every subset of $V$ consisting of more than $n$ elements must be linearly
dependent, or, what is the same, any linearly independent subset of $V$ must
have at most $n$ elements. $\Box$
Similarly, with the definition of dimension under our belt, the following
is an easy corollary of Proposition 3.37:

Corollary 3.48. Let $V$ be an $n$-dimensional vector space. Then any spanning
set for $V$ has at least $n$ elements.

Proof. Let $S$ be a spanning set, and assume that $|S| = t < n$. By Proposition
3.37, some subset of $S$ is a basis of $V$. Since this subset can have at most $t$
elements, it follows that the dimension of $V$, which is the size of this basis,
is at most $t$. This contradicts the fact that the dimension of $V$ is $n$. $\Box$
Putting together Corollary 3.48 and Proposition 3.37, we find that if $V$
is an $n$-dimensional vector space, then any spanning set for $V$ must have at
least $n$ elements, and this set can then be shrunk to a basis of $V$ (consisting of
exactly $n$ elements). There is a corresponding result for linearly independent
sets in $V$. Corollary 3.47 shows that any linearly independent subset
of $V$ must have at most $n$ elements. What we will see in Proposition 3.49
below is that any linearly independent subset of $V$ can be expanded to a
basis of $V$ (which will then have exactly $n$ elements).

Proposition 3.49 below holds even when $V$ is not assumed to be finite-dimensional,
but a full proof requires the use of Zorn's Lemma. The proof
of the general case is sketched in the remarks on page 224 in Chapter B in
the Appendix.
Proposition 3.49. Let $V$ be a finite-dimensional vector space, and let $C$
be a linearly independent set. Then $C$ can be expanded to a basis of $V$, i.e.,
there exists a basis $B$ of $V$ such that $C \subseteq B$.

Proof. Let $n$ be the dimension of $V$. Then by Corollary 3.47, $C$ has at
most $n$ elements in it. Assume that $C = \{v_1, v_2, \dots, v_t\}$ for some $t \leq n$.
If $C$ already spans $V$, then $C$ would be a basis and we would be done.
(And if this happens, you know that $t$ must equal $n$ by Corollary 3.41!) So
assume that $C$ does not span $V$. By the very definition of what it means
to span a vector space, there must be a vector in $V$, call it $v_{t+1}$, that is not
expressible as a linear combination of the elements in $C$. We claim that the
set $C_1 = \{v_1, v_2, \dots, v_t, v_{t+1}\}$ must be linearly independent. For suppose
$f_1 v_1 + \cdots + f_t v_t + f_{t+1} v_{t+1} = 0$ for some scalars $f_i$, not all of which are
zero. Then $f_{t+1}$ cannot be zero, since otherwise our relation would read
$f_1 v_1 + \cdots + f_t v_t = 0$ with not all of the scalars $f_i$ zero, and this would violate
the linear independence of $C$. Therefore, we may divide our original relation by $f_{t+1}$
to find $v_{t+1} = -(f_1/f_{t+1})v_1 - \cdots - (f_t/f_{t+1})v_t$, contradicting the fact that
$v_{t+1}$ is not expressible as a linear combination of elements of $C$. Thus, $C_1$
is indeed linearly independent as claimed.

Note that the set $C_1$ has $t + 1$ elements. If $C_1$ spans $V$, then $C_1$ would
be a basis of $V$ containing $C$, and we would be done. Otherwise, we could
expand $C_1$ to a linearly independent set $C_2$ and repeat our arguments . . . .

Notice that in the process above, we start with our set $C$ with $t$ elements,
and at each stage, we come up with a set that has one more element than the
set at the previous stage. When we reach a set with exactly $n$ elements, this
set must span $V$, for if not, the set we would get at the next stage would
contain $n + 1$ elements and would be linearly independent, contradicting
Corollary 3.47 above. This set with exactly $n$ elements would therefore be
a basis of $V$ containing $C$. $\Box$
Example 3.50. For example, in $\mathbb{R}^2$, consider the linearly independent set
$\{i\}$. The contention of the proposition above is that one can adjoin one other
vector to this to get a basis for $\mathbb{R}^2$: for instance, the set $\{i, j\}$ is a basis, and
so, for that matter, is the set $\{i, w\}$. (Here, just as earlier in the chapter,
$i = (1, 0)$, $j = (0, 1)$, and $w = (1/\sqrt{2}, 1/\sqrt{2})$.)
We end this section with two more easy results concerning spanning sets
and linearly independent sets; the proofs simply consist of combining earlier
results!

Proposition 3.51. Let $V$ be an $n$-dimensional vector space and $S$ a subset
of $V$. Then:

1. If $S$ is a spanning set for $V$ (so $|S| \geq n$ by Corollary 3.48), and if
moreover $|S| = n$, then $S$ is a basis for $V$.

2. If $S$ is a linearly independent set (so $|S| \leq n$ by Corollary 3.47), and
if moreover $|S| = n$, then $S$ is a basis for $V$.

Proof. As promised, the proof simply consists of combining previous results:

1. Given $S$ a spanning set with $n$ elements, Proposition 3.37 shows that
some subset $S'$ of $S$ is a basis for $V$. Hence, as $V$ is $n$-dimensional, $|S'| = n$.
Since $|S| = n$ as well, we find $S' = S$, i.e., $S$ is already a basis for $V$.

2. Given $S$ a linearly independent set with $n$ elements, Proposition 3.49
shows that $S$ can be expanded to a basis $B$ of $V$. Hence, as $V$ is $n$-dimensional,
$|B| = n$. Since $|S| = n$ as well, we find $B = S$, i.e., $S$ is already a basis
for $V$. $\Box$
Remark 3.52. We have proved quite a few results in this section concerning
spanning sets, linearly independent sets, and bases. It would be helpful to
summarize these results here. In what follows, $V$ is, as usual, a vector space
over a field $F$:

1. A basis for $V$ is a subset of $V$ that spans $V$ and in which there is
no redundancy. Alternatively, a basis is a subset that spans $V$ and is
linearly independent.

2. Bases always exist.

3. If one basis for $V$ has an infinite number of elements in it, then every
other basis for $V$ must also have an infinite number of elements. When
this occurs, we say $V$ is infinite-dimensional.

4. If one basis for $V$ has a finite number of elements $n$ in it, then every
other basis must also have $n$ elements. When this occurs, we say $V$ is
finite-dimensional and we define the dimension of $V$ to be $n$.

5. Assume that $V$ is of finite dimension $n$:

(a) Any spanning set $S$ for $V$ must contain at least $n$ elements.
(b) Any spanning set $S$ can be shrunk to a basis for $V$.
(c) If a spanning set $S$ has exactly $n$ elements, then it is already a
basis for $V$.
(d) Any linearly independent set $S$ must contain at most $n$ elements.
(e) Any linearly independent set $S$ can be expanded to a basis for $V$.
(f) If a linearly independent set $S$ has exactly $n$ elements in it, then
it is already a basis for $V$.

Of course, the statements in both (5b) and (5e) above hold even when
$V$ is infinite-dimensional.
3.3 Subspaces and Quotient Spaces

The idea behind subspaces is very similar to the idea behind subrings, while
the idea behind quotient spaces is very similar to the idea behind quotient
rings. (There is one key difference: quotient rings are obtained by modding
out rings by ideals; modding out by subrings will not work. However, quotient
spaces can be made by modding out by subspaces. We will see this
later in the chapter.)

We will consider subspaces first:

Definition 3.53. Given a vector space $V$ over a field $F$, a subspace of $V$
is a nonempty subset $W$ of $V$ that is closed with respect to vector addition
and scalar multiplication, such that with respect to this addition and scalar
multiplication, $W$ is itself a vector space (that is, $W$ satisfies all the axioms of
a vector space).

Now, we saw in the context of rings (Exercise 2.28 in Chapter 2) that
one could have a subset $S$ of a ring $R$ such that $S$ is closed with respect to
addition and multiplication, and yet $S$ is not a subring of $R$. It turns out
that in the case of vector spaces, it is enough for a (nonempty) subset $W$
of a vector space $V$ to be closed with respect to vector addition and scalar
multiplication: $W$ will then automatically satisfy all the axioms of a vector
space. This is the content of Theorem 3.55 below.

But first, a quick exercise, which is really a special case of Exercise 4.22
in Chapter 4 ahead:
Exercise 3.54. Let $W$ be a subspace of the vector space $V$. Thus,
by definition, $(W, +)$ is an abelian group. Let $0_W$ denote the identity
element of this group, and let $0_V$ denote the usual $0$ of $V$. Show
that $0_W = 0_V$. (See also Exercise 2.29 in Chapter 2.)
Theorem 3.55. Let $V$ be a vector space over a field $F$, and let $W$ be
a nonempty subset of $V$ that is closed with respect to vector addition and
scalar multiplication. Then $W$ is a subspace of $V$.

Proof. We need to check that all the axioms of a vector space hold. Let us
first check that $(W, +)$ is an abelian group. Vector addition in $W$ is both
commutative and associative, since for any $v_1, v_2, v_3 \in W$, we may consider
$v_1$, $v_2$, and $v_3$ to be elements of $V$, and in $V$, the relations $v_1 + v_2 = v_2 + v_1$
and $(v_1 + v_2) + v_3 = v_1 + (v_2 + v_3)$ certainly hold. Next, given any $v \in W$, let
us show that $-v$ is also in $W$. For this we invoke the fact that $W$ is closed
with respect to scalar multiplication: since $v \in W$, $-1 \cdot v$ is also in $W$, and
$-1 \cdot v$ is, of course, just $-v$ (see Remark 3.12 above). Now let us show that
$0$ is in $W$. Observe that so far, we have not used the hypothesis that $W$
is nonempty. (The proofs that we have given for the fact that addition in
$W$ is associative and that every element in $W$ has its additive inverse in $W$
hold vacuously even in the case where $W$ is empty. For instance, the chain
of arguments $v \in W \Rightarrow -1 \cdot v \in W$ (as $W$ is closed with respect to scalar
multiplication) $\Rightarrow -v \in W$ is correct even when there is no vector $v$ in $W$
to begin with!) Now let us use the fact that $W$ is nonempty. Since $W$ is
nonempty, it contains at least one vector, call it $v$. Then, by what we proved
above, $-v$ is also in $W$. Since $W$ is closed under vector addition, $v + (-v)$
is in $W$, and so $0$ is in $W$. We have thus shown that $(W, +)$ is an abelian
group.

It remains to be shown that the four axioms of scalar multiplication also
hold for $W$. But for any $r$ and $s$ in $F$ and $v$ and $w$ in $W$, we may consider
$v$ and $w$ to be elements of $V$, and as elements of $V$, we certainly have the
relations $r \cdot (v + w) = r \cdot v + r \cdot w$, $(r + s) \cdot v = r \cdot v + s \cdot v$, $(rs) \cdot v = r \cdot (s \cdot v)$,
and $1 \cdot v = v$. Hence, the axioms of scalar multiplication hold for $W$.

This proves that $W$ is a subspace of $V$. $\Box$
We have the following corollary, which captures both closure conditions of the test
in Theorem 3.55 above in a single condition:

Corollary 3.56. Let $V$ be a vector space over a field $F$, and let $W$ be a
nonempty subset of $V$ that is closed under linear combinations, i.e., such that for all
$w_1, w_2$ in $W$ and all $f_1, f_2$ in $F$, the element $f_1 w_1 + f_2 w_2$ is also in $W$.
Then $W$ is a subspace of $V$. Conversely, if $W$ is a subspace, then $W$ is closed
under linear combinations.

Proof. Assume that $W$ is closed under linear combinations. Taking $f_1 =
f_2 = 1$, we find that $w_1 + w_2$ is in $W$ for all $w_1, w_2$ in $W$, i.e., $W$ is closed
under addition. Taking $f_2 = 0$, we find $f_1 w_1$ is in $W$ for all $w_1$ in $W$ and
all $f_1$ in $F$, i.e., $W$ is closed under scalar multiplication. Thus, by Theorem
3.55, $W$ is a subspace. Conversely, if $W$ is a subspace, then for all $w_1, w_2$ in $W$
and all $f_1, f_2$ in $F$, $f_1 w_1$ and $f_2 w_2$ are both in $W$ because $W$ is closed under
scalar multiplication, and then $f_1 w_1 + f_2 w_2$ is in $W$ because $W$ is closed
under vector addition. Hence, $W$ is closed under linear combinations. $\Box$
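As a quick illustration of how this criterion gets used in practice (our own small example): let $W = \{(2b, b) \mid b \in \mathbb{R}\} \subseteq \mathbb{R}^2$. For any $f_1, f_2 \in \mathbb{R}$ and $(2b_1, b_1), (2b_2, b_2) \in W$,
\[
f_1(2b_1, b_1) + f_2(2b_2, b_2) = \big(2(f_1 b_1 + f_2 b_2),\ f_1 b_1 + f_2 b_2\big) \in W,
\]
so $W$ is closed under linear combinations and is therefore a subspace of $\mathbb{R}^2$ by Corollary 3.56.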
Here are some examples of subspaces. In each case, check that the conditions
of Theorem 3.55 apply.

Example 3.57. The set consisting of just the element $0$ is a subspace.

Question 3.57.1. Why?

We refer to this as the zero subspace.

Example 3.58. If you think of $\mathbb{R}^2$ as the vectors lying along the $xy$ plane
of 3-dimensional $xyz$ space, then $\mathbb{R}^2$ becomes a subspace of $\mathbb{R}^3$.

Example 3.59. For any nonnegative integers $n$ and $m$ with $n < m$, $F_n[x]$
is a subspace of $F_m[x]$. Also, $F_n[x]$ and $F_m[x]$ are both subspaces of $F[x]$.

Example 3.60. $U_n(\mathbb{R})$ (the set of upper triangular $n \times n$ matrices with
entries in $\mathbb{R}$) is a subspace of the $\mathbb{R}$-vector space $M_n(\mathbb{R})$.
Example 3.61. $\mathbb{Q}[\sqrt{2}]$ is a subspace of the $\mathbb{Q}$-vector space $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$. Of
course, we know very well by now that since $\mathbb{Q} \subseteq \mathbb{Q}[\sqrt{2}]$, $\mathbb{Q}[\sqrt{2}]$ is directly
a $\mathbb{Q}$-vector space. Both $\mathbb{Q}$-vector space structures on $\mathbb{Q}[\sqrt{2}]$, the one it gets as
a subspace of $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$ and the one it gets directly, are identical. For instance,
viewing the element $a + b\sqrt{2}$ of $\mathbb{Q}[\sqrt{2}]$ as the element $a + b\sqrt{2} + 0\sqrt{3} + 0\sqrt{6}$ of
$\mathbb{Q}[\sqrt{2}, \sqrt{3}]$, the sum of $a + b\sqrt{2} + 0\sqrt{3} + 0\sqrt{6}$ ($= a + b\sqrt{2}$) and $a' + b'\sqrt{2} + 0\sqrt{3} + 0\sqrt{6}$
($= a' + b'\sqrt{2}$) is $(a + a') + (b + b')\sqrt{2} + 0\sqrt{3} + 0\sqrt{6}$ ($= (a + a') + (b + b')\sqrt{2}$). On
the other hand, viewing $\mathbb{Q}[\sqrt{2}]$ directly as a $\mathbb{Q}$-vector space, the sum of $a + b\sqrt{2}$ and
$a' + b'\sqrt{2}$ is also $(a + a') + (b + b')\sqrt{2}$. In a similar manner,
you can see that the rules for scalar multiplication are also identical.
Example 3.62. The example above generalizes as follows: Suppose $F \subseteq
K \subseteq L$ are fields. The field extension $L/F$ makes $L$ an $F$-vector space.
Since $K$ is closed with respect to vector addition and scalar multiplication,
$K$ becomes a subspace of $L$. But the field extension $K/F$ exhibits $K$ directly
as an $F$-vector space. The two $F$-vector space structures on $K$, one that
we get from viewing $K$ as a subspace of the $F$-vector space $L$ and the other
that we get directly from the field extension $K/F$, are the same.
Example 3.63. In Example 3.4, let $\bigoplus_0^\infty F$ denote the set of all infinite
tuples $(a_0, a_1, \dots)$ in which only finitely many of the $a_i$ are nonzero. Then
$\bigoplus_0^\infty F$ is a subspace of $\prod_0^\infty F$.

Exercise 3.63.1. Prove this!

Exercise 3.63.2. Show that the set $S = \{e_0, e_1, e_2, \dots\}$ is a basis
for $\bigoplus_0^\infty F$. (Contrast this with Exercise 3.34.1 above.)

This example is known as the direct sum of (countably infinite) copies
of $F$.
Example 3.64. For any field $F$, $F[x^2]$ (that is, the set of all polynomials
of the form $\sum_{i=0}^{n} f_i x^{2i}$, $n \geq 0$) is a subspace of $F[x]$.

Question 3.64.1. What is the dimension of this subspace? Can
you discover a basis for this subspace?
Example 3.65. Let $V$ be a vector space over a field $F$, and let $S$ be any
nonempty subset of $V$.

Definition 3.65.1. The linear span of $S$ is defined as the set
of all linear combinations of elements of $S$, that is, the set of all
vectors in $V$ that can be written as $c_1 s_1 + c_2 s_2 + \cdots + c_k s_k$ for
some integer $k \geq 1$, some scalars $c_i$, and some vectors $s_i \in S$.

Exercise 3.65.1. Show that the linear span of $S$ is a subspace of
$V$.

For instance, in $\mathbb{R}^3$, if we take $S = \{i, j\}$, then the linear span of $S$ is
the set of all vectors in $\mathbb{R}^3$ that are of the form $ai + bj$ for suitable scalars $a$
and $b$, in other words, the $xy$-plane. As we saw in Example 3.58 above, the
$xy$-plane is a subspace of $\mathbb{R}^3$!
You should be able to do the following:

Question 3.66. Which of the following are subspaces of $\mathbb{R}^3$?

1. $\{(a, b, c) \mid a + 3b = c\}$
2. $\{(a, b, c) \mid a = b^2\}$
3. $\{(a, b, c) \mid ab = 0\}$
We turn our attention now to quotient spaces. Recall how we constructed
the quotient ring $R/I$ given a ring $R$ and an ideal $I$: we first defined an
equivalence relation on $R$ by $a \sim b$ if and only if $a - b \in I$ (see page 57 in
Chapter 2). We found that the equivalence class of an element $a$ is precisely
the coset $a + I$ (Lemma 2.78 in that chapter). We then defined the ring $R/I$
to be the set of equivalence classes of $R$ under the naturally induced definitions
$[a] + [b] = [a + b]$ and $[a][b] = [ab]$ (see Definition 2.79 in that chapter). Of
course, we had to check that our operations were well-defined and that we
indeed obtained a ring by this process (see Lemma 2.80 and Theorem 2.82
in that chapter). We will follow the same approach here.

So, given a vector space $V$ over a field $F$ and a subspace $W$, we define
an equivalence relation on $V$ by $v \sim w$ if and only if $v - w \in W$. Exactly as
on page 57, we can see that this is indeed an equivalence relation. We define
the coset $a + W$ to be the set of all elements of the vector space of the form
$a + w$ as $w$ varies in $W$, and we call this the coset of $W$ with respect to $a$.
We have the following, whose proof is exactly as in Lemma 2.78 of Chapter
2 and is therefore omitted:
Lemma 3.67. The equivalence class $[a]$ is precisely the coset $a + W$.

As with quotient rings, we will denote the set of equivalence classes of $V$
by $V/W$, whose members we will denote as both $[a]$ and $a + W$. We define an
addition operation on $V/W$ and a scalar multiplication $F \times V/W \to V/W$
by the following:

Definition 3.68. $[u] + [v] = [u + v]$ and $f \cdot [u] = [f \cdot u]$ for all $[u]$ and $[v]$ in
$V/W$ and all $f$ in $F$. (In coset notation, this would read $(u + W) + (v + W) =
(u + v) + W$, and $f(u + W) = fu + W$.) As always, if the context is clear, we
will often omit the $\cdot$ sign and write $r[b]$ for $r \cdot [b]$.

The following should now be easy, after your experience with quotient
rings (see Lemma 2.80 in Chapter 2):

Exercise 3.69. Show that the operations of addition and scalar
multiplication on $V/W$ described above in Definition 3.68 are well-defined.
Show that the addition operation is commutative.

We now have the following:

Theorem 3.70. $(V/W, +, \cdot)$ is a vector space over $F$.
Proof. As in Theorem 2.82 of Chapter 2, the proof involves checking that all
the vector space axioms of Definition 3.1 hold. The proof that $(V/W, +)$ is
an abelian group is in fact identical to the proof that $(R/I, +)$ is an abelian
group, and we will not do it here (see the remarks on page 154 on where
the similarity comes from). As for the axioms for scalar multiplication, let
us go through them one by one:

1. For all $r \in F$ and $[v], [w] \in V/W$, we have $r([v] + [w]) = r[v + w] =
[r(v + w)] = [rv + rw]$, where the first and second equalities are because
of the way operations are defined on $V/W$ and the last equality is
because $r(v + w) = rv + rw$ is a property that holds in the original
vector space $V$. On the other hand, $r[v] + r[w] = [rv] + [rw] = [rv + rw]$,
where the equalities are because of the way operations are defined on
$V/W$. Thus, both sides equal $[rv + rw]$, so indeed $r([v] + [w]) =
r[v] + r[w]$.

2. For all $r, s \in F$ and $[v] \in V/W$, $(r + s)[v] = [(r + s)v] = [rv + sv]$,
where the last equality is because of properties of the original vector
space $V$. On the other hand, $r[v] + s[v] = [rv] + [sv] = [rv + sv]$. It
follows that $(r + s)[v] = r[v] + s[v]$.

3. For all $r, s \in F$ and $[v] \in V/W$, $(rs)[v] = [(rs)v] = [r(sv)]$, where the
last equality is because of properties of the original vector space $V$,
while $r(s[v]) = r[sv] = [r(sv)]$. It follows that $(rs)[v] = r(s[v])$.

4. For all $[v] \in V/W$, $1[v] = [1v] = [v]$, where the last equality is because
$1 \cdot v = v$ holds in $V$. $\Box$
Definition 3.71. $(V/W, +, \cdot)$ is called the quotient space of $V$ by the subspace
$W$.

As with the case of quotient rings, the intuition behind $V/W$ is that it
is a space formed by setting all elements of $W$ to zero. More colloquially,
the construction kills all elements in $W$, or divides out all elements in
$W$. This last description explains the term quotient space, and pushing
the analogy one step further, $V/W$ can then be thought of as the set of all
remainders after dividing out by $W$, endowed with the natural quotient
addition and scalar multiplication of Definition 3.68.

For example, take $V = \mathbb{R}^3$ and $W$ to be the subspace consisting of all
vectors lying on the $xy$ plane (Example 3.58 above). What sense do we make
of $V/W$? Every vector $v$ in $\mathbb{R}^3$ can be written as $ai + bj + ck$ for unique
real numbers $a$, $b$, and $c$ (see Example 3.27 above). Notice that both $ai$ and
$bj$ are in $W$. If we set these to zero, we are left simply with $ck$, which is
a vector lying on the $z$-axis. Moreover, every vector $ck$ lying on the $z$-axis
arises this way (why?), so we find that $V/W$ is precisely the $z$-axis. As in the
case of rings, this is more than just an equality of sets: this identification of
$V/W$ with the $z$-axis preserves the vector space structure as well, which we
will make more precise in the next section.
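In coset notation, the identification reads, for instance,
\[
(a, b, c) + W = (0, 0, c) + W, \qquad \text{since } (a, b, c) - (0, 0, c) = (a, b, 0) \in W,
\]
so each coset contains exactly one vector lying on the $z$-axis.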
The following lemma will be useful ahead. We will state the result only
for finite-dimensional vector spaces, although the result (suitably phrased)
is true for infinite-dimensional spaces as well (see Exercise 3.109):

Lemma 3.72. Let $V$ be a finite-dimensional vector space over a field $F$ and
let $W$ be a subspace. Let $\{b_1, \dots, b_m\}$ be a basis for $W$. Expand this to a
basis $\{b_1, \dots, b_m, b_{m+1}, \dots, b_n\}$ of $V$ (see Proposition 3.49). Then the set (of
equivalence classes of vectors) $\{b_{m+1} + W, \dots, b_n + W\}$ is a basis for the
quotient space $V/W$.

Proof. Given any $v + W \in V/W$, we may write $v = r_1 b_1 + \cdots + r_m b_m +
r_{m+1} b_{m+1} + \cdots + r_n b_n$ for suitable scalars $r_1, \dots, r_n$. Since the vectors $b_1$,
\dots, $b_m$ are in $W$, so is the vector $r_1 b_1 + \cdots + r_m b_m$. Thus, $v - (r_{m+1} b_{m+1} +
\cdots + r_n b_n) \in W$. But this just says that $v + W = (r_{m+1} b_{m+1} + \cdots + r_n b_n) + W$.
Recalling how vector addition and scalar multiplication are defined in $V/W$,
we find $v + W = (r_{m+1} b_{m+1} + \cdots + r_n b_n) + W = r_{m+1}(b_{m+1} + W) + \cdots +
r_n(b_n + W)$. This shows that the set $\{b_{m+1} + W, \dots, b_n + W\}$ spans $V/W$.

As for the linear independence, assume that $r_{m+1}(b_{m+1} + W) + \cdots +
r_n(b_n + W) = 0_{V/W}$ for some scalars $r_{m+1}, \dots, r_n$. Since $0_{V/W}$ is the class
of $W$, we find $r_{m+1} b_{m+1} + \cdots + r_n b_n = w$ for some $w \in W$. But the set
$\{b_1, \dots, b_m\}$ is a basis for $W$, so we may write $w = r_1 b_1 + \cdots + r_m b_m$ for
suitable scalars $r_1, \dots, r_m$. Putting this together, we find $r_1 b_1 + \cdots + r_m b_m +
(-r_{m+1})b_{m+1} + \cdots + (-r_n)b_n = 0$. Since the set $\{b_1, \dots, b_m, b_{m+1}, \dots, b_n\}$ is
a basis of $V$, each $r_i$ ($i = 1, \dots, n$) must be zero. In particular, $r_{m+1}, \dots, r_n$
must all be zero, proving the linear independence of $\{b_{m+1} + W, \dots, b_n + W\}$.
$\Box$

We get an easy corollary from this:

Corollary 3.73. Let $V$ be a finite-dimensional vector space over a field $F$
and let $W$ be a subspace. Then $\dim(V) = \dim(W) + \dim(V/W)$.

Proof. This is clear from the statement of the lemma above. $\Box$
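In the running example above, the count checks out:
\[
\dim(\mathbb{R}^3) = 3, \qquad \dim(W) = 2 \ (\text{the } xy\text{-plane}), \qquad \dim(\mathbb{R}^3/W) = 1 \ (\text{the } z\text{-axis}),
\]
and indeed $3 = 2 + 1$.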
3.4 Vector Space Homomorphisms: Linear Transformations

The ideas in this section parallel the development of ring homomorphisms
in Chapter 2. As in the passage from $R$ to $R/I$, we notice some preservation
of structure when passing from $V$ to $V/W$: the operations in $V/W$ are
essentially the same as the operations in $V$, except that the elements of
$V$ have all been divided out by $W$. What this means is analogous to the
situation with $R$ and $R/I$: let us denote by $f$ the function $f: V \to V/W$
that pushes $u \in V$ down to $u + W$. Since $u + W = f(u)$, $v + W = f(v)$,
and $(u + v) + W = f(u + v)$, we find $f(u) + f(v) = f(u + v)$. The function $f$
that sends $u$ to $u + W$, along with the property $f(u) + f(v) = f(u + v)$ for
all $u$ and $v$ in $V$, precisely captures the notion that addition in $V/W$ and $V$
are essentially the same.

Similarly, the definition of scalar multiplication in $V/W$, namely $r(u + W) =
ru + W$ (here $r$ is in $F$), gives the feeling that scalar multiplication in $V/W$
is the same as the scalar multiplication in $V$ except for dividing out by
$W$: once again this intuition is captured by the function $f$ above along with
the property $rf(u) = f(ru)$ for all $r \in F$ and $u$ in $V$.

Just as with rings, we will turn this situation around. Suppose one has
a function $f$ from one vector space $V$ over $F$ to another vector space $X$
over $F$ (note that the set of scalars $F$ is the same for both spaces) which
has the two properties described above; then one similarly gets the sense
that the vector space operations in the two $F$-vector spaces $V$ and $X$ are
essentially the same except perhaps for dividing out by some subspace. In
analogy with rings, we should call this a vector space homomorphism, but
traditionally, such a function has been called a linear transformation:
Definition 3.74. Let $V$ and $X$ be two vector spaces over a field $F$, and let
$f: V \to X$ be a function. Suppose that $f$ has the following properties:

1. $f(u) + f(v) = f(u + v)$ for all $u, v$ in $V$,
2. $rf(u) = f(ru)$ for all $r$ in $F$ and $u$ in $V$.

Then $f$ is said to be a linear transformation from $V$ to $X$.
Remark 3.75. As with ring homomorphisms, there are some features of this
definition that are worth noting:

1. In the equation $f(u) + f(v) = f(u + v)$, note that the operation on the
left side represents vector addition in the vector space $X$, while the
operation on the right side represents addition in the vector space $V$.

2. Similarly for the equation $rf(u) = f(ru)$: the operation on the left
side represents scalar multiplication in $X$, while the operation on the
right side represents scalar multiplication in $V$.

3. By the very definition of a function, $f$ is defined on all of $V$; however,
the image of $V$ under $f$ need not be all of $X$, i.e., $f$ need not be
surjective (see Example 3.83 or Example 3.84 for instance, although
such examples are really very easy to write down). However, the image
of $V$ under $f$ is not an arbitrary subset of $X$: the definition of a
linear transformation ensures that the image of $V$ under $f$ is actually
a subspace of $X$ (see Lemma 3.88 later in this section).

4. Note that it is not necessary to stipulate that $f(0_V) = 0_X$, since the
property holds automatically; see Lemma 3.77 below.

5. Condition (1) of the definition simply says that $f$ should be a
group homomorphism from the group $(V, +)$ to the group $(X, +)$ (see
Definition 4.57 in Chapter 4 ahead), while condition (2)
says that the group homomorphism should, in addition, be $F$-linear.
The following lemma combines the two conditions in the definition of a
linear transformation into one:

Lemma 3.76. Let $V$ and $X$ be two $F$-vector spaces, and let $f: V \to X$ be
a function that satisfies the property that $f(r_1 v_1 + r_2 v_2) = r_1 f(v_1) + r_2 f(v_2)$
for all $v_1, v_2$ in $V$ and all $r_1, r_2$ in $F$. Then $f$ is a linear transformation.
Conversely, if $f$ is a linear transformation, then $f(r_1 v_1 + r_2 v_2) = r_1 f(v_1) +
r_2 f(v_2)$ for all $v_1, v_2$ in $V$ and all $r_1, r_2$ in $F$.

Proof. Assume that $f$ satisfies the property that $f(r_1 v_1 + r_2 v_2) = r_1 f(v_1) +
r_2 f(v_2)$ for all $v_1, v_2$ in $V$ and all $r_1, r_2$ in $F$. Taking $r_1 = r_2 = 1$, we see that
$f(v_1 + v_2) = f(v_1) + f(v_2)$, and taking $r_2 = 0$, we see that $f(r_1 v_1) = r_1 f(v_1)$.
Thus, $f$ is a linear transformation. As for the converse, if $f$ is a linear
transformation, then for all $v_1, v_2$ in $V$ and all $r_1, r_2$ in $F$, $f(r_1 v_1 + r_2 v_2) =
f(r_1 v_1) + f(r_2 v_2) = r_1 f(v_1) + r_2 f(v_2)$, as desired. $\Box$
The following lemma is analogous to Lemma 2.90 in Chapter 2:

Lemma 3.77. Let $V$ and $X$ be two $F$-vector spaces, and let $f: V \to X$ be
a linear transformation. Then $f(0_V) = 0_X$.
Proof. This proof is identical to the proof of the corresponding Lemma 2.90
in Chapter 2 (since, ultimately, these are both proofs that a group homomorphism
from a group $G$ to a group $H$ maps the identity in $G$ to the
identity in $H$; see Lemma 4.59 in Chapter 4 ahead). We start with the fact
that $f(0_V) = f(0_V + 0_V) = f(0_V) + f(0_V)$. We now have an equality in $X$:
$f(0_V) = f(0_V) + f(0_V)$. Since $(X, +)$ is an abelian group, every element
of $X$ has an additive inverse, so there is an element, denoted $-f(0_V)$, with
the property that $f(0_V) + (-f(0_V)) = (-f(0_V)) + f(0_V) = 0_X$. Adding
$-f(0_V)$ to both sides of $f(0_V) = f(0_V) + f(0_V)$, we get $-f(0_V) + f(0_V) =
-f(0_V) + (f(0_V) + f(0_V))$. The left side is just $0_X$, while by associativity,
the right side is $(-f(0_V) + f(0_V)) + f(0_V) = 0_X + f(0_V)$. But by the definition
of $0_X$, $0_X + f(0_V)$ is just $f(0_V)$. We thus find $0_X = f(0_V)$, thereby
proving the lemma. $\Box$
Remark 3.78. Here is another way to prove the statement of the lemma
above: Pick any $v \in V$. Then $0_V = 0_F \cdot v$, so $f(0_V) = f(0_F \cdot v) = 0_F \cdot f(v) =
0_X$. (Here, the first equality is due to Remark 3.12.2, and the last-but-one
equality is because $f(rv) = rf(v)$ for any scalar $r$, since $f$ is a linear
transformation.)
Before proceeding to examples of linear transformations, let us consider
one remaining object, analogous to the kernel of a ring homomorphism. The
concept of a linear transformation was introduced to capture the notion of
operations on two $F$-vector spaces being the same except for dividing out
by some subspace. Just as with ring homomorphisms, the natural candidate
for this subspace is the following:

Definition 3.79. Given a linear transformation $f: V \to X$ between two $F$-vector
spaces, the kernel of $f$ is the set $\{u \in V \mid f(u) = 0_X\}$. It is denoted
$\ker(f)$.
As in the case of kernels of ring homomorphisms, the following statement
should come as no surprise:

Proposition 3.80. Let $V$ and $X$ be vector spaces over a field $F$. The kernel
of a linear transformation $f: V \to X$ is a subspace of $V$.

Proof. By Corollary 3.56, it is sufficient to check that $\ker(f)$ is a nonempty
subset of $V$ that is closed under linear combinations. Since $0_V \in \ker(f)$
(Lemma 3.77), $\ker(f)$ is nonempty. Now, for any $w_1, w_2$ in $\ker(f)$ and any
$r_1, r_2$ in $F$, we find $f(r_1 w_1 + r_2 w_2) = r_1 f(w_1) + r_2 f(w_2) = r_1 0_X + r_2 0_X =
0_X$. Hence $r_1 w_1 + r_2 w_2$ is indeed in the kernel of $f$, so $\ker(f)$ is closed under
linear combinations. $\Box$
Remark 3.81. As in the case of ring homomorphisms, for any linear transformation
$f: V \to X$ between two $F$-vector spaces, we will have $f(-v) =
-f(v)$. One proof is exactly the same as in Remark 2.91 in Chapter 2, and
this is not surprising: this is really a proof that in any group homomorphism
$f$ from a group $G$ to a group $H$, $f(g^{-1})$ will equal $(f(g))^{-1}$ for all $g \in G$ (see
Corollary 4.60 in Chapter 4). Another proof, of course, is to invoke scalar
multiplication and Remark 3.12.3: $f(-v) = f(-1 \cdot v) = -1 \cdot f(v) = -f(v)$.
We are now ready to study examples of linear transformations. The
first example is really the master-example: it provides an algorithm for
constructing linear transformations and leads to matrix representations of
linear transformations that are useful for computations:

Example 3.82. Master-Example of Linear Transformation: Let $V$ be an
$F$-vector space that is (for simplicity) finite-dimensional, and let $\{b_1, \dots, b_n\}$
be a basis. Let $X$ be any other $F$-vector space, and pick any $n$ vectors
$x_1, \dots, x_n$ of $X$ (they need not be distinct). Since every $v \in V$ is uniquely
expressible as $v = f_1 b_1 + \cdots + f_n b_n$ for suitable scalars $f_i$, we may define a
function $f: V \to X$ by the rule
\[
f(f_1 b_1 + \cdots + f_n b_n) = f_1 x_1 + \cdots + f_n x_n.
\]
One checks (Lemma 3.82.1) that $f$ is a linear transformation, and, conversely,
that every linear transformation from $V$ to $X$ arises this way: a linear
transformation is completely determined by what it does to a basis. When $X$
is also finite-dimensional, with basis $\{c_1, \dots, c_m\}$, we may record $f$ by a
matrix: writing $f(b_j) = a_{1,j} c_1 + \cdots + a_{m,j} c_m$, the $m \times n$ matrix $(a_{i,j})$
is called the matrix of $f$ with respect to the two bases; its $j$th column consists
of the coordinates of $f(b_j)$ in the basis $\{c_1, \dots, c_m\}$.

Question 3.82.1. What is the matrix of the linear transformation
$f: \mathbb{R}^2 \to \mathbb{R}^3$ considered above, with respect to the basis $\{i, w =
(1/\sqrt{2}, 1/\sqrt{2})\}$ of $\mathbb{R}^2$ (see Example 3.26) and the basis
$\{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$ of $\mathbb{R}^3$? (Hint: What does $f$ do
to $w$?)

Question 3.82.2. What are the coordinates, in the standard basis
for $\mathbb{R}^3$ (see Example 3.27), of the vector $xi + yj$, after it undergoes
the linear transformation $f: \mathbb{R}^2 \to \mathbb{R}^3$ given by the matrix
\[
\begin{pmatrix} a & b \\ c & d \\ e & f \end{pmatrix}
\]
where the matrix is written with respect to the basis $\{i, w =
(1/\sqrt{2}, 1/\sqrt{2})\}$ of $\mathbb{R}^2$ and the basis $\{(1, 0, 0), (1, 1, 0), (0, 1, 1)\}$
of $\mathbb{R}^3$? (See Exercise 3.27.1 for why $\{(1, 0, 0), (1, 1, 0), (0, 1, 1)\}$
is a basis of $\mathbb{R}^3$.)

Question 3.82.3. How will the treatment in this example change
if either $V$ or $X$ (or both) were to be infinite-dimensional $F$-vector
spaces? (See the remarks on page 155 in the notes for some hints.)
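As a small concrete instance of the matrix construction (our own example): if $f: \mathbb{R}^2 \to \mathbb{R}^2$ sends $i \mapsto 2i$ and $j \mapsto i + 3j$, then with respect to the basis $\{i, j\}$ on both sides, the matrix of $f$ is
\[
\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix},
\]
whose columns are precisely the coordinates of $f(i)$ and $f(j)$ in the basis $\{i, j\}$.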
Example 3.83. Let $V$ be an $F$-vector space. The map $f: V \to V$ that
sends any $v \in V$ to $0$ is a linear transformation.

Question 3.83.1. If $V$ is $n$-dimensional with basis $\{b_1, \dots, b_n\}$,
what is the matrix of $f$ with respect to this basis?
Example 3.84. Let $V$ be an $F$-vector space, and let $W$ be a subspace. The
map $f: W \to V$ defined by $f(w) = w$ is a linear transformation.

Question 3.84.1. Assume that $W$ is $m$-dimensional and $V$ is $n$-dimensional.
Pick a basis $B = \{b_1, \dots, b_m\}$ of $W$ and expand to a
basis $C = \{b_1, \dots, b_m, b_{m+1}, \dots, b_n\}$ of $V$. What is the matrix of
$f$ with respect to the basis $B$ of $W$ and the basis $C$ of $V$?
Example 3.85. Let $F$ be a field, and view $M_n(F)$ as a vector space over
$F$ (see Example 3.5). Now view $F$ as an $F$-vector space (see Example
3.7: note that $F$ is trivially an extension field of $F$). Then the function
$f: M_n(F) \to F$ that sends a matrix to its trace is a linear transformation.
(Recall that the trace of a matrix is the sum of its diagonal entries.)

To prove this, note that this is really a function that sends basis vectors
of the form $e_{i,i}$ to $1$ and $e_{i,j}$ ($i \neq j$) to $0$, and an arbitrary matrix
$\sum_{i,j} m_{i,j} e_{i,j}$ to $m_{1,1} \cdot 1 + \cdots + m_{n,n} \cdot 1$. Now apply Lemma 3.82.1 to conclude
that $f$ must be a linear transformation.

See Exercise 3.100 at the end of the chapter.
Example 3.86. Let $V$ be a vector space over a field $F$ and let $W$ be a subspace.
Assume that $V$ is finite-dimensional (for simplicity). Let $\dim_F(V) =
n$ and $\dim_F(W) = m$. Let $\{b_1, \dots, b_m\}$ be a basis for $W$, and let us expand
this to a basis $\{b_1, \dots, b_m, b_{m+1}, \dots, b_n\}$ (see Proposition 3.49). Given any
$v \in V$, we may therefore write $v = f_1 b_1 + \cdots + f_m b_m + f_{m+1} b_{m+1} + \cdots + f_n b_n$
for unique scalars $f_i$. The map $f: V \to V$ that sends $v$ to $f_1 b_1 + \cdots + f_m b_m$
is then a linear transformation: a projection of $V$ onto the subspace $W$.

Along the way to the main theorem of this section, one checks that the
image $f(V)$ of a linear transformation $f: V \to X$ is always a subspace of $X$
(Lemma 3.88), that, as with rings, a linear transformation that is both injective
and surjective is called an isomorphism of the two vector spaces, and that an
injective linear transformation carries linearly independent sets to linearly
independent sets while a surjective one carries spanning sets to spanning sets
(Lemmas 3.90 and 3.91). The analog of the fundamental theorem of ring
homomorphisms then reads:

Theorem 3.94. Let $V$ and $X$ be vector spaces over a field $F$, and let
$f: V \to X$ be a linear transformation. Then the function $\bar{f}: V/\ker(f) \to f(V)$
defined by $\bar{f}(v + \ker(f)) = f(v)$ is an isomorphism. In particular,
$V/\ker(f) \cong f(V)$.

Proof. First, we check that $\bar{f}$ is well-defined, that is, that its proposed value
at a coset does not depend on the representative chosen: if $u + \ker(f) =
v + \ker(f)$, then $u - v \in \ker(f)$, so $f(u) - f(v) = f(u - v) = 0_X$, and hence
$f(u) = f(v)$. Thus $\bar{f}(u + \ker(f)) = \bar{f}(v + \ker(f))$, i.e.,
$\bar{f}$ is well-defined.

Now let us apply Lemma 3.76: We have
\[
\bar{f}\big(r_1(v_1 + \ker(f)) + r_2(v_2 + \ker(f))\big) =
\bar{f}\big((r_1 v_1 + \ker(f)) + (r_2 v_2 + \ker(f))\big) =
\bar{f}\big((r_1 v_1 + r_2 v_2) + \ker(f)\big) = f(r_1 v_1 +
r_2 v_2) = r_1 f(v_1) + r_2 f(v_2) = r_1 \bar{f}(v_1 + \ker(f)) + r_2 \bar{f}(v_2 + \ker(f)).
\]
Hence $\bar{f}$ is a linear transformation.

Exercise 3.94.1. Justify all the equalities above.
We check that $\bar{f}$ is surjective as a function from $V/\ker(f)$ to $f(V)$. Note
that any element of $f(V)$ is, by definition, of the form $f(v)$ for some $v \in V$.
But then, by the way we have defined $\bar{f}$, we find $f(v) = \bar{f}(v + \ker(f))$, so
indeed $\bar{f}$ is surjective.

Finally, we check that $\bar{f}$ is injective. Suppose that $v + \ker(f)$ is in $\ker(\bar{f})$.
Thus, $\bar{f}(v + \ker(f)) = 0_X$. Since $\bar{f}(v + \ker(f)) = f(v)$, we find $f(v) = 0_X$.
Hence $v \in \ker(f)$. But this means that the coset $v + \ker(f)$ equals the coset
$\ker(f)$ (why?), so $v + \ker(f)$ is the zero element of $V/\ker(f)$. Thus $\bar{f}$ is
injective.

Putting this together, we find that $\bar{f}$ provides an isomorphism between
$V/\ker(f)$ and $f(V)$. $\Box$
We now study the relation between the dimensions of $V$, $\ker(f)$, and
$f(V)$ in the case where $V$ is finite-dimensional. But first, let us state a
consequence of Lemmas 3.90 and 3.91:

Corollary 3.95. Let $V$ and $X$ be vector spaces over a field $F$ and let $f:
V \to X$ be a linear transformation. If $f$ is an isomorphism between $V$ and
$X$, then $f$ sends any basis of $V$ to a basis of $X$.

Exercise 3.95.1. Convince yourselves that this follows from Lemmas
3.90 and 3.91!
We are now ready to prove:

Theorem 3.96. Let $V$ and $X$ be vector spaces over a field $F$ and let $f:
V \to X$ be a linear transformation. Assume that $V$ is finite-dimensional.
Then $\dim_F(V) = \dim_F(f(V)) + \dim_F(\ker(f))$.

Proof. The proof is a combination of Theorem 3.94, Lemma 3.72, and Corollary
3.95. Start with a basis $\{b_1, \dots, b_m\}$ of $\ker(f)$, and expand this to
a basis $\{b_1, \dots, b_m, b_{m+1}, \dots, b_n\}$ of $V$. (Thus, $\dim_F(\ker(f)) = m$ and
$\dim_F(V) = n$.) Then, according to Lemma 3.72, the set $\{b_{m+1} + \ker(f), \dots, b_n +
\ker(f)\}$ is a basis for $V/\ker(f)$. By Theorem 3.94, the function $\bar{f}: V/\ker(f) \to
f(V)$ defined by $\bar{f}(v + \ker(f)) = f(v)$ is an isomorphism, so by Corollary
3.95 the set of vectors $\{\bar{f}(b_{m+1} + \ker(f)), \dots, \bar{f}(b_n + \ker(f))\}$ forms a basis
for $f(V)$. In particular, the dimension of $f(V)$ must be the size of this set,
which is $n - m$. It follows that $\dim_F(V) = \dim_F(f(V)) + \dim_F(\ker(f))$.
$\Box$
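As a quick check in a familiar case: let $f: \mathbb{R}^3 \to \mathbb{R}^3$ send $ai + bj + ck$ to $ai + bj$. Then $f(\mathbb{R}^3)$ is the $xy$-plane and $\ker(f)$ is the $z$-axis, so
\[
\dim(\mathbb{R}^3) = \dim(f(\mathbb{R}^3)) + \dim(\ker(f)) = 2 + 1 = 3,
\]
as the theorem predicts.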
3.5 Further Exercises

Exercise 3.97. Starting from the vector space axioms, prove that the properties
listed in Remark 3.12 hold for all vector spaces. (Hint: You should get ideas
from the solutions to the corresponding Exercise 2.114 of Chapter 2: the proofs
of the first three properties are quite similar in spirit. As for the last property,
look to $f^{-1}$ for help!)
Exercise 3.98. Prove that the polynomials $1, 1 + x, (1 + x)^2, (1 + x)^3, \dots$
also form a basis for $\mathbb{R}[x]$ as an $\mathbb{R}$-vector space. (Hint: To show that these
polynomials span $\mathbb{R}[x]$, it is sufficient to show that the polynomials $1, x, x^2,
\dots$ are in the linear span (see Example 3.65 above) of $1, 1 + x, (1 + x)^2,
(1 + x)^3, \dots$ (Why?) The vector $1$ is of course in the linear span. Assuming
inductively that the vectors $1, x, \dots,$ and $x^{n-1}$ are in the linear span, show that
$x^n$ is also in the linear span by considering the binomial expansion of $(1 + x)^n$.
As for linear independence, suppose that $\sum_{i=0}^{n} d_i (1 + x)^i = 0$. You may assume
that $d_n \neq 0$ (why?). Now expand each term $(1 + x)^i$ above and consider the
coefficient of $x^n$. What do you find?)

If you find the hint too computational, you can also establish this result by
invoking Exercise 3.106 ahead and Exercise 2.109.2 in Chapter 2. (However,
note that Exercise 2.109.2 in turn is computational, so this merely shifts all the
computations to a different place!)
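To see the inductive step of the hint concretely: the binomial expansion gives
\[
x^n = (1 + x)^n - \sum_{i=0}^{n-1} \binom{n}{i} x^i,
\]
and each $x^i$ with $i < n$ is in the linear span by the inductive hypothesis, so $x^n$ is too.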
Exercise 3.99. Show that the matrices $e_{i,j}$ and $\sqrt{2}\,e_{i,j}$ ($1 \leq i, j \leq 2$) form a
basis for $M_2(\mathbb{Q}[\sqrt{2}])$ considered as a vector space over $\mathbb{Q}$. (Here $\sqrt{2}\,e_{i,j}$ is the $2 \times 2$ matrix
with $\sqrt{2}$ in the $(i, j)$ slot, and zeros in the remaining slots.) Now discover a
basis for $M_2(\mathbb{C})$ considered as a vector space over $\mathbb{R}$.
Exercise 3.100. Show that the set of all matrices in $M_n(\mathbb{R})$ whose trace is
zero is a subspace of $M_n(\mathbb{R})$ by exhibiting this space as the kernel of a suitable
homomorphism that we have considered in the text. Use Theorem 3.96 to prove
that this subspace has dimension $n^2 - 1$. Discover a basis for this subspace.
Exercise 3.101. Let $V$ be an $F$-vector space. So far, we have considered
individual linear transformations of the form $f: V \to V$; this exercise deals
with the collection of all such $F$-linear transformations. Let $\mathrm{End}_F(V)$ denote
the set of all $F$-linear transformations from $V$ to $V$. (End is short for the
word endomorphism, which is another word for a homomorphism from one
(abelian) group to itself, while the subscript $F$ indicates that we are considering
those (abelian) group homomorphisms that are in addition $F$-linear; see (5) in
Remark 3.75 earlier in this chapter.)

1. Let $f$ and $g$ be two elements in $\mathrm{End}_F(V)$. Consider the function, suggestively
denoted $f + g$, that is obtained by defining $(f + g)(v) = f(v) + g(v)$.
Show that $f + g$ is also an $F$-linear transformation, and hence is an element
of $\mathrm{End}_F(V)$.

2. Show that $\mathrm{End}_F(V)$, with this definition of addition of two linear transformations,
is an abelian group. What is the identity element in this
group? How do you define the inverse with respect to addition of any
$f \in \mathrm{End}_F(V)$?

3. Let $f \circ g$ denote the usual composition of functions on $V$, defined by
$(f \circ g)(v) = f(g(v))$. Show that $f \circ g$ is also an $F$-linear transformation,
and hence is an element of $\mathrm{End}_F(V)$.

4. Show that by thinking of function composition as a multiplication
operation on $\mathrm{End}_F(V)$, the set $(\mathrm{End}_F(V), +, \circ)$ becomes a ring. What
is the multiplicative identity in this ring? Is this ring commutative? (What
if the dimension of $V$ is 1?)
Exercise 3.102. Prove that an element f ∈ End_F(V) (see Exercise 3.101 above) is invertible if and only if f is an isomorphism. (Hint: For one direction of this problem, Remark 3.93.1 and Exercise 3.93.1 may be helpful.)
Exercise 3.103. Now that you have shown that End_F(V) is a ring in Exercise 3.101 above, here is an example that shows ab = 1 doesn't imply ba = 1 in an arbitrary ring! (See Definition 2.44 in Chapter 2.)
Let V be a vector space with a countably infinite basis v_i, i = 1, 2, . . . (For example, see Exercise 3.63.2 earlier in this chapter.) Let T be the F-linear transformation that sends v_i to v_{i+1} for i = 1, 2, . . . , and let S be the linear transformation that sends v_i to v_{i−1} for i = 1, 2, . . . , with the understanding that v_0 means the zero vector. (Why are these linear transformations? See the remarks on page 155 on how to define linear transformations between infinite-dimensional spaces.) Show that in the ring End_F(V), the product ST = 1 but the product TS sends v_1 to zero and hence is not 1.
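The following minimal sketch (not from the text) mimics S and T directly: since every vector is a finite linear combination of the v_i, a dictionary of coefficients represents it exactly.

    # A vector c_1 v_1 + c_2 v_2 + ... is stored as {i: c_i} (finitely many terms).
    def T(v):
        """T sends v_i to v_{i+1}."""
        return {i + 1: c for i, c in v.items()}

    def S(v):
        """S sends v_i to v_{i-1}, with v_0 understood as the zero vector."""
        return {i - 1: c for i, c in v.items() if i > 1}

    v1 = {1: 1}                # the basis vector v_1
    print(S(T(v1)))            # {1: 1}: ST acts as the identity on v_1
    print(T(S(v1)))            # {}: TS sends v_1 to zero, so TS is not 1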
Exercise 3.104. Let V be an F-vector space of dimension n with basis b_1, . . . , b_n. Recall from Example 3.82 how one can assign to each F-linear transformation T on V the (n × n) matrix of T with respect to the basis b_1, . . . , b_n. Write M_T for the matrix in M_n(F) that corresponds to T under this assignment. Study the addition and multiplication operations on End_F(V) in Exercise 3.101 above, and prove that the map M : End_F(V) → M_n(F) that sends T to M_T provides a ring isomorphism between End_F(V) and M_n(F).
Exercise 3.105. Let K/F be a field extension. By Example 3.7, K may be viewed as an F-vector space. Assume that the dimension of K as an F-vector space is n. This exercise shows how K may be realized as a subring of M_n(F), thus generalizing Example 2.108 in Chapter 2.
1. For each k ∈ K, write l_k for the map from K to K that sends any x ∈ K to kx. Show that l_k is an F-linear transformation from K to K.
2. Recall from Exercise 3.101 that End_F(V), the set of all F-linear transformations of an F-vector space V, is a ring, under the operation of composition of functions. In particular, viewing K as an F-vector space, End_F(K) is a ring, and the linear transform l_k of Part (1) above is an element of this ring. Show that the map l : K → End_F(K) that sends k ∈ K to the linear transform l_k is an injective ring homomorphism from K to End_F(K).
3. Let b_1, . . . , b_n ∈ K be an F-basis of K. The linear transformation l_k corresponds to a matrix M_{l_k} with respect to the basis b_1, . . . , b_n (as in Example 3.82). Show that the map from K to M_n(F) that sends k to M_{l_k} is a ring homomorphism. (Hint: By Exercise 3.104 above, End_F(K) is isomorphic to M_n(F) via the map M that sends a linear transform T to its matrix M_T written in the basis b_1, . . . , b_n. Compose the map l : K → End_F(K) with the map M : End_F(K) → M_n(F).)
4. Show that this ring homomorphism in (3) above is injective. Conclude that K is isomorphic to a subring of M_n(F) using Lemma 2.103 and Theorem 2.110 of Chapter 2.
The image of K under the homomorphism in (3) above is called the regular representation of K in M_n(F).
Exercise 3.106. Let R be a ring containing a field F, so R is an F-vector space (see Example 3.8 earlier in this chapter). Let f : R → R be a ring isomorphism that acts as the identity on F (i.e., f(r) = r for all r ∈ F). Show that if B ⊆ R is an F-basis of R, then the set f(B) = {f(b) | b ∈ B} is also an F-basis of R.
Exercise 3.107. Recall from Exercise 2.123 in Chapter 2 that the set S of all functions from R to R is a ring under the operations of pointwise addition and multiplication of functions. Since, by that same exercise, the set of constant functions is a subring of S that is isomorphic to R, S carries the natural structure of an R-vector space. (Explicitly, the vector space structure is given by the map R × S → S that sends (r, f) to the function s_r · f, where s_r is as in Exercise 2.123. More simply, however, the product of the real number r and the function f(x) is the function, suggestively denoted r·f, defined by (r·f)(x) = rf(x).)
1. Which of the following are subspaces of S?
(a) {f ∈ S | f(1) = 0}
(b) {f ∈ S | f(0) = 1}
(c) The set of all constant functions.
(d) {f ∈ S | f(x) ≥ 0 for all x ∈ R}
2. Show that the set {1, sin^2(x), cos^2(x)} is linearly dependent.
3. Is the set {e^x, 1, x, x^2, x^3, . . . } linearly dependent or independent?
Exercise 3.108. Prove Proposition 3.49 without the assumption that V is finite-dimensional. (See the notes on page 224 in Chapter B in the Appendix for hints.)
Exercise 3.109. This exercise shows that Lemma 3.72 holds even for infinite-dimensional spaces. Let V be a vector space over a field F and let W be a subspace. Let B be a basis for W. Expand this to a basis S of V (see Proposition 3.49, as well as the remarks on page 224 in Chapter B in the Appendix). Write T for S − B (so S is the disjoint union of B and T). Prove that the set (of equivalence classes of vectors) {t + W | t ∈ T} is a basis for the quotient space V/W.
Exercise 3.110. If V is a finite-dimensional vector space and if W is a subspace of V, prove that the dimension of W is no bigger than the dimension of V. Now prove that if the dimensions of W and V are equal, then W = V.
Exercise 3.111. Let V be a vector space over a field F, and let U and W be two subspaces.
1. Show that U ∩ W is a subspace of V. (Is U ∪ W a subspace of V?)
2. Denote by U + W the set {u + w | u ∈ U and w ∈ W}. Show that U + W is a subspace of V.
3. Now assume that V is finite-dimensional. The aim of this part is to establish the following:

    dim(U + W) = dim(U) + dim(W) − dim(U ∩ W)

(a) Let v_1, . . . , v_p be a basis for U ∩ W (so dim(U ∩ W) = p). Expand this to a basis v_1, . . . , v_p, u_1, . . . , u_q of U, and also to a basis v_1, . . . , v_p, w_1, . . . , w_r of W (so dim(U) = p + q and dim(W) = p + r). Show that the set B = {v_1, . . . , v_p, u_1, . . . , u_q, w_1, . . . , w_r} spans U + W.
(b) Show that the set B is linearly independent. (Hint: Assume that we have the relation f_1 v_1 + ··· + f_p v_p + g_1 u_1 + ··· + g_q u_q + h_1 w_1 + ··· + h_r w_r = 0. Rewrite this as g_1 u_1 + ··· + g_q u_q = −(f_1 v_1 + ··· + f_p v_p + h_1 w_1 + ··· + h_r w_r). Observe that the left side is in U while the right is in W, so g_1 u_1 + ··· + g_q u_q must be in U ∩ W. Hence, g_1 u_1 + ··· + g_q u_q = j_1 v_1 + ··· + j_p v_p for some scalars j_1, . . . , j_p. Why does this show that the g_i must be zero? Now proceed to show that the f_i and the h_i must also be zero.)
(c) Conclude that dim(U + W) = dim(U) + dim(W) − dim(U ∩ W).
(d) Prove that any two 2-dimensional subspaces of R^3 must intersect in a space of dimension at least 1.
Exercise 3.112. Show that the nth Bernstein polynomials B_i^{(n)}(x) = C(n, i) x^i (1 − x)^{n−i}, (i = 0, 1, . . . , n), where C(n, i) denotes the binomial coefficient, form a basis for R_n[x] (n ≥ 1) as follows:
1. Show that 1 = ∑_{i=0}^{n} B_i^{(n)}.
2. The equation in part 1 above continues to hold if we replace n by n − 1 everywhere. (Why?) Make this replacement, multiply throughout by x, and derive the relation x = ∑_{i=0}^{n} (i/n) B_i^{(n)}. (Hint: you will need to use the relation C(n−1, i−1) = (i/n) C(n, i). Why does this last relation hold?)
3. Similarly, for k = 2, . . . , n−1, show that x^k = ∑_{i=0}^{n} [i(i − 1) ··· (i − k + 1) / (n(n − 1) ··· (n − k + 1))] B_i^{(n)}.
4. Now conclude that the B_i^{(n)} span R_n[x].
5. Use Proposition 3.51 above to conclude that the B_i^{(n)} form a basis.
These Bernstein polynomials find applications in diverse areas of mathematics, as well as in various applied fields, such as computer graphics! For instance, in advanced calculus, they are useful in showing that any continuous function on an interval [a, b] can be approximated arbitrarily closely by a polynomial function. (This is known as the Weierstrass Approximation Theorem.) In computer graphics, they are used to fit, through a given set of points, a curve that is smooth and has "minimal wiggle," and as well, to provide convenient "handles" by which the user can then control the shape of this curve.
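If you want to check parts 1 through 3 by machine for a small n, the following sympy sketch (an illustration only) does so for n = 3:

    from sympy import symbols, binomial, expand, Rational

    x = symbols('x')
    n = 3
    B = [binomial(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

    print(expand(sum(B)))                                             # 1
    print(expand(sum(Rational(i, n) * B[i] for i in range(n + 1))))   # x
    print(expand(sum(Rational(i * (i - 1), n * (n - 1)) * B[i]
                     for i in range(n + 1))))                         # x**2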
Notes
Remarks on Example 3.5 It is worth remarking that our definition of scalar multiplication is a very natural one. First, observe that we can consider R to be a subring of M_n(R) in the following way: the set of matrices of the form diag(r), as r ranges through R, is essentially the same as R (see Example 2.106 in Chapter 2). (Observe that this makes the set of diagonal matrices of the form diag(r) a field in its own right!) Under this identification of r ∈ R with diag(r), what is the most natural way to multiply a scalar r and a vector (a_{i,j})? Well, we think of r as diag(r), and then define r · (a_{i,j}) as just the usual product of the two matrices diag(r) and (a_{i,j}). But, as you can check easily, the product of diag(r) and (a_{i,j}) is just (r a_{i,j})! It is in this sense that our definition of scalar multiplication is natural: it arises from the rules of matrix multiplication itself. Notice that once R has been identified with the subring of M_n(R) consisting of the set of matrices of the form diag(r), this example is just another special case of Example 3.8.
Remarks on Example 3.10 (V, +) remains an abelian group. This does not change when we restrict our attention to the subfield F. So we only need to worry about what the new scalar multiplication ought to be. But there is a natural way to multiply any element f of F with any element v of V: simply consider f as an element of K, and use the multiplication already defined between elements of K and elements of V! The scalar multiplication axioms clearly hold: for any f and g in F and any v and w in V, we may first think of f and g as elements of K, and since the scalar multiplication axioms hold for V viewed as a vector space over K, we certainly have f · (v + w) = f · v + f · w, (f + g) · v = f · v + g · v, (fg) · v = f · (g · v), and 1 · v = v.
Remarks on Example 3.34 This example is a bit tricky. Why are the e_i not a basis? They are certainly linearly independent, since if ∑_{i=0}^{n} c_i e_i = 0 for some scalars c_i ∈ F, then the tuple (c_0, c_1, . . . , c_n, 0, 0, . . . ) must be zero, but a tuple is zero if and only if each of its components is zero. Thus, each of c_0, c_1, . . . , c_n must be zero, proving linear independence. However, the e_i do not span ∏_{i=0}^{∞} F, contrary to what one might expect. To understand this, let us look at something that has been implicit all along in the definition of linear combination. The e_i would span ∏_{i=0}^{∞} F if every vector in ∏_{i=0}^{∞} F could be written as a linear combination of elements of the set {e_0, e_1, e_2, . . . }. Now notice that whenever we consider linear combinations, we only consider sums of a finite number of terms. Hence, a linear combination of elements of the set {e_0, e_1, e_2, . . . } looks like c_{i_1} e_{i_1} + c_{i_2} e_{i_2} + ··· + c_{i_n} e_{i_n} for some finite n. It is clear that any vector that is expressible in such a manner will have only finitely many components that are nonzero. (These will be at most the ones at the slots i_1, i_2, . . . , i_n; all other components will be zero.) Consequently, the vectors in ∏_{i=0}^{∞} F in which infinitely many components are nonzero (for example, the vector (1, 1, 1, . . . )) cannot be expressed as linear combinations of the e_i. On the other hand, see Exercise 3.63.2.
It is worth pointing out that infinite sums have no algebraic meaning. Addition is, to begin with, a binary operation, that is, it is a rule that assigns to a_1 and a_2 the element a_1 + a_2. This can be extended inductively to a finite number of a_i: for instance, the sum a_1 + a_2 + a_3 + a_4 + a_5 is defined as (((a_1 + a_2) + a_3) + a_4) + a_5. (In other words, we first determine a_1 + a_2, then we add a_3 to this, then a_4 to what we get from adding a_3, and then finally a_5 to what we got at the previous step.) While this inductive definition makes sense for a finite number of terms, it makes no sense for an infinite number of terms. To interpret infinite sums of elements, we really need to have a notion of convergence (such as the ones you may have seen in a course on real analysis). Such notions may not exist for arbitrary fields.
Remarks on the proof of Theorem 3.70 The reason why the proofs that (V/W, +) and (R/I, +) are abelian groups are so similar is that what we are essentially proving in both cases is that if (G, +) is an abelian group and if H is a subgroup, then the set of equivalence classes of G under the relation g_1 ∼ g_2 if and only if g_1 − g_2 ∈ H, with the operation [g_1] + [g_2] = [g_1 + g_2], is indeed an abelian group in its own right! We will take this up in Chapter 4 ahead.
Remarks on linear transformations f : V → X when V or X are not necessarily finite-dimensional Similar considerations will apply: we let S = {b_α | α ∈ B} be a basis of V (here B is some index set), and we let w_α, α ∈ B, be arbitrary vectors in X. Every vector in V can be uniquely written as r_1 b_1 + ··· + r_k b_k, where the r_i are scalars from the field F and b_1, . . . , b_k is some finite subset of S. Then, just as in the finite-dimensional case, the function f : V → X that sends r_1 b_1 + ··· + r_k b_k to r_1 w_1 + ··· + r_k w_k is a linear transformation, and all linear transformations from V to X are given in this way. Let T = {c_β | β ∈ C} be a given basis of X (again, C is some index set). The matrix representation of f with respect to the bases S of V and T of X is a |C| × |B| matrix (where |B| and |C| are the cardinalities of the possibly infinite sets B and C), with the rows indexed by the basis vectors in T and the columns indexed by the basis vectors in S. The column with index α represents the image of b_α under f, written as a column vector, whose entry in the row indexed by β is the coefficient of c_β in the expression of f(b_α).
Chapter 4
Groups
4.1 Groups: Denition and Examples
Of all the algebraic objects that we have considered in this course (groups, rings, fields, and vector spaces), groups are technically the most elementary: they are sets with just one binary operation, and there are just three axioms that govern them: (i) the binary operation should be associative, (ii) there should be an identity for this operation, and (iii) every element should have an inverse with respect to this operation (see Definition 2.2 in Chapter 2). Yet, we have reserved our study of groups to the last and have started with rings instead. The primary reason for this is that even if they are technically more complicated than groups, rings are a much more familiar object to most students who are seeing abstract algebra for the first time: after all, the "number systems" that we have grown up with and are so intimate with, namely the integers, the rationals, the reals, and the complexes, are all examples of rings. Rings are thus, for many, a natural entry point into algebra. In the same vein, examples like R^2 and R^3 make vector spaces also a familiar object, and their study is therefore a natural candidate to follow our study of rings.
However, let neither their elementary definition nor the location of this
chapter in this book lull you into underestimating the importance of groups: groups are vitally important in mathematics, and they show up in just about every nook and corner of the subject. Although this may not be obvious from the examples that we have seen so far (which have all been groups of the form (R, +), where R is a ring and + is the addition operation on the ring, or of the form (R . . .

. . . (i) The composition of two permutations of {1, 2, 3} is another permutation of {1, 2, 3}: we computed this out explicitly above, but we already would have known this from an earlier exposure to functions: if f : S → S and g : S → S are functions (here S is some set) and if both f and g are bijective, then both compositions (g ∘ f) and (f ∘ g) from S to S are also bijective. (ii) Composition of functions is an associative operation: this too would be familiar to us from our earlier exposure to functions: if f : S → S, g : S → S, and h : S → S are three functions on some set S, then for any s ∈ S, ((f ∘ g) ∘ h)(s) = (f ∘ g)(h(s)) = f(g(h(s))), while (f ∘ (g ∘ h))(s) = f((g ∘ h)(s)) = f(g(h(s))), so indeed (f ∘ g) ∘ h = f ∘ (g ∘ h). (iii) The permutation id acts as the identity element: this is clear from the first row and the first column of the table above. And finally, (iv) every permutation of {1, 2, 3} has an inverse: r_1 ∘ r_2 = r_2 ∘ r_1 = id, id ∘ id = id, f_1 ∘ f_1 = id, f_2 ∘ f_2 = id, and f_3 ∘ f_3 = id. Hence, the set of permutations of {1, 2, 3} forms a group under composition. We denote this group as S_3, and call it the symmetric group on three elements. (S_3 can be interpreted as the set of symmetries of {1, 2, 3} with the trivial structure: see Example 4.88 in the notes at the end of the chapter.)
Observe something about this group: it is not a commutative group! For instance, as we observed above, r_1 ∘ f_1 = f_3 while f_1 ∘ r_1 = f_2. We say that the group is nonabelian.
From now on we will suppress the ∘ symbol, and simply write fg for the composition f ∘ g. Not only is there less writing involved, but it is notation that we are used to: it is the notation we use for multiplication. Continuing the analogy, we write f ∘ f as f^2, and so on, and we sometimes write 1 for the identity (see Remark 4.14 ahead for more on the notation used for the identity and the group operation). In this notation, note that r_1^3 = r_2^3 = 1, f_1^2 = f_2^2 = f_3^2 = 1.
The table such as the one above that describes how pairs of elements in a group compose under the given binary operation is called the group table for the group.
Exercise 4.2.1. Use the group table to show that every element of S_3 can be written as r_1^i f_1^j for uniquely determined integers i ∈ {0, 1, 2} and j ∈ {0, 1}.
Example 4.3. Just as we considered the set of permutations of the set {1, 2, 3} above, we can consider, for any integer n ≥ 1, the permutations of the set {1, 2, . . . , n}. This set forms a group under composition, just as S_2 and S_3 did above.
Definition 4.3.1. The set of permutations of {1, 2, . . . , n}, which forms a group under composition, is denoted S_n and is called the symmetric group on n elements.
Exercise 4.3.1. Write down the set of permutations of the set {1, 2} and construct the table that describes how the permutations compose. Verify that the set of permutations of {1, 2} forms a group. Is it abelian? This group is denoted S_2, and called the symmetric group on two elements.
Exercise 4.3.2. Compare the group table of S_2 that you get in the exercise above with the table for (Z/2Z, +) on page 36. What similarities do you see?
Exercise 4.3.3. Prove that S_n has n! elements.
Exercise 4.3.4. Find an element g ∈ S_n such that g^n = 1 but g^t ≠ 1 (see Remark 4.14 ahead on notation for the identity element) for any positive integer t < n.
Here is an alternative notation that is used for a special class of permutations, which we will call the cycle notation. Working for the sake of concreteness in {1, 2, 3, 4, 5}, consider the permutation that sends 1 to 3, 3 to 4, and 4 back to 1, and acts as the identity on the remaining elements 2 and 5. (This is the permutation we have denoted up to now as (1 2 3 4 5 / 3 2 4 1 5).) Notice the cyclic nature of this permutation: it moves 1 to 3 to 4 back to 1, and leaves 2 and 5 untouched. We will use the notation (1, 3, 4) for this special permutation and call it a 3-cycle. In general, if a_1, . . . , a_d are distinct elements of the set {1, 2, . . . , n} (so 1 ≤ d ≤ n), we will denote by (a_1, a_2, . . . , a_d) the permutation that sends a_1 to a_2, a_2 to a_3, . . . , a_{d−1} to a_d, a_d back to a_1, and acts as the identity on all elements of {1, 2, . . . , n} other than these a_i. We will refer to (a_1, a_2, . . . , a_d) as a d-cycle or a cycle of length d. A 2-cycle (a_1, a_2) is known as a transposition, since it only swaps a_1 and a_2 and leaves all other elements unchanged. Of course, a 1-cycle (a_1) is really just the identity element, since it sends a_1 to a_1 and acts as the identity on all other elements of {1, 2, . . . , n}.
Notice something about cycles: the cycle (1, 3, 4) is the same as (3, 4, 1), as they both clearly represent the same permutation. More generally, the cycle (a_1, a_2, . . . , a_d) is the same as (a_2, a_3, . . . , a_d, a_1), which is the same as (a_3, a_4, . . . , a_d, a_1, a_2), etc. We will refer to these different representations of the same cycle as internal cyclic rearrangements.
Since a d-cycle is just a special case of a permutation, it makes perfect sense to compose a d-cycle and an e-cycle: it is just the composition of two (albeit special) permutations. For instance, in any S_n (for n ≥ 3), we have the relation (1, 3)(1, 2) = (1, 2, 3) (check!). (We will see shortly, in Corollary 4.3.1 ahead, that every permutation in S_n can be factored into transpositions.)
Exercise 4.3.5. Write the 4-cycle (1, 2, 3, 4) of S_n (here n is at least 4) as a product of three transpositions.
Exercise 4.3.6. Show that any k-cycle in S_n (here n ≥ k ≥ 2) can be written as the product of k − 1 transpositions.
Two cycles (a_1, . . . , a_d) and (b_1, . . . , b_e) are said to be disjoint if none of the integers a_1, . . . , a_d appear among the integers b_1, . . . , b_e and none of the integers b_1, . . . , b_e appear among the integers a_1, . . . , a_d. For example, in S_6, the cycles s = (1, 4, 5) and t = (2, 3) are disjoint. Notice something with this pair of permutations: s and t commute! Let us rewrite s and t in the stack notation and compute:

    st = (1 2 3 4 5 6 / 4 2 3 5 1 6)(1 2 3 4 5 6 / 1 3 2 4 5 6) = (1 2 3 4 5 6 / 4 3 2 5 1 6)

    ts = (1 2 3 4 5 6 / 1 3 2 4 5 6)(1 2 3 4 5 6 / 4 2 3 5 1 6) = (1 2 3 4 5 6 / 4 3 2 5 1 6)

This computation is of course very explicit, but the intuitive idea behind why s and t commute is the following: s only moves the elements 1, 4, and 5 among themselves, and in particular, it leaves the elements 2 and 3 untouched. On the other hand, t swaps the elements 2 and 3, and leaves the elements 1, 4 and 5 untouched. Since s and t operate on disjoint sets of elements, the action of s is not affected by t and the action of t is not affected by s. In particular, it makes no difference whether we perform s first and then t or the other way around.
Essentially these same ideas lead to the following:
Lemma 4.3.1. Let s and t be any two disjoint cycles in S_n. Then s and t commute.
Exercise 4.3.7. Prove this assertion carefully by writing s = (a_1, . . . , a_d) and t = (b_1, . . . , b_e) for disjoint integers a_1, . . . , a_d, b_1, . . . , b_e, and writing out the effect of both st and ts on each integer 1, . . . , n. (See the notes on page 210 for some hints.)
Now let us consider another feature of permutations: it turns out that any permutation can be decomposed into a product of disjoint cycles! To take an example, consider the permutation s = (1 2 3 4 5 6 / 3 2 1 6 4 5). Let us take the element 1 and follow it under repeated action of s: 1 goes to 3, which goes back to 1. Thus, the effect of s on the subset {1, 3} is to act as a swap, or a transposition. Now pick another element not equal to either 1 or 3, say 2, and follow it under repeated action of s: 2 stays untouched. Thus, the effect of s on the subset {2} is to act as the identity. So now, pick an element not equal to either 1 or 3 or 2, say 4: we find 4 goes to 6, which goes to 5, which then goes back to 4. Hence, the effect of s on the subset {4, 5, 6} is to act as the 3-cycle (4, 6, 5). It is now easy to see, either by explicitly computing, or by using the same intuition as we did above for why disjoint cycles commute, that s = (1 2 3 4 5 6 / 3 2 1 6 4 5) = (4, 6, 5)(2)(1, 3). (Since (2) is just the identity permutation, it is typically omitted, and this product is written as (4, 6, 5)(1, 3).)
Notice that since disjoint cycles commute, (4, 6, 5)(1, 3) is the same as (1, 3)(4, 6, 5). Notice, too, that had we started with, for instance, 6 and followed it around, and then picked 3 and followed it around, we would have found s = (3, 1)(6, 5, 4). Any other decomposition of s into disjoint cycles must be related to the first decomposition s = (4, 6, 5)(1, 3) in a similar manner as these two above: either the cycles could have been swapped, or, internally, a cycle could have been rearranged cyclically (such as (6, 5, 4) instead of (4, 6, 5)). This is because the product of disjoint cycles simply follows, one by one, the various elements of {1, 2, 3, 4, 5, 6} under repeated action of s, and no matter in which manner the cycles are written, the repeated action of s must be the same.
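The procedure just described is easy to automate. Here is a short sketch (not from the text) that follows each element until it returns, exactly as above:

    def cycle_decomposition(perm):
        """perm maps i -> perm[i] on {1, ..., n}; perm[0] is unused padding."""
        n = len(perm) - 1
        seen, cycles = set(), []
        for start in range(1, n + 1):
            if start in seen:
                continue
            cycle, j = [], start
            while j not in seen:
                seen.add(j)
                cycle.append(j)
                j = perm[j]
            if len(cycle) > 1:          # 1-cycles are the identity; omit them
                cycles.append(tuple(cycle))
        return cycles

    # The permutation s above: 1->3, 2->2, 3->1, 4->6, 5->4, 6->5.
    s = [0, 3, 2, 1, 6, 4, 5]
    print(cycle_decomposition(s))       # [(1, 3), (4, 6, 5)]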
These same ideas apply to arbitrary permutations, and we have the following (whose proof we omit because it is somewhat tedious to write in full generality):
Proposition 4.3.1. Every permutation in S_n factors into a product of disjoint cycles. Two factorizations can only differ in the order in which the cycles appear, or, within any one cycle, by an internal cyclic rearrangement.
Corollary 4.3.1. Every permutation in S_n can be written as a product of transpositions.
Proof. This is just a combination of Proposition 4.3.1 and Exercise 4.3.6 above, which establishes that every cycle can be written as a product of transpositions. □
Remark 4.3.1. Unlike the factorization of a permutation into disjoint cycles, there is no uniqueness to the factorization into transpositions. (For instance, in addition to the factorization (1, 3)(1, 2) = (1, 2, 3) we had before, we also find (1, 2)(3, 2) = (1, 2, 3).) But something a little weaker than uniqueness holds even for factorizations into transpositions: if a permutation s has two factorizations s = d_1 d_2 ··· d_l and s = e_1 e_2 ··· e_m where the d_i and e_j are transpositions, then either l and m will both be even or both be odd! (The proof is slightly complicated, and we will omit it since this is an introduction to the subject.) This allows us to define unambiguously the parity of a permutation: we call a permutation even if the number of transpositions that appear in any factorization into transpositions is even, and likewise, we call it odd if this number is odd.
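Combining the cycle decomposition with Exercise 4.3.6 gives one way to compute parity. The following sketch (an illustration, not a proof of the well-definedness claim above) counts k − 1 transpositions for each k-cycle:

    def parity(perm):
        """perm maps i -> perm[i] on {1, ..., n}; returns 'even' or 'odd'."""
        n, seen, transpositions = len(perm) - 1, set(), 0
        for start in range(1, n + 1):
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                length += 1
                j = perm[j]
            transpositions += max(length - 1, 0)
        return 'even' if transpositions % 2 == 0 else 'odd'

    print(parity([0, 2, 3, 1]))   # the 3-cycle (1,2,3) = (1,3)(1,2): 'even'
    print(parity([0, 2, 1, 3]))   # the transposition (1,2): 'odd'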
4.1.2 Dihedral groups
Example 4.4. Consider a piece of cardboard in the shape of an equilateral triangle. Now consider all operations we can perform on the piece of cardboard that do not shrink, stretch, or in any way distort the triangle, but are such that after we perform the operation, nobody can tell that we did anything to the triangle! To help determine what such operations could be, pretend that the piece of cardboard has been placed at a fixed location on a table, and the location has been marked by lines drawn under the edges of the cardboard. Also, label the points on the table that lie directly under the vertices of the triangle as a, b, and c respectively. After we have done our (yet to be determined!) operation on the cardboard, the triangle should stay at the same location; otherwise it would be obvious that somebody has done something to the piece of cardboard. This means that after our operation, each vertex of the triangle must somehow end up once again on top of one of the three points a, b, and c marked on the table.

[Figure: an equilateral triangle whose vertices sit over the points marked a, b, and c on the table.]

We will refer to our operations as symmetries of the equilateral triangle. We will also refer to each operation as a rigid motion, because, by not distorting the cardboard, it preserves its rigidity. Observe that since we are not allowed to distort the triangle, once we know where the vertices have gone to under our operation, we would immediately know where every other point on the triangle would have gone to. For, if a point P is at a distance x from a vertex A, a distance y from a vertex B and a distance z from the third vertex C, then the image of P must be at a distance x from the image of A, a distance y from the image of B and a distance z from the image of C, and this fixes the location of the image of P. (Actually, more is true: it is sufficient to know where any two vertices have gone to under our operation to know where every point has gone: see Remark 4.5.1 ahead if you are interested. But of course, if you know where two vertices have gone, then you automatically know where the third vertex has gone.) Hence,
it is enough to study the possible rearrangements, or permutations, of the vertices of the triangle to determine our operations. A key sticking point is that while every symmetry of the triangle corresponds to a permutation of the vertices, it is conceivable that not every permutation of the vertices comes from a symmetry. As it turns out, this is not the case, as we will see below.
Let us, for example, write (a b c / b c a) for the permutation of the vertices that takes whichever vertex that was on the point on the table marked a and moves it to the point marked b, whichever vertex that was on the point on the table marked b and moves it to the point marked c, and whichever vertex that was on the point on the table marked c and moves it to the point marked a. Notice that since there are three vertices, there are only six permutations to consider. With this notation, let us consider each of the six permutations in turn, and show that they can be realized as a symmetry of the triangle:

1. id = (a b c / a b c). This of course corresponds to doing nothing to the triangle. This is a valid operation of the sort that we are seeking: it is clearly a rigid motion of the triangle (there is no distortion of the cardboard), and after we have performed this operation, we would not be able to tell whether anybody has disturbed the triangle or not!

2. ρ = (a b c / b c a). This can be realized by rotating the triangle counterclockwise by 120°.

3. ρ^2 = (a b c / c a b). This can be realized by rotating the triangle counterclockwise by 240°. Notice that if we were to form the composition ρ ∘ ρ, we would arrive at this permutation, and it is for this reason that we have denoted this permutation by ρ^2.
4. φ_a = (a b c / a c b). This can be realized by flipping the triangle about the line joining the point a and the midpoint of the opposite side bc. This too is a rigid motion, and after the flip is over, we would not be able to tell if the cardboard has been moved.

5. φ_b = (a b c / c b a). This can be realized by flipping the triangle about the line joining the point b and the midpoint of the opposite side ac. Like φ_a, this too is a rigid motion, and after the flip is over, we would not be able to tell if the cardboard has been moved.

6. φ_c = (a b c / b a c). This is just like φ_a and φ_b, and can be realized by flipping the triangle about the line joining the point c and the midpoint of the opposite side ab.

Thus, we have obtained all six permutations as symmetries of the triangle! Notice that these six symmetries compose as follows:

         | id    ρ     ρ^2   φ_a   φ_b   φ_c
    -----+------------------------------------
    id   | id    ρ     ρ^2   φ_a   φ_b   φ_c
    ρ    | ρ     ρ^2   id    φ_c   φ_a   φ_b
    ρ^2  | ρ^2   id    ρ     φ_b   φ_c   φ_a
    φ_a  | φ_a   φ_b   φ_c   id    ρ     ρ^2
    φ_b  | φ_b   φ_c   φ_a   ρ^2   id    ρ
    φ_c  | φ_c   φ_a   φ_b   ρ     ρ^2   id

Notice that we get a group: the composition of any two symmetries is a symmetry, composition is associative since this is always true for composition of functions, the element id acts as the identity, and it is clear from the relations ρ ∘ ρ^2 = ρ^2 ∘ ρ = id, φ_a ∘ φ_a = φ_b ∘ φ_b = φ_c ∘ φ_c = id that every element has an inverse. This group is called the dihedral group of index 3 and is denoted D_3. (Notice the similarity between this group and the group S_3 of Example 4.2. We will take this up again when we consider isomorphisms later in this chapter.)
See Example 4.89 in the notes at the end of the chapter, where D_3 is interpreted as the group of symmetries of the equilateral triangle with the rigid structure.
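As a sanity check on the table (an illustration only), one can encode the symmetries as permutations of the vertex labels and recompute compositions; the names rho and phi_a below stand for ρ and φ_a:

    rho   = {'a': 'b', 'b': 'c', 'c': 'a'}   # the rotation (a b c / b c a)
    phi_a = {'a': 'a', 'b': 'c', 'c': 'b'}   # the flip (a b c / a c b)

    def compose(f, g):
        """f after g, matching the convention (f o g)(v) = f(g(v))."""
        return {v: f[g[v]] for v in 'abc'}

    print(compose(rho, phi_a))   # {'a': 'b', 'b': 'a', 'c': 'c'}, i.e. phi_c
    print(compose(phi_a, rho))   # {'a': 'c', 'b': 'b', 'c': 'a'}, i.e. phi_b

Note that the two answers differ, recovering the fact that the group is nonabelian.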
Example 4.5. This example is similar in spirit to the previous example. We consider a piece of cardboard in the shape of a square. We wish to determine all operations we can perform on the piece of cardboard that do not shrink, stretch, or in any way distort the square, but are such that after we perform the operation, nobody can tell that we did anything to the square! To help determine what such operations could be, pretend that the piece of cardboard has been placed at a fixed location on a table, and the location has been marked by lines drawn under the edges of the cardboard. Also, label the points on the table that lie directly under the vertices of the square as a, b, c, and d respectively. After we have done our (yet to be determined!) operation on the cardboard, the square should stay at the same location; otherwise it would be obvious that somebody has done something to the piece of cardboard.

[Figure: a square whose vertices sit over the points marked a, b, c, and d on the table.]

We will refer to our operations as symmetries of the square and will refer to each operation as a rigid motion. Just as in the previous example, each vertex of the square must somehow end up once again on top of one of the four points a, b, c, and d marked on the table after the application of a symmetry. As before, the preservation of the rigidity of the square ensures that once we know where the vertices have gone to under the application of a symmetry, we would immediately know where every other point on the square would have gone to. (In fact, it is enough to know where two adjacent vertices have gone: see Remark 4.5.1 ahead.) Hence, it is enough to study the possible permutations of the vertices of the square to determine its symmetries. Unlike the previous example, however, it is not true that every permutation of the vertices comes from a symmetry.
As before, we write, for example, (a b c d / b c d a) for the permutation of the vertices that takes whichever vertex that was on the point on the table marked a and moves it to the point marked b, the vertex on b to c, the vertex on c to d, and the vertex on d to a. Notice that since there are four vertices, there are 4! = 24 permutations to consider (see Exercise 4.3.3 above). With this notation, let us see which of these 24 permutations can be realized as a symmetry of the square:

1. id = (a b c d / a b c d). This of course corresponds to doing nothing to the square. As with the operation of the previous example that does nothing on the equilateral triangle, this operation on the square is a rigid motion of the square, and after we have performed this operation, we would not be able to tell whether anybody has disturbed the square or not.

2. ρ = (a b c d / b c d a). This can be effected by rotating the square counterclockwise by 90° (or clockwise by 270°).

. . . ρ^3, for example.
Exercise 4.5.1. Create a table that shows how these symmetries compose and argue, as in Example 4.4 above, why this table shows that the set of symmetries forms a group.
This group is called the dihedral group of index 4, and is denoted D_4.
Exercise 4.5.2. Use the group table of D_4 to show that every element of D_4 can be expressed as ρ^i φ_H^j for uniquely determined integers i ∈ {0, 1, 2, 3} and j ∈ {0, 1}.
Definition 4.5.1. The center of a group is defined to be the set of all elements in the group that commute with all other elements in the group. (For instance, the identity element is always in the center of a group as it commutes with all other elements.)
Exercise 4.5.3. Determine the elements in D_4 that lie in its center.
4.1.3 Cyclic groups
Example 4.6. Notice that the subset {1, −1} of Z endowed with the usual multiplication operation of the integers is a group!
Question 4.6.1. What similarities do you see between this group and the group (Z/2Z, +)?
Question 4.6.2. Let G be any group that has exactly two elements. Can you see that G must be similar to the group (Z/2Z, +) in exactly the same way that this group {1, −1} is similar to (Z/2Z, +)? Now that you have seen the notion of isomorphism in the context of rings and vector spaces, can you formulate precisely how any group with exactly two elements must be similar to (Z/2Z, +)?
We will now generalize Example 4.6.
Example 4.7. Let n ≥ 3 be any integer. Recall that the complex number z_n = cos(2π/n) + i sin(2π/n) has modulus 1, and is at an angle θ_n = 2π/n with respect to the positive real axis. DeMoivre's theorem ((cos(θ) + i sin(θ))^k = cos(kθ) + i sin(kθ) for k = 1, 2, . . . ) shows that the complex number z_n^2 = cos(4π/n) + i sin(4π/n). This also has modulus 1, but is now at an angle 2θ_n = 4π/n. Proceeding, we find that the complex numbers 1, z_n, z_n^2, z_n^3, . . . , z_n^{n−1} are evenly spaced around the unit circle, and z_n^n gives you back the complex number 1.
The elements z_6^i are shown below:

[Figure: the six complex numbers 1 = z^6, z, z^2, z^3, z^4, z^5 (where z = z_6) evenly spaced around the unit circle. Caption: Cyclic group of order 6.]

It is easy to see that the set C_n = {1, z_n, z_n^2, . . . , z_n^{n−1}} is a group; it is known as the cyclic group of order n. (See Lemma 4.29.1 to note that C_n = ⟨z_n⟩, with notation as in that lemma. Thus, in the language of Definition 4.29.1 ahead, C_n is the group generated by z_n. See also Remark 4.32 ahead as well as Exercise 4.72.1.)
Question 4.7.1. If z_n^i · z_n^j = z_n^k for some k with 0 ≤ k < n, what is k in terms of i and j?
Question 4.7.2. If (z_n^i)^{−1} = z_n^j for some j with 0 ≤ j < n, what is j in terms of i?
Question 4.7.3. Consider the group (Z/nZ, +), for a fixed integer n ≥ 1. Notice that every element in this group is obtained by adding [1]_n to itself a suitable number of times. For instance, [2]_n = [1]_n + [1]_n (which we write as 2 · [1]_n), [3]_n = [1]_n + [1]_n + [1]_n (which we write as 3 · [1]_n), etc. What similarities do you see between (Z/nZ, +) and the group C_n above? Now that you have seen the notion of isomorphism in the context of rings and of vector spaces, can you formulate precisely how (Z/nZ, +) and C_n are similar?
4.1.4 Direct product of groups
Example 4.8. Let G and H be groups. We endow the Cartesian product G × H with the operation (g_1, h_1)(g_2, h_2) = (g_1 g_2, h_1 h_2) (compare with Example 2.22 in Chapter 2). Here, the product g_1 g_2 refers to the operation in G, while the product h_1 h_2 refers to the operation in H.
Exercise 4.8.1. Verify that with this definition of operation, the set G × H forms a group.
This is known as the direct product of G and H.
Question 4.8.1. What is the identity element in G × H? What is the inverse of an element (g, h)?
Question 4.8.2. If G and H are abelian, must G × H necessarily be abelian? If G × H is not abelian, can G or H be abelian? Can both G and H be abelian?
Exercise 4.8.2. Consider the direct product (Z/2Z, +) × (Z/3Z, +). Show by direct computation that every element of this group is a multiple of the element ([1]_2, [1]_3). What similarities do you see between this group and (Z/6Z, +)? With your experience with isomorphisms in the context of rings and vector spaces, can you formulate precisely how (Z/2Z, +) × (Z/3Z, +) and (Z/6Z, +) are similar?
4.1.5 Matrix groups
Example 4.9. We know (see Exercise 2.115 in Chapter 2) that the set of invertible elements of a ring R, denoted R^*, forms a group under multiplication . . .

. . . the left and right cosets of H = ⟨f_1⟩ = {1, f_1} in G = S_3 (Example 4.34):

    g    | g⟨f_1⟩      | ⟨f_1⟩g
    -----+-------------+------------
    r_1  | {r_1, f_3}  | {r_1, f_2}
    r_2  | {r_2, f_2}  | {r_2, f_3}
    f_1  | {1, f_1}    | {1, f_1}
    f_2  | {f_2, r_2}  | {f_2, r_1}
    f_3  | {f_3, r_1}  | {f_3, r_2}

Notice that every coset (left or right) has exactly two elements, which is the same number as the number of elements in the subgroup ⟨f_1⟩ that we are considering. This will be useful in understanding the proof of Lagrange's theorem (Theorem 4.39) below.
Exercise 4.35. Take G = S_3 and take H = ⟨r_1⟩. Write down all left cosets of H and all right cosets of H with respect to all the elements of G. What observation do you make?
The following equivalence relation in Lemma 4.36 below is analogous to the corresponding equivalence relations for rings (see page 57) and vector spaces (see page 130), except that once again, we need to distinguish two cases because the group operation need not be commutative. Note that in the case of rings, for example, we define a ∼ b if and only if a − b ∈ I (where I is some given ideal). Now note that a − b is really a + (−b). Thus, in the group situation, the expression analogous to a + (−b) would be ab^{−1}, and this is indeed the expression we consider in the lemma below. (And while a + (−b) = (−b) + a in the situation of rings, the operation in a group need not be commutative, so we need to consider the expression analogous to (−b) + a as well, which is b^{−1}a.)
Lemma 4.36. Let G be a group and H a subgroup. Define two relations on G, denoted ∼_L and ∼_R, by the following rules: a ∼_L b if and only if b^{−1}a ∈ H, and a ∼_R b if and only if ab^{−1} ∈ H. Then ∼_L and ∼_R are both equivalence relations on G. The equivalence class [a]_L of an element a with respect to the relation ∼_L is the left coset aH, while its equivalence class [a]_R with respect to the relation ∼_R is the right coset Ha.
Proof. The proof that ∼_L is an equivalence relation is similar to the proof of Lemma 2.78 in Chapter 2, except that we have to account for the fact that the group operation need not be commutative.
To show that a ∼_L a, simply note that a^{−1}a = 1 ∈ H. To show that a ∼_L b implies that b ∼_L a, note that a ∼_L b gives (by definition) b^{−1}a = h for some h ∈ H, and taking inverses of both sides (see Exercise 4.19 above), we find (b^{−1}a)^{−1} = a^{−1}b = h^{−1}. Since h^{−1} is also in H as H is a subgroup, we find a^{−1}b is in H, which shows that b ∼_L a. Finally, given a ∼_L b and b ∼_L c, note that (by definition) b^{−1}a = h_1 and c^{−1}b = h_2 for some h_1 and h_2 in H. Then h_2 h_1 = c^{−1}b b^{−1}a = c^{−1}a, and since h_2 h_1 is also in H (as H is a subgroup), we find a ∼_L c as well.
The proof that ∼_R is an equivalence relation is similar.
To prove that [a]_L = aH, note that any element b in aH is of the form ah for some h ∈ H. Multiplying by a^{−1}, we find a^{−1}b = h and hence a^{−1}b ∈ H. This shows that b ∼_L a. Thus, all elements in aH are in the equivalence class of a, i.e., aH ⊆ [a]_L. For the other direction, take any b ∈ [a]_L. Then b ∼_L a, so (by definition) a^{−1}b = h for some h ∈ H. Thus, multiplying both sides by a, we find b = ah, so b ∈ aH. Hence, [a]_L ⊆ aH as well.
The proof that [a]_R = Ha is similar. □
Note that we immediately have:
Corollary 4.37. Any two left cosets aH and bH are either equal or disjoint. Similarly, any two right cosets Ha and Hb are either equal or disjoint.
Proof. This follows from the fact that aH = [a]_L and bH = [b]_L, and the fact that any two equivalence classes arising from an equivalence relation are either equal or disjoint. (The proof is identical for right cosets.) □
Here is a quick exercise:
Exercise 4.38. Show that H is itself a left coset, as well as a right coset. Now show that the left coset aH equals H if and only if a ∈ H. Similarly, show that the right coset Hb equals H if and only if b ∈ H. More generally, show that the left coset aH equals the left coset bH if and only if a ∈ bH if and only if b ∈ aH (and similarly for right cosets).
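For readers who like to verify such statements by brute force, here is a sketch (not from the text) that recomputes the coset picture for H = ⟨f_1⟩, with S_3 realized as permutations of (1, 2, 3) in one-line notation; which transposition plays the role of f_1 is an assumption of the illustration.

    from itertools import permutations

    def compose(f, g):                      # (f*g)(i) = f(g(i))
        return tuple(f[g[i] - 1] for i in range(3))

    G = list(permutations((1, 2, 3)))
    f1 = (1, 3, 2)                          # a transposition, standing in for f_1
    H = [(1, 2, 3), f1]                     # the subgroup <f_1>

    left  = {g: frozenset(compose(g, h) for h in H) for g in G}
    right = {g: frozenset(compose(h, g) for h in H) for g in G}

    print(len(set(left.values())), len(set(right.values())))   # 3 3
    print(all(len(c) == len(H) for c in left.values()))        # True: each
    # left coset has exactly |H| elements, as in the proof of Theorem 4.39.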
4.2.3 Lagrange's theorem
Theorem 4.39. (Lagrange's Theorem) Let G be a group of finite order, and let H be a subgroup. Then the order of H divides the order of G.
Proof. The crux of the proof is to show that any two left cosets of H have the same number of elements (recall that we have already seen this phenomenon in Example 4.34 above: see the table of left and right cosets in that example). Once we have shown this, it will follow that every coset has o(H) elements in it, since H itself is one of these left cosets (it is the left coset hH for any h ∈ H, for instance; see Exercise 4.38). From this it is trivial to conclude that o(H) must divide o(G): since the left cosets are disjoint and their union is G, and since each left coset has o(H) elements, we would find that o(H) times the number of distinct left cosets of H must equal o(G), i.e., o(H) must divide o(G).
To prove that any two left cosets of H have the same number of elements, take two left cosets aH and bH. Every element of aH can be written as ah for some unique h ∈ H. For, by definition, every element of aH is already of the form ah for some h ∈ H. We only have to show that h is unique. But this is clear: if ah = ah′ for h, h′ ∈ H, then multiplying both sides on the left by a^{−1} gives h = h′.

. . . whenever aH = a′H and bH = b′H. Viewing aH as a′H and bH as b′H, we would need a′Hb′H = (a′b′)H whenever aH = a′H and bH = b′H (i.e., whenever a ∼_L a′ and b ∼_L b′).
In general, this need not happen. For instance, take G = S_3, and take H = ⟨f_1⟩. Consider the cosets r_1⟨f_1⟩ = {r_1, f_3} and r_2⟨f_1⟩ = {r_2, f_2} (see the table in Example 4.34). Now, it is clear from the table that r_1⟨f_1⟩ = f_3⟨f_1⟩ and that r_2⟨f_1⟩ = f_2⟨f_1⟩. So, the question is: is (r_1 r_2)⟨f_1⟩ = (f_3 f_2)⟨f_1⟩? The answer is no! We find that (r_1 r_2)⟨f_1⟩ = 1⟨f_1⟩ = {1, f_1}, while (f_3 f_2)⟨f_1⟩ = r_2⟨f_1⟩ = {r_2, f_2}.
So how should one fix this problem? Let us first analyze the situation some more. Since a′ = a′ · 1 ∈ a′H and since a′H = aH, we find a′ ∈ aH, so a′ = ah for some h ∈ H. Similarly, b′ = bk for some k ∈ H. Then a′b′ = ahbk. If it were to miraculously happen that j = b^{−1}hb is an element of H (so that hb lands in bH), then a′b′ = ahbk = ab(b^{−1}hb)k = abjk, and of course, jk ∈ H as both j and k are in H. It would follow that if this miracle were to happen, then a′b′ would look like ab times an element of H, and therefore, abH would equal a′b′H.
As the example of G = S_3 and H = ⟨f_1⟩ above shows, this miracle will not always happen, but there are some special situations where this will happen, and we give this a name:
Definition 4.43. Let G be a group. A subgroup H of G is called a normal subgroup if for any g ∈ G, g^{−1}hg ∈ H for all h ∈ H.
Remark 4.44. Alternatively, write g^{−1}Hg for the set {g^{−1}hg | h ∈ H}. Then we may rewrite the definition above as follows: H is said to be normal if g^{−1}Hg ⊆ H for all g ∈ G. Note that this is equivalent to requiring that gHg^{−1} ⊆ H for all g ∈ G. For, setting y to be g^{−1}, note that as g ranges through all the elements of G, y = g^{−1} ranges through all the elements of G as well.
Example 4.45. Take G = S_3 again, but this time around, take H = ⟨r_1⟩ = {1, r_1, r_2}. Let us consider the sets g^{−1}Hg as g ranges through S_3. We can obtain the various products by using the group table for S_3 on page 160; for instance, f_1^{−1}{1, r_1, r_2}f_1 = f_1{1, r_1, r_2}f_1 = {f_1 f_1, f_1 r_1 f_1, f_1 r_2 f_1} = {1, r_2, r_1}, etc. Doing so, we obtain the following:

    g    | g^{−1}{1, r_1, r_2}g
    -----+---------------------
    id   | {1, r_1, r_2}
    r_1  | {1, r_1, r_2}
    r_2  | {1, r_1, r_2}
    f_1  | {1, r_2, r_1}
    f_2  | {1, r_2, r_1}
    f_3  | {1, r_2, r_1}
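The table shows that g^{−1}Hg = H for every g, i.e., that H = ⟨r_1⟩ satisfies the condition of Definition 4.43. Here is a brute-force sketch of the same computation (an illustration; encoding r_1 and r_2 as the two 3-cycles of S_3 is an assumption of the sketch):

    from itertools import permutations

    def compose(f, g):
        return tuple(f[g[i] - 1] for i in range(3))

    def inverse(g):
        inv = [0, 0, 0]
        for i, gi in enumerate(g, start=1):
            inv[gi - 1] = i
        return tuple(inv)

    G = list(permutations((1, 2, 3)))
    H = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]    # 1, r_1, r_2 (the two 3-cycles)

    print(all({compose(inverse(g), compose(h, g)) for h in H} == set(H)
              for g in G))                   # True: g^{-1}Hg = H for every g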
. . . the length of a vector ai + bj being √(a^2 + b^2).) The symmetries of this set would be one-to-one onto linear transformations of R^2 that in addition preserve the length of a vector. (See Exercise 4.78 at the end of the chapter.)
Example 4.93. This example and the next are central in Galois theory, and you may wish to postpone them for a future reading. The set could be the field Q[√2], and the structure could be that (i) Q[√2] is a field, and (ii) every element of Q[√2] satisfies a family of polynomials over the rationals.
The symmetries of Q[√2] that preserve the field structure would be one-to-one onto maps f : Q[√2] → Q[√2] with f(a + b) = f(a) + f(b), f(ab) = f(a)f(b) and f(1) = 1: in other words, f must also be a ring homomorphism. (Note that once f satisfies the property that it is a ring homomorphism, the relations ab = 1 will mean that f(a)f(b) = 1, so pairs of multiplicative inverses will go to pairs of multiplicative inverses under f. Thus, the essential character of Q[√2] as a field is preserved by such maps from Q[√2] to Q[√2].)
As for the second feature, we say that a one-to-one onto map f : Q[√2] → Q[√2] preserves the minimal polynomial over the rationals if any a ∈ Q[√2] and its image f(a) have the same minimal polynomial over the rationals. It follows that the symmetries of Q[√2] that preserve the field structure and the minimal polynomial over the rationals must be ring isomorphisms that act as the identity map on the rationals. Moreover, it is easy to see that any ring isomorphism from Q[√2] to Q[√2] restricts to a ring homomorphism from Q to Q[√2]; but by Exercise 2.137 in Chapter 2, any ring homomorphism from Q to Q[√2] must act as the identity on Q. Putting this together, the maps from Q[√2] to Q[√2] that preserve both the field structure and the minimal polynomial of elements over the rationals are precisely the set of ring isomorphisms from Q[√2] to itself.
Exercise 4.93.1. Using the ideas developed in these remarks and using Exercise 2.109.1 in Chapter 2, prove that the only non-trivial symmetry of Q[√2] with the structure above is the familiar conjugation map that sends each a + b√2 to a − b√2.
Example 4.94. This time, the set could be the field Q[√2, √3], and the structure could be that (i) Q[√2, √3] is a field, and (ii) every element of Q[√2, √3] satisfies a family of polynomials over the field Q[√2].
The same considerations as in Example 4.93 above apply: The symmetries of Q[√2, √3] that preserve the field structure are precisely the set of ring isomorphisms from Q[√2, √3] to Q[√2, √3], and the symmetries that in addition preserve the minimal polynomial of elements over Q[√2] are precisely those ring isomorphisms that act as the identity on Q[√2]. (Note that unlike the previous example, it is not true that every ring isomorphism from Q[√2, √3] to Q[√2, √3] acts as the identity on Q[√2].)
Exercise 4.94.1. Using the ideas developed in these remarks and using Exercise 2.138 in Chapter 2, prove that the only non-trivial symmetry of Q[√2, √3] with the structure above is the map that sends each a + b√2 + c√3 + d√6 to a + b√2 − c√3 − d√6; this is the conjugation map of Q[√2, √3] over Q[√2].
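As a small symbolic spot check (an illustration, not part of the text), one can verify that conjugation on Q[√2] respects multiplication, encoding a + b√2 as the coefficient pair (a, b):

    from sympy import symbols, simplify

    a, b, c, d = symbols('a b c d')

    def conj(p):
        """(u, v) stands for u + v*sqrt(2); conjugation negates v."""
        return (p[0], -p[1])

    def mul(p, q):
        # (u1 + v1*sqrt(2))(u2 + v2*sqrt(2))
        #   = (u1*u2 + 2*v1*v2) + (u1*v2 + v1*u2)*sqrt(2)
        return (p[0]*q[0] + 2*p[1]*q[1], p[0]*q[1] + p[1]*q[0])

    x, y = (a, b), (c, d)
    lhs = conj(mul(x, y))          # f(xy)
    rhs = mul(conj(x), conj(y))    # f(x) f(y)
    print(simplify(lhs[0] - rhs[0]), simplify(lhs[1] - rhs[1]))   # 0 0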
Remarks on Exercise 4.3.7 Here is how you may start this exercise: Let j be an integer in the set {1, 2, . . . , n}. Let us consider the case where j is one of the b's, say j = b_k, for some k. Then t(j) = b_{k+1}, where the subscript is taken modulo e so as to lie in the set {1, 2, . . . , e}. Hence st(j) would be s(b_{k+1}). Now, because s and t are disjoint cycles, b_{k+1} will not appear among the a's, and hence, s(b_{k+1}) would equal b_{k+1}. Now work out ts(j) for this particular case. Next consider the case where j is not one of the b's and work out the details.
Remarks on structure-preserving maps forming a group It is not always true that the inverse of a structure-preserving map also preserves the structure. Typically this is so, but occasionally this is not the case. It is for this reason that we only consider symmetries of a set whose inverse also preserves the given structure when viewing the symmetries as a group. Here is an example:
We consider the real numbers with its differentiable structure. What this means is that there exists a notion of differentiability of functions R → R. A symmetry of R with its differentiable structure would be a one-to-one onto function f : R → R that preserves this differentiable structure. This means that f should satisfy the condition that for any differentiable map g : R → R, the composite g ∘ f must also be differentiable. A necessary and sufficient condition for this to happen is that f itself must be differentiable. It is now easy to find bijections f : R → R that are differentiable, but whose inverse is not differentiable. One example is the function f(x) = x^3. It is differentiable at all values of x, but its inverse function f^{−1}(x) = x^{1/3} fails to be differentiable at x = 0.
Remarks on orthogonal groups Orthogonal groups come in more guises than the one we have described in Example 4.13. Recall the origins of the n = 2 case over R that we exhibited in Exercise 4.78: the group O_2(R) is the set of symmetries of R^2 with the structure that it is a vector space over the reals, and that every vector has a length (Example 4.92). Now let us examine length more closely. The length of a vector pi + qj is defined to be √(p^2 + q^2). Temporarily ignoring the square root (we will put it back later), the squared-length of a general vector xi + yj is thus given by x^2 + y^2. This is an example of a quadratic form (a polynomial all of whose monomials are of degree 2) in two variables. Now note that the polynomial x^2 + y^2 can be written as

    (x, y) ( 1 0 ) (x, y)^t
           ( 0 1 )

(Here, recall from elementary linear algebra that the product of a row vector (s, t) and a column vector (p, q)^t is given by sp + tq. Thus, since (1 0 / 0 1)(x, y)^t is just (x, y)^t, the product above becomes (x, y)(x, y)^t = x^2 + y^2, as claimed.)
Mathematicians have found it useful to define length differently as well (we will see a famous example of this ahead). More generally, let q = ax^2 + 2bxy + cy^2 be any quadratic form with coefficients a, b, c from the reals. (It is convenient to write the coefficient of xy as 2b.) Then, q may be written as

    (x, y) ( a b ) (x, y)^t
           ( b c )

(Check this! Notice how the fact that we wrote the coefficient of xy as 2b allows us to write the (1, 2) and (2, 1) entries of the matrix above as b. Had we taken the coefficient of xy as b, then these entries would have had to be b/2.) Using this quadratic form, we define the q-length of a vector pi + qj as √(ap^2 + 2bpq + cq^2). (The length may well turn out to be imaginary, since the quantity under the square root sign may be negative, but that only makes matters more interesting!) Moreover, we define the q-dot product of two vectors si + tj and pi + qj to be asp + b(sq + tp) + ctq. Writing M_q for the matrix (a b / b c) above, we find that the q-length of pi + qj is given by

    √( (p, q) M_q (p, q)^t ),

and the q-dot product of si + tj and pi + qj is given by

    (s, t) M_q (p, q)^t.

The matrix M_q allows us to compute q-lengths and q-dot products; notice that M_q is a symmetric matrix (the (1, 2) entry and the (2, 1) entry are equal).
Given an arbitrary quadratic form q in two variables with coefficients in the reals, we may now consider the symmetries of R^2 with q-structure: this is the structure that R^2 is a vector space over the reals, and that every vector has a q-length. The symmetries then would be one-to-one onto linear transformations of R^2 that in addition preserve q-length. These symmetries form a group that we will denote O_2(R, q). It is called the orthogonal group of q over R.
Exercise 4.95. Prove that O_2(R, q) consists of those 2 × 2 matrices A with entries in R satisfying A^t M_q A = M_q.
Exercise 4.96. Given a one-to-one onto linear transform T, let us say that it satisfies Property (1) if the q-length of T(v) is the same as the q-length of v for all v in R^2. Let us say that it satisfies Property (2) if the q-dot product of T(v) and T(w) is the same as the q-dot product of v and w, for all v and w in R^2. Show that T satisfies Property (1) if and only if it satisfies Property (2).
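Exercise 4.95's criterion is easy to test numerically. In the sketch below (an illustration; the "boost" matrix is our own choice of example), a rotation preserves the form x^2 + y^2 while a hyperbolic matrix preserves x^2 − y^2, foreshadowing the Lorentz group discussed next:

    import math

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def transpose(A):
        return [[A[j][i] for j in range(2)] for i in range(2)]

    def preserves(A, Mq, tol=1e-12):
        P = matmul(matmul(transpose(A), Mq), A)   # A^t M_q A
        return all(abs(P[i][j] - Mq[i][j]) < tol for i in range(2) for j in range(2))

    t = 0.7
    R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]     # rotation
    B = [[math.cosh(t), math.sinh(t)], [math.sinh(t), math.cosh(t)]]  # "boost"

    print(preserves(R, [[1, 0], [0, 1]]))    # True:  rotations preserve x^2 + y^2
    print(preserves(B, [[1, 0], [0, -1]]))   # True:  boosts preserve x^2 - y^2
    print(preserves(B, [[1, 0], [0, 1]]))    # False: boosts do not preserve length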
More generally, an arbitrary quadratic form q in n variables x_1, x_2, . . . , x_n over a field F is a polynomial in these variables with coefficients in F, all of whose monomials are of degree 2. As long as 2 ≠ 0 in this field (so we rule out fields like Z/2Z), we may form a symmetric n × n matrix M_q as above, where the entries in the slots (i, j) and (j, i) (for i ≠ j) both equal half the coefficient of x_i x_j in the quadratic form q. (We have to impose the 2 ≠ 0 condition, because otherwise, we would not be able to divide by 2!) The set of n × n matrices A satisfying A^t M_q A = M_q forms a group O_n(F, q), referred to as the orthogonal group of q over F.
Perhaps the most famous example of the length of vectors in R^n being measured by quadratic forms other than x_1^2 + x_2^2 + ··· + x_n^2 is given by Einstein's theory of relativity. There, space-time is considered as a four-dimensional space, and the length of the vector (t, x, y, z)^t, where t is the time coordinate and x, y, and z are the usual spatial coordinates, is given by √(t^2 − x^2 − y^2 − z^2). (Actually, this is a drastic simplification: space-time is not really a vector space but a four-dimensional manifold, and the length formula above applies on the tangent spaces, which are actual vector spaces, but that is too mathematically advanced for now.) This quadratic form t^2 − x^2 − y^2 − z^2 has associated symmetric matrix

    ( 1  0  0  0 )
    ( 0 −1  0  0 )
    ( 0  0 −1  0 )
    ( 0  0  0 −1 )

The associated orthogonal group is called the Lorentz group.
Appendix A
Sets, Functions, and
Relations
We review here some basic notions that you would have seen in an earlier course on proofs or on discrete mathematics.
A set is simply a collection of objects. We are of course being informal here: there are more formal definitions of sets that are based on various axioms designed to avoid paradoxes, but we will not go into such depths in this appendix. If A is a set, the objects whose collection make up the set A are also referred to as the elements of A. You will be familiar with both notations for sets: the explicit notation, such as A = {2, 3, 5, 7}, as well as the implicit or set-builder notation, such as A = {n | n is a prime integer between 2 and 10}. You will also be familiar with the notation ∈ for "element of."
If A and B are two sets, we say A is a subset of B (written A ⊆ B) if x ∈ A implies x ∈ B. If A ⊆ B and B ⊆ A, we say A = B. If A ⊆ B but A ≠ B, we say that A is a proper subset of B, and we write A ⊊ B.
The union of two sets A and B, denoted A ∪ B, is simply the set {x | x ∈ A or x ∈ B}. The intersection of two sets A and B, denoted A ∩ B, is the set {x | x ∈ A and x ∈ B}. The difference of two sets A and B, denoted A − B, is the set {x | x ∈ A and x ∉ B}. (Note that in general, A − B ≠ B − A.)
A function f from A to B (written f : A B) is a rule that assigns to each
element of A a unique element of B. A function f : A B is called injective or
215
216 APPENDIX A. SETS, FUNCTIONS, AND RELATIONS
one-to-one if f(a
1
) = f(a
2
) for a
1
, a
2
A implies that a
1
= a
2
(or alternatively, if
a
1
,= a
2
, then f(a
1
) ,= f(a
2
)). A function f : A B is called surjective or onto if
for each b B, there exists a A such that f(a) = b. A function f : A B that
is both injective and surjective is said to be bijective; also, f is said to provide a
bijection or a one-to-one correspondence between A and B.
Example A.1. Consider the following functions from the integers to itself:

1. $f(n) = 2n$.

2. $g(n) = \begin{cases} n, & \text{if } n \text{ is odd} \\ n/2, & \text{if } n \text{ is even} \end{cases}$

3. $h(n) = n^2 + 1$.

4. $b(n) = n + 1$.

Then f is injective but not surjective, g is surjective but not injective, h is neither injective nor surjective, and b is bijective.
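A finite spot-check of these claims (our own illustration; testing on a finite window of $\mathbb{Z}$ can refute injectivity or surjectivity, but of course cannot prove either):

    def f(n): return 2 * n
    def g(n): return n if n % 2 else n // 2
    def h(n): return n * n + 1
    def b(n): return n + 1

    window = list(range(-100, 101))
    for name, fn in [("f", f), ("g", g), ("h", h), ("b", b)]:
        values = [fn(n) for n in window]
        print(name,
              "| injective on window:", len(set(values)) == len(values),
              "| 3 has a preimage in window:", 3 in values)

Running this shows, for instance, that f and b repeat no values on the window, that g does repeat values (g(1) = g(2) = 1), and that 3 is never a value of f or of h.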
The Cartesian product of two sets A and B, denoted $A \times B$, is simply the set of all ordered pairs $\{(a, b) \mid a \in A, b \in B\}$. A relation on a set A is simply a subset of $A \times A$. Let R be a relation on a set A. If $(a, b) \in R$, we say a is related to b, and we often write $a\,R\,b$ to indicate that a is related to b under the relation R. The relation R is said to be reflexive if for each $a \in A$, $a\,R\,a$. R is said to be symmetric if whenever $a\,R\,b$, then $b\,R\,a$ as well. Finally, R is said to be transitive if whenever $a\,R\,b$ and $b\,R\,c$, then $a\,R\,c$ as well.

A relation R on a set A that is reflexive, symmetric, and transitive is called an equivalence relation on A. For any $a \in A$, let us write $[a]$ for the set of all elements of A that are related to a, that is, $[a] = \{b \mid a\,R\,b\}$. The set $[a]$ is called the equivalence class of a. We have the following: if R is an equivalence relation on A, then for any two elements a and b in A, either $[a] = [b]$ or else $[a]$ and $[b]$ are disjoint. In particular, this means that the equivalence classes divide A into disjoint sets of the form $[a]$, whose union is all of A.

The symbol $\sim$ is often used instead of R to denote a relation on a set.
Example A.2. Perhaps the easiest and most central example of an equivalence relation on a set is the relation on $\mathbb{Z}$ defined by saying that m is related to n (or $m \sim n$) if and only if $m - n$ is even. Convince yourself that this relation is indeed an equivalence relation, and that there are precisely two equivalence classes: the class $[0]$ and the class $[1]$.
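Here is a small computation (our own illustration) that groups a finite window of $\mathbb{Z}$ into classes under this relation; exactly the two classes $[0]$ and $[1]$ appear:

    # m is related to n iff m - n is even
    related = lambda m, n: (m - n) % 2 == 0

    classes = []
    for n in range(-5, 6):
        for cls in classes:
            if related(n, cls[0]):
                cls.append(n)
                break
        else:
            classes.append([n])

    print(classes)  # [[-5, -3, -1, 1, 3, 5], [-4, -2, 0, 2, 4]]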
A binary operation on a set A is simply a function $f : A \times A \to A$. As we have seen, the usual operations of addition and multiplication in, for example, the integers, are just binary operations on $\mathbb{Z}$, that is, functions $\mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$.

Question A.3. Is division a binary operation on the rationals? How about on the set $\mathbb{Q} - \{0\}$?
A set A is said to be countable if there exists a one-to-one correspondence between A and some subset of $\mathbb{N}$. If no such correspondence exists, then A is said to be uncountable. If there exists a one-to-one correspondence between A and the subset $\{1, 2, \ldots, n\}$ of $\mathbb{Z}$ (for some n), then A is said to be finite. If no such $n \in \mathbb{Z}$ exists for which there is a one-to-one correspondence between A and $\{1, 2, \ldots, n\}$, then A is said to be infinite. Note that an infinite set can be either countable or uncountable.

Example A.4. Any set with a finite number of elements is countable, by definition of finiteness and countability.
Example A.5. Any subset of a countable set is also countable.
Example A.6. The integers are countable. One one-to-one correspondence between $\mathbb{Z}$ and $\mathbb{N}$ is the one that sends $a$ to $2a$ if $a \geq 0$, and $a$ to $2(-a) - 1$ if $a < 0$.
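This correspondence and its inverse are easy to program, and bijectivity can be spot-checked on a finite range (our own sketch; here $\mathbb{N}$ is taken to include 0, which is the convention the map requires):

    def z_to_n(a):
        return 2 * a if a >= 0 else 2 * (-a) - 1

    def n_to_z(n):
        return n // 2 if n % 2 == 0 else -(n + 1) // 2

    # the two maps invert each other, so each is a bijection
    assert all(n_to_z(z_to_n(a)) == a for a in range(-1000, 1001))
    assert all(z_to_n(n_to_z(n)) == n for n in range(0, 2001))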
Example A.7. The Cartesian product of two countable sets is countable. Here is a sketch of a proof when both A and B are infinite. There exists a one-to-one correspondence between A and $\mathbb{N}$ (why?), and in turn, there exists a one-to-one correspondence between $\mathbb{N}$ and the set $\{2^n \mid n \in \mathbb{N}\}$. Composing, we get a one-to-one correspondence f between A and the set $\{2^n \mid n \in \mathbb{N}\}$. Similarly, we have a one-to-one correspondence g between B and the set $\{3^n \mid n \in \mathbb{N}\}$. Now define the map $h : A \times B \to \mathbb{N}$ by $h(a, b) = f(a)g(b)$, and show that h is injective, so that it provides a one-to-one correspondence between $A \times B$ and a subset of $\mathbb{N}$.
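The injectivity of h rests on unique prime factorization: $2^m 3^n = 2^{m'} 3^{n'}$ forces $m = m'$ and $n = n'$. A brute-force check on a finite grid (our own illustration):

    # Check that (m, n) |-> 2^m * 3^n takes distinct values on a grid.
    seen = {}
    for m in range(50):
        for n in range(50):
            v = 2**m * 3**n
            assert v not in seen, ((m, n), seen[v])
            seen[v] = (m, n)
    print(len(seen), "distinct values from", 50 * 50, "pairs")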
Example A.8. The rationals $\mathbb{Q}$ are countable. This is because we may view $\mathbb{Q} \subseteq \mathbb{Z} \times \mathbb{Z}$ by identifying the rational number $a/b$, written in lowest terms, with the ordered pair $(a, b)$. By Example A.7 above, $\mathbb{Z} \times \mathbb{Z}$ is countable, and hence by Example A.5, $\mathbb{Q}$ is also countable.
Example A.9. The reals $\mathbb{R}$ are uncountable. The proof of this is the famous Cantor diagonalization argument.
Appendix B
Partially Ordered Sets, and Zorn's Lemma
A partial order $\preceq$ on a set S is a relation on S that is reflexive, antisymmetric (i.e., $a \preceq b$ and $b \preceq a$ imply that $a = b$), and transitive. Here are two examples:
Example B.1. Define a relation $\preceq$ on the positive integers by the rule $m \preceq n$ if and only if m divides n. Since $m \mid m$ for all positive integers m, $\preceq$ is reflexive. Since $m \mid n$ and $n \mid m$ imply $m = n$ (recall that we are only allowing positive integers in our set), our relation $\preceq$ is indeed antisymmetric. Finally, if $m \mid n$ and $n \mid q$, then indeed $m \mid q$, so $\preceq$ is transitive.
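These three verifications are mechanical enough to delegate to a computer on a finite set (our own illustration; the bound 30 is an arbitrary choice):

    S = range(1, 31)
    divides = lambda m, n: n % m == 0   # m | n

    # reflexive: m | m
    assert all(divides(m, m) for m in S)
    # antisymmetric: m | n and n | m imply m = n
    assert all(m == n for m in S for n in S
               if divides(m, n) and divides(n, m))
    # transitive: m | n and n | q imply m | q
    assert all(divides(m, q) for m in S for n in S for q in S
               if divides(m, n) and divides(n, q))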
Example B.2. Let S be a nonempty set, and write T for the set of all proper subsets of S. Define a relation $\preceq$ on T by declaring $X \preceq Y$ if and only if $X \subseteq Y$. You should be able to verify easily that $\preceq$ is a partial order on T.

This partial order could also have been defined on the set of all subsets of S; we chose to define it only on the set of proper subsets to make the situation more interesting (see Example B.4 ahead, for instance)!
Given a partial order $\preceq$ on a set, two elements x and y are said to be comparable if either $x \preceq y$ or $y \preceq x$. If neither $x \preceq y$ nor $y \preceq x$, then x and y are said to be incomparable. For instance, in Example B.1, 2 and 3 are incomparable, since neither $2 \mid 3$ nor $3 \mid 2$. Similarly, in the set of all proper subsets of, say, the set $\{1, 2, 3\}$, the subsets $\{1, 2\}$ and $\{1, 3\}$ are incomparable, since neither of these sets is a subset of the other.
Given a partial order $\preceq$ on a set S, and given a subset A of S, an upper bound of A is an element $z \in S$ such that $x \preceq z$ for all $x \in A$.
Example B.3. In Example B.1, if we take A to be the set $\{1, 2, 3, 4, 5, 6\}$, then $\operatorname{lcm}(1, 2, 3, 4, 5, 6) = 60$ is an upper bound for A.

Note that not all subsets of S need have an upper bound. For instance, if we take B in this same example to be the set of all powers of 2, then there is no integer divisible by $2^m$ for all values of m, so B will not have an upper bound.
Given a partial order $\preceq$ on a set S, a maximal element in S is an element x such that for any other element y, either $y \preceq x$ or else x and y are incomparable.
Example B.4. In Example B.2, suppose we took $S = \{1, 2, 3\}$, so
$$T = \{\emptyset, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}\}.$$
Then $\{1, 2\}$ is maximal: each of $\emptyset$, $\{1\}$, and $\{2\}$ is $\preceq \{1, 2\}$, while $\{1, 3\}$ and $\{2, 3\}$ cannot be compared with $\{1, 2\}$.

Of course, these same arguments show that $\{1, 3\}$ and $\{2, 3\}$ are also maximal elements.

Note that if, instead, we had taken T to be the set of all subsets of $\{1, 2, 3\}$, then there would only have been one maximal element, namely $\{1, 2, 3\}$ itself, and every other subset X would have satisfied $X \preceq \{1, 2, 3\}$. Having several maximal elements incomparable to one another is certainly a more intriguing situation!
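The maximal elements of this small example can also be found by brute force (our own illustration):

    from itertools import combinations

    S = {1, 2, 3}
    # all proper subsets of S (sizes 0, 1, 2)
    T = [set(c) for r in range(len(S)) for c in combinations(sorted(S), r)]

    # X is maximal iff no Y in T strictly contains X
    maximal = [X for X in T if not any(X < Y for Y in T)]
    print(maximal)  # [{1, 2}, {1, 3}, {2, 3}]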
A partial order on a set that has the further property that any two elements are comparable is called a linear order. For example, the usual order relation $\leq$ on $\mathbb{R}$ is a linear order.

Given a partial order $\preceq$ on a set S, a chain in S is a nonempty subset A of S that is linearly ordered with respect to $\preceq$, i.e., for all x and y from A, either $x \preceq y$ or $y \preceq x$.
Example B.5. In Example B.3, note that B is a chain, since every element of B is a power of 2, and given elements $2^m$ and $2^n$ in B, if $m \leq n$ then $2^m \mid 2^n$, and otherwise $2^n \mid 2^m$. On the other hand, A is not a chain: we have already seen that 2 and 3 are incomparable.
Zorn's Lemma, in spite of its name, is really not a lemma, but a universally accepted axiom of logic. It states the following:
Zorn's Lemma: Let S be a nonempty set with a partial order $\preceq$. If every chain in S has an upper bound in S, then S has a maximal element.
Zorn's Lemma is equivalent to certain other axioms of logic, most famously to the Axiom of Choice. What this means is that if one were to accept the statement of Zorn's Lemma as a fundamental axiom of logic, then in conjunction with other accepted axioms of logic, one can derive the statement of the Axiom of Choice. Conversely, if one were to accept the Axiom of Choice as a fundamental axiom of logic, then in conjunction with other accepted axioms of logic, one can derive the statement of Zorn's Lemma.
Here is a typical application of Zorn's Lemma. Recall from Exercise 2.135 of Chapter 2 the definition of maximal ideals.
Theorem B.6. Let R be a ring. Then R contains maximal ideals.
Proof. Let S be the set of all proper ideals of R. Note that S is nonempty, since the zero ideal $\{0\}$ is in S. We define a partial order $\preceq$ on S by $I \preceq J$ if and only if $I \subseteq J$ (see Example B.2 above). Let T be a chain in S. Recall what this means: T is a collection of proper ideals of R such that if I and J are in the collection, then either $I \subseteq J$ or else $J \subseteq I$. We claim that T has an upper bound in S, i.e., there exists a proper ideal K in R such that $I \subseteq K$ for all I in our chain T. The proof of the claim is simple. By the definition of being a chain, T is nonempty, so T contains at least one ideal of R. We define K, as a set, to be the union of all the ideals I in T. We need to show that K is a proper ideal of R. This is easy. Note that since there is at least one ideal in T, and since this ideal contains 0, K must be nonempty, as it must contain at least the element 0. Now given a and b in K, note that a must live in some ideal I in T and b must live in some ideal J in T, since K is, after all, the union of all the ideals in T. Since T is linearly ordered (this is where the property that chains are linearly ordered comes in), either $I \subseteq J$ or else $J \subseteq I$. Say $I \subseteq J$. Then both a and b are in J. Hence, $a + b$ is also in J, as J is an ideal. Since J in turn is contained in K, we find $a + b \in K$. This shows that K is closed under addition. Now given any $a \in K$, as before, $a \in I$ for some ideal I in T. Since I is an ideal, both $ar$ and $ra$ are in I for all $r \in R$. Since $I \subseteq K$, we find $ar$ and $ra$ are in K. By Lemma 2.67 of Chapter 2, we find K is an ideal. Of course, K is clearly an upper bound for T, since $I \preceq K$ for all I in T by the very manner in which we have defined K.
Note that indeed K is a proper ideal of R, i.e., K is in S. For, if not, then K = R, so in particular, this means that $1 \in K$. Since K is the union of the ideals in T, we find $1 \in I$ for some ideal I in T. But this is a contradiction, since I is a proper ideal of R (remember that the set S was defined as the set of all proper ideals of R, and I is a member of S).

Since T was arbitrary, we have found that every chain in S has an upper bound in S. By Zorn's Lemma, S has a maximal element. But a maximal element of S is precisely a maximal ideal of R! $\Box$
Now we will present the proof that bases exist in all vector spaces, not just in those with a finite spanning set; this proof invokes Zorn's Lemma. Recall that we can assume that our vector space is nontrivial, thanks to Example 3.35 of Chapter 3.
Theorem B.7. Every vector space has a basis.
Proof. Let S be the set of all linearly independent subsets of V. Since V is not trivial by assumption, it has at least one nonzero vector, say v, and the set $\{v\}$ is then linearly independent (Exercise 3.22.1). It follows that S is a nonempty set.

Define a partial order $\preceq$ on S by declaring, for any two linearly independent subsets X and Y, that $X \preceq Y$ if and only if $X \subseteq Y$. It is easy to check that this is indeed a partial order: First, given any linearly independent subset X of V, clearly $X \subseteq X$, so indeed $X \preceq X$. Next, if X and Y are two linearly independent subsets of V and if $X \preceq Y$ and $Y \preceq X$, this means that $X \subseteq Y$ and $Y \subseteq X$, so indeed X = Y. Finally, if $X \preceq Y$ and $Y \preceq Z$ for three linearly independent subsets X, Y, and Z of V, then this means that $X \subseteq Y \subseteq Z$, i.e., $X \subseteq Z$, so indeed $X \preceq Z$.

Our strategy will be to first establish that S has a maximal element with respect to this partial order, and then to show that this maximal element must be a basis for V.
Given any chain T in S (recall that this means that T consists of linearly independent subsets of V with the property that if X and Y are in T, then either $X \subseteq Y$ or $Y \subseteq X$), we will show that T has an upper bound in S. Write K for the union of all linearly independent subsets X that are contained in T. We claim that K is an upper bound for T. Let us first show that K is a linearly independent subset of V. By Definition 3.22 of Chapter 3, we need to show that every finite subset of K is linearly independent. Given any finite set of vectors $v_1, \ldots, v_n$ from K, note that each $v_i$ must live in some linearly independent subset $X_i$ in the chain T. Since T is a chain, the subsets in T are linearly ordered (this is where we use the defining property that the elements of a chain are linearly ordered!), so we must have $X_{i_1} \subseteq X_{i_2} \subseteq \cdots \subseteq X_{i_n}$ for some permutation $(i_1, i_2, \ldots, i_n)$ of the integers $(1, 2, \ldots, n)$. Thus, all the vectors $v_1, \ldots, v_n$ belong to $X_{i_n}$. But since $X_{i_n}$ is a linearly independent set, Definition 3.22 of Chapter 3 implies that the vectors $v_1, \ldots, v_n$ must be linearly independent! Since this is true for any finite set of vectors in K, we find that K is a linearly independent set. In particular, K is in S.

Now note that given any linearly independent subset X contained in the chain T, we have $X \subseteq K$ by the very definition of K, so by definition of the order relation, $X \preceq K$. This shows that indeed T has an upper bound in S.
By Zorn's Lemma, S has a maximal element, call it B. We will show that B must be a basis of V. Since B is already linearly independent, we only need to show that B spans V. So let v be any nonzero vector in V: we need to show that v can be written as a linear combination of elements of B. If v is already in B, there is nothing to prove (why?). If v is not in B, then $B \cup \{v\}$ must be linearly dependent, for otherwise $B \cup \{v\}$ would be a linearly independent subset of V strictly containing B, violating the maximality of B. Thus, there exists a relation $f_0 v + f_1 b_1 + f_2 b_2 + \cdots + f_k b_k = 0$ for some scalars $f_0, f_1, \ldots, f_k$ (not all zero), and some vectors $b_1, b_2, \ldots, b_k$ of B. Notice that $f_0 \neq 0$, since otherwise our relation would read $f_1 b_1 + f_2 b_2 + \cdots + f_k b_k = 0$ (with not all $f_i$ equal to zero), which is impossible since the $b_i$ are in B and B is a linearly independent set. Therefore, we can divide by $f_0$ to find $v = (-f_1/f_0)\, b_1 + (-f_2/f_0)\, b_2 + \cdots + (-f_k/f_0)\, b_k$. Hence v can be written as a linear combination of elements of B, so B spans V.

Thus, B is a basis of V. $\Box$
Remarks on Proposition 3.37, Chapter 3: Shrinking infinite spanning sets down to a basis. The proof that any spanning set of V can be shrunk to a basis, even when V is infinite-dimensional, involves a modification of the proof of Theorem B.7.

Let us use $\Sigma$ to denote the given spanning set of V, and, as in the proof of Theorem B.7, let S denote the set of all linearly independent sets of V that are contained in $\Sigma$. (The extra condition of containment in $\Sigma$ is where we depart from the proof of Theorem B.7.) Note that S is not empty, since $\Sigma$ is nonempty (recall V is not the trivial space), and therefore, for any nonzero $v \in \Sigma$, $\{v\}$ will be a linearly independent set, so $\{v\}$ will be an element of S.

Now impose the same partial order on S as in the proof of Theorem B.7: $X \preceq Y$ if and only if $X \subseteq Y$ for two sets X and Y in S. Argue exactly as in that proof that S must have a maximal element. (Note that if T is a chain in S, then K, the union of all the sets contained in T, will also be contained in $\Sigma$, since every set in T is contained in $\Sigma$.) Let B be a maximal element of S. (Note that by construction $B \subseteq \Sigma$.) The claim is that B is a basis for V.
To prove this, it is of course sufficient to prove that B spans V, since B is already linearly independent. For this, we claim that it is sufficient to show that every vector in $\Sigma$ is expressible as a linear combination of elements of B. For, assume that we have shown this. Then, given any vector $v \in V$, first write it as $v = f_1 u_1 + \cdots + f_n u_n$ for suitable vectors $u_i \in \Sigma$ and scalars $f_i$, invoking the fact that $\Sigma$ spans V. Next, since we would have shown that every vector in $\Sigma$ is expressible as a linear combination of elements in B, we find that each $u_i$ is expressible as $u_i = f_{i,1} b_{i,1} + \cdots + f_{i,n_i} b_{i,n_i}$ for some vectors $b_{i,j} \in B$ and scalars $f_{i,j}$. Substituting these expressions for each $u_i$ into the expression above for v, we find that v is expressible as a linear combination of elements of B, i.e., that B spans V.
To show that every vector in $\Sigma$ is expressible as a linear combination of elements of B, assume that some $u \in \Sigma$ is not expressible as a linear combination of elements of B. Then, exactly as in the proof of Proposition 3.49 (see how we showed $C_1 = C \cup \{v_{t+1}\}$ must be linearly independent), we would find that $B \cup \{u\}$ is linearly independent. But this contradicts the maximality of B! Hence every vector in $\Sigma$ must be expressible as a linear combination of elements of B, which means that B must be a basis. Since $B \subseteq \Sigma$, we have succeeded in shrinking $\Sigma$ down to a basis.
Remarks on Proposition 3.49, Chapter 3: the general case. The proof of this proposition when V is not assumed to be finite-dimensional involves just a minor modification of the proof of Theorem B.7. What we need to show is that there is a maximal linearly independent subset B of V that contains C. Then, exactly as in the proof of Theorem B.7, this maximal linearly independent set would be a basis of V, and of course, it would have been chosen so as to contain C. To show the existence of B, we need to consider the set S of all linearly independent subsets of V that contain C. One would impose a partial order on this set exactly as in the proof of Theorem B.7. Once again, S, with this partial order, will turn out to satisfy the extra hypothesis of Zorn's Lemma, and will hence have a maximal element. That maximal element would be our desired maximal linearly independent subset of V that contains C.
Appendix C
GNU Free Documentation License
Version 1.3, 3 November 2008
Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
<http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software,
because free software needs free documentation: a free program should come with
manuals providing the same freedoms that the software does. But this License is
not limited to software manuals; it can be used for any textual work, regardless of
subject matter or whether it is published as a printed book. We recommend this
License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains
a notice placed by the copyright holder saying it can be distributed under the terms
of this License. Such a notice grants a world-wide, royalty-free license, unlimited in
duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute
the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not Transparent is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
The "publisher" means any person or entity that distributes copies of the Document to the public.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commer-
cially or noncommercially, provided that this License, the copyright notices, and
the license notice saying this License applies to the Document are reproduced in all
copies, and that you add no other conditions whatsoever to those of this License.
You may not use technical measures to obstruct or control the reading or further
copying of the copies you make or distribute. However, you may accept compensa-
tion in exchange for copies. If you distribute a large enough number of copies you
must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you
may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more
than 100, you must either include a machine-readable Transparent copy along with
each Opaque copy, or state in or with each Opaque copy a computer-network lo-
cation from which the general network-using public has access to download using
public-standard network protocols a complete Transparent copy of the Document,
free of added material. If you use the latter option, you must take reasonably pru-
dent steps, when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated location until
at least one year after the last time you distribute an Opaque copy (directly or
through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document
well before redistributing any large number of copies, to give them a chance to
provide you with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of
the Document, and from those of previous versions (which should, if there
were any, be listed in the History section of the Document). You may use
the same title as a previous version if the original publisher of that version
gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text
and in their titles. Section numbers or the equivalent are not considered part
of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties; for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of Invariant Sections in
the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents
released under this License, and replace the individual copies of this License in the
various documents with a single copy that is included in the collection, provided that
you follow the rules of this License for verbatim copying of each of the documents
in all other respects.
You may extract a single document from such a collection, and distribute it
individually under this License, provided you insert a copy of this License into the
extracted document, and follow this License in all other respects regarding verbatim
copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections
with translations requires special permission from their copyright holders, but you
may include translations of some or all Invariant Sections in addition to the original
versions of these Invariant Sections. You may include a translation of this License,
and all the license notices in the Document, and any Warranty Disclaimers, provided
that you also include the original English version of this License and the original
versions of those notices and disclaimers. In case of a disagreement between the
translation and the original version of this License or a notice or disclaimer, the
original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as
expressly provided under this License. Any attempt otherwise to copy, modify,
sublicense, or distribute it is void, and will automatically terminate your rights
under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses
of parties who have received copies or rights from you under this License. If your
rights have been terminated and not permanently reinstated, receipt of a copy of
some or all of the same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
11. RELICENSING
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site
under CC-BY-SA on the same site at any time before August 1, 2009, provided the
MMC is eligible for relicensing.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the
License in the document and put the following copyright and license notices just
after the title page:
Copyright © YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with . . . Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being
LIST.
If you have Invariant Sections without Cover Texts, or some other combination
of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend
releasing these examples in parallel under your choice of free software license, such
as the GNU General Public License, to permit their use in free software.
Index
Active learning, xi
Bernstein polynomials, 152
Binary operation, 24, 28, 95
Chain, 220
Closure
under addition, 41
under multiplication, 41
Division algorithm, 4, 83
Euclid, 15
Euler's $\varphi$-function, 18
Fermat's Little Theorem, 18, 192
Field, 48
examples, 48
field extension, 51
finite, 50
multiplicative group, 48
Galois theory, 208, 210
Greatest common divisor, 6, 20
Group, 25, 157, 158
$Gl_n(F)$, 175
$Sl_n(F)$, 176
abelian, 26
center, 172
cyclic group, 173, 183
Dihedral group
$D_3$, 165
$D_4$, 169
direct product, 174
homomorphism, 198
Fundamental Theorem, 203
kernel, 199
isomorphism, 202
examples, 202
Lorentz group, 213
nonabelian, 161
normal subgroup, 194
order, 187
order of element, 183
Orthogonal group, 178, 204
orthogonal group
of a quadratic form, 212
quotient group, 197
subgroup, 180
coset, 187
subgroup generated by element, 182
symmetric group
$S_2$, 162
$S_3$, 159
$S_n$, 161
d-cycle, 162
symmetry group of set with structure, 158, 206
table, 159, 160
upper triangular invertible matrices,
176
Harmonic series, 19
Ideal, 52
coset with respect to, 57
examples, 54
ideal generated by a set, 56
principal ideal, 56
Integers, 1
addition, 25
composite, 9
division algorithm, 4
divisor, 3
greatest common divisor, 6, 20
least common multiple, 18
linear combination, 6
multiple, 3
multiplication, 26
prime, 9
relatively prime, 9
unique prime factorization, 11
Least common multiple, 18
Matrices
over R, 32
over arbitrary ring, 33
strictly upper triangular, 45
upper triangular, 44
Natural numbers, 2
Number system, 28
Partial Order, 219
Polynomial
Bernstein, 152
expression, 91
Polynomial expression, 91
Polynomials
division algorithm, 83
Prime, 9
infinitely many, 15
Prime Number Theorem, 10
Principal Ideal Domain, 83
quadratic form, 211
Ring, 28
center, 80
commutative, 29
examples, 29
homomorphism, 63
examples, 67
Fundamental Theorem, 75
kernel, 66
integral domains, 46
invertible element, 47
irreducible element, 81
isomorphism, 63, 72
automorphism, 74
examples, 72
nilpotent element, 79
noncommutative, 29
quotient ring, 57, 60
examples, 62
ring extension, 41
unit, 47
zero-divisors, 45
Spanning set, 105
redundancy, 106
Subfield, 51
Subring, 41
examples, 43
generated by an element, 90
examples, 92
test, 42
Subspace
test, 126
Unique prime factorization, 11
Vector Space
linear transformations
Fundamental Theorem, 146
Vector space, 96
basis, 111
examples, 111
basis vectors, 111
dimension, 120
examples, 97
isomorphism, 145
linear combination, 104
linear transformations, 133
kernel, 136
matrix representation, 137
linearly dependent, 109
linearly independent, 109
quotient space, 125, 129, 131
scalar multiplication, 97
scalars, 97
spanning set, 105
subspace, 125
coset with respect to, 130
examples, 127
vectors, 97
Weierstrass Approximation Theorem, 153
Well-Ordering Principle, 2
Zorn's Lemma, 115, 122, 221