
REAL ANALYSIS

Eugene Boman & Robert Rogers


Pennsylvania State University & SUNY
Fredonia
Prelude to Real Analysis
The typical introductory real analysis text starts with an analysis of the real number system and uses this to develop the definition
of a limit, which is then used as a foundation for the definitions encountered thereafter. While this is certainly a reasonable
approach from a logical point of view, it is not how the subject evolved, nor is it necessarily the best way to introduce students to
the rigorous but highly non-intuitive definitions and proofs found in analysis.
This book proposes that an effective way to motivate these definitions is to tell one of the stories (there are many) of the historical
development of the subject, from its intuitive beginnings to modern rigor. The definitions and techniques are motivated by the
actual difficulties encountered by the intuitive approach and are presented in their historical context. However, this is not a history
of analysis book. It is an introductory analysis textbook, presented through the lens of history. As such, it does not simply insert
historical snippets to supplement the material. The history is an integral part of the topic, and students are asked to solve problems
that occur as they arise in their historical context.
This book covers the major topics typically addressed in an introductory undergraduate course in real analysis in their historical
order. Written with the student in mind, the book provides guidance for transforming an intuitive understanding into rigorous
mathematical arguments. For example, in addition to more traditional problems, major theorems are often stated and a proof is
outlined. The student is then asked to fill in the missing details as a homework problem.
Thumbnail: Real number line with some constants such as π. (Public Domain; User:Phrood).

1 https://math.libretexts.org/@go/page/8296
Pennsylvania State University & SUNY
Fredonia
Real Analysis

Eugene Boman & Robert Rogers


This text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and like the hundreds
of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all,
pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully
consult the applicable license(s) before pursuing such effects.
Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their
students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new
technologies to support learning.

The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform
for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our
students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-
access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource
environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being
optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are
organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields)
integrated.
The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot
Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions
Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 1246120,
1525057, and 1413739.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation nor the US Department of Education.
Have questions or comments? For information about adoptions or adaptions contact [email protected]. More information on our
activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our blog
(http://Blog.Libretexts.org).
This text was compiled on 12/01/2023
TABLE OF CONTENTS
Prelude to Real Analysis
Licensing

1: Numbers - Real (ℝ) and Rational (ℚ)


1.1: Real and Rational Numbers
1.E: Numbers - Real (ℝ) and Rational (ℚ) (Exercises)

2: Calculus in the 17th and 18th Centuries


2.1: Newton and Leibniz Get Started
2.2: Power Series as Infinite Polynomials
2.E: Calculus in the 17th and 18th Centuries (Exercises)

3: Questions Concerning Power Series


3.1: Taylor’s Formula
3.2: Series Anomalies
3.E: Questions Concerning Power Series (Exercises)

4: Convergence of Sequences and Series


4.1: Sequences of Real Numbers
4.2: The Limit as a Primary Tool
4.3: Divergence of a Series
4.E: Convergence of Sequences and Series (Exercises)

5: Convergence of the Taylor Series- A “Tayl” of Three Remainders


5.1: The Integral Form of the Remainder
5.2: Lagrange’s Form of the Remainder
5.3: Cauchy’s Form of the Remainder
5.E: Convergence of the Taylor Series- A “Tayl” of Three Remainders (Exercises)

6: Continuity - What It Isn’t and What It Is


6.1: An Analytic Denition of Continuity
6.2: Sequences and Continuity
6.3: The Denition of the Limit of a Function
6.4: The Derivative - An Afterthought
6.E: Continuity - What It Isn’t and What It Is (Exercises)

7: Intermediate and Extreme Values


7.1: Completeness of the Real Number System
7.2: Proof of the Intermediate Value Theorem
7.3: The Bolzano-Weierstrass Theorem
7.4: The Supremum and the Extreme Value Theorem
7.E: Intermediate and Extreme Values (Exercises)

8: Back to Power Series
8.1: Uniform Convergence
8.2: Uniform Convergence- Integrals and Derivatives
8.3: Radius of Convergence of a Power Series
8.4: Boundary Issues and Abel’s Theorem

9: Back to the Real Numbers


9.1: Trigonometric Series
9.2: Infinite Sets
9.3: Cantor’s Theorem and Its Consequences

10: Epilogue to Real Analysis


10.1: On the Nature of Numbers
10.2: Building the Real Numbers

Index
Index
Glossary
Detailed Licensing

Licensing
A detailed breakdown of this resource's licensing can be found in Back Matter/Detailed Licensing.

CHAPTER OVERVIEW

1: Numbers - Real (ℝ) and Rational (ℚ)


1.1: Real and Rational Numbers
1.E: Numbers - Real (ℝ) and Rational (ℚ) (Exercises)

Thumbnail: Bust of Pythagoras of Samos in the Capitoline Museums, Rome. (CC BY-SA 3.0; Galilea).

This page titled 1: Numbers - Real (ℝ) and Rational (ℚ) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

1.1: Real and Rational Numbers
 Learning Objectives

Explain the relationship between the rational numbers and the real numbers

The set of real numbers (denoted, R) is badly named. The real numbers are no more or less real – in the non-mathematical sense
that they exist – than any other set of numbers, just like the set of rational numbers (Q), the set of integers (Z), or the set of natural
numbers (N). The name “real numbers” is (almost) an historical anomaly not unlike the name “Pythagorean Theorem” which was
actually known and understood long before Pythagoras lived.
When calculus was being invented in the 17th century, numbers were thoroughly understood, or so it was believed. They were, after all, just numbers. Combine them. We call that addition. If you add them repeatedly we call it multiplication. Subtraction and division were similarly understood.
It was (and still is) useful to visualize these things in a more concrete way. If we take a stick of length 2 and another of length 3 and
lay them end-to-end we get a length of 5. This is addition. If we lay them end-to-end but at right angles then our two sticks are the
length and width of a rectangle whose area is 6. This is multiplication.
Of course measuring lengths with whole numbers has limitations, but these are not hard to fix. If we have a stick of length 1 and another of length 2, then we can find another stick whose length compared to 1 is the same (has the same proportion) as 1 is to 2. That number, of course, is 1/2.

Figure 1.1.1 : Length stick.


Notice how fraction notation reects the operation of comparing 1 to 2. This comparison is usually referred to as the ratio of 1 to 2
so numbers of this sort are called rational numbers. The set of rational numbers is denoted Q for quotients. In grade school they
were introduced to you as fractions. Once fractions are understood, this visualization using line segments (sticks) leads quite
naturally to their representation with the rational number line.

Figure 1.1.2 : Rational number line.


This seems to work as a visualization because the points on a line and the rational numbers share certain properties. Chief among
these is that between any two points on the rational line there is another point, just as between any two rational numbers there is
another rational number.

 Exercise 1.1.1

Let a, b, c, d ∈ N and find a rational number between a/b and c/d.
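Without spoiling the exercise, here is a quick numerical sanity check of the claim itself (a check of one convenient candidate, the midpoint, not a proof). Python’s `fractions.Fraction` does exact rational arithmetic, so we can confirm that the midpoint of two rationals is again rational and lies strictly between them.

```python
from fractions import Fraction

def between(a, b, c, d):
    """Return a rational strictly between a/b and c/d (assumes a/b != c/d).

    The midpoint (a/b + c/d)/2 is one convenient candidate; Fraction keeps
    the arithmetic exact, so the result is genuinely rational.
    """
    return (Fraction(a, b) + Fraction(c, d)) / 2

m = between(1, 3, 1, 2)                      # midpoint of 1/3 and 1/2
print(m)                                     # 5/12
print(Fraction(1, 3) < m < Fraction(1, 2))   # True
```

The midpoint is only one of infinitely many rationals between the two; the exercise asks you to justify why such a number always exists.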

This is all very clean and satisfying until we examine it just a bit closer. Then it becomes quite mysterious. Consider again the
rational numbers a/b and c/d. If we think of these as lengths we can ask, “Is there a third length, say α, such that we can divide a/b into M pieces, each of length α, and also divide c/d into N pieces, each of length α?” A few minutes thought should convince you that this is the same as the problem of finding a common denominator, so α = 1/(bd) will work nicely. (Confirm this yourself.)

You may be wondering what we’re making all of this fuss about. Obviously this is always true. In fact the previous paragraph gives
an outline of a very nice little proof of this. Here are the theorem and its proof presented formally.

 Theorem 1.1.1

Let a, b, c, and d be integers. There is a number α ∈ Q such that Mα = a/b and Nα = c/d where M and N are also integers.
Proof:
To prove this theorem we will display α, M and N. It is your responsibility to confirm that these actually work. Here they are: α = 1/(bd), M = ad, and N = cb.

 Exercise 1.1.2

Confirm that α , M , and N as given in the proof of Theorem 1.1.1 satisfy the requirements of the theorem.

It should be clear that it is necessary for a , b , c , and d to be integers for everything to work out. Otherwise M and N will not also
be integers as required.
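Exercise 1.1.2 asks you to confirm the witnesses by hand; exact rational arithmetic with Python’s `fractions` module gives a quick machine check for a sample pair (a sanity check of one example, not a substitute for the general argument).

```python
from fractions import Fraction

def witnesses(a, b, c, d):
    """The witnesses from the proof of Theorem 1.1.1 for a/b and c/d
    (b and d nonzero integers): alpha = 1/(bd), M = ad, N = cb."""
    alpha = Fraction(1, b * d)
    M, N = a * d, c * b
    return alpha, M, N

# Check the requirements of the theorem for a/b = 2/3 and c/d = 5/7:
alpha, M, N = witnesses(2, 3, 5, 7)
print(M * alpha == Fraction(2, 3))   # True
print(N * alpha == Fraction(5, 7))   # True
```

Because `Fraction` reduces automatically, Mα = ad/(bd) collapses to a/b exactly, mirroring the cancellation in the written proof.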
This suggests the following very deep and important question: Are there lengths which can not be expressed as the ratio of two
integer lengths? The answer, of course, is yes. Otherwise we wouldn’t have asked the question. Notice that for such numbers our
proof of Theorem 1.1.1 is not valid (why not?).
One of the best known examples of such a number is the circumference of a circle with diameter 1. This is the number usually
denoted by π. But circles are extremely complex objects – they only seem simple because they are so familiar. Arising as it does
from a circle, you would expect the number π to be very complex as well and this is true. In fact π is an exceptionally weird
number for a variety of reasons. Let’s start with something a little easier to think about.
Squares are simple. Two sets of parallel lines at right angles, all of the same length. What could be simpler? If we construct a

square with sides having length 1 then its diagonal has length √2.

Figure 1.1.3 : Square.


This is a number which cannot be expressed as the ratio of two integers. That is, it is irrational. This has been known since ancient
times, but it is still quite disconcerting when first encountered. It seems so counter-intuitive that the intellect rebels. “This can’t be
right,” it says. “That’s just crazy!”
Nevertheless it is true and we can prove it is true as follows.
What happens if we suppose that the square root of two can be expressed as a ratio of integers? We will show that this leads
irrevocably to a conclusion that is manifestly not true.

Suppose √2 = a/b where a and b are integers. Suppose further that the fraction a/b is in lowest terms. This assumption is crucial because if a/b is in lowest terms we know that at most one of them is even.
So

a/b = √2  (1.1.1)

Squaring both sides gives:

a² = 2b²  (1.1.2)

Therefore a² is even. But if a² is even then a must be even also (why?). If a is even then a = 2k for some integer k. Therefore

4k² = 2b²  (1.1.3)

or

2k² = b²  (1.1.4)

Therefore b² is also even and so b must be even too. But this is impossible. We’ve just concluded that a and b are both even and this conclusion follows directly from our initial assumption that at most one of them could be even.
This is nonsense. Where is our error? It is not in any single step of our reasoning. That was all solid. Check it again to be sure.

Therefore our error must be in the initial assumption that √2 could be expressed as a fraction. That assumption must therefore be false. In other words, √2 cannot be so expressed.
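The proof is airtight, but it can be comforting to watch a machine fail to find a counterexample. This sketch searches every fraction a/b in lowest terms with denominator up to 1000 for one satisfying a² = 2b²; the proof above says the search must come up empty.

```python
from math import gcd, isqrt

def fractions_squaring_to_two(max_den):
    """Search for integers a, b with a/b in lowest terms, 1 <= b <= max_den,
    and a^2 == 2*b^2. The irrationality proof says none exists."""
    hits = []
    for b in range(1, max_den + 1):
        a = isqrt(2 * b * b)              # only candidate: floor(b*sqrt(2))
        if a * a == 2 * b * b and gcd(a, b) == 1:
            hits.append((a, b))
    return hits

print(fractions_squaring_to_two(1000))    # []  -- no counterexample found
```

Of course no finite search proves anything; only the parity argument in the text rules out all fractions at once.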

 Exercise 1.1.3

Show that each of the following numbers is irrational:
a. √3
b. √5
c. ∛2 (the cube root of 2)
d. i (= √−1)
e. The square root of every positive integer which is not the square of an integer.


The fact that √2 is not rational is cute and interesting, but unless, like the Pythagoreans of ancient Greece, you have a strongly held religious conviction that all numbers are rational, it does not seem terribly important. On the other hand, the very existence of √2 raises some interesting questions. For example, what can the symbol 4^√2 possibly mean? If the exponent were a rational number, say m/n, then clearly 4^(m/n) = ⁿ√(4^m). But since √2 ≠ m/n for any integers m and n, how do we interpret 4^√2? Does it have any meaning at all?
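One way to give 4^√2 meaning, previewed here numerically, is as the limit of the perfectly well-defined rational powers 4^(m/n) as m/n → √2. The fractions below are the continued-fraction convergents of √2 (a standard fact about √2, stated here without proof), and the errors shrink rapidly.

```python
import math

# Continued-fraction convergents of sqrt(2); each is an excellent rational
# approximation for the size of its denominator.
convergents = [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (99, 70)]

target = 4 ** math.sqrt(2)          # what 4^sqrt(2) "should" be
for m, n in convergents:
    approx = 4 ** (m / n)           # a rational power: the n-th root of 4^m
    print(f"4^({m}/{n}) = {approx:.6f}   error = {abs(approx - target):.2e}")
```

Making this limiting process rigorous requires exactly the machinery of convergence that this book develops.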
The more you think about this, the more puzzling the existence of irrational numbers becomes. Suppose for example we reconsider

the construction of a line segment of length √2. It is clear that the construction works and that we really can build such a line
segment. It exists.
Repeat the construction but this time let’s put the base side on the rational line.

Figure 1.1.4: Construction of a line segment of length √2 with the base side on the rational line.
We know that the diagonal of this square is √2 as indicated. And we know that √2 is not a rational number.
Now leave the diagonal pinned at (0, 0) but allow it to rotate down so that it coincides with the x−axis.

Figure 1.1.5 : Diagonal rotated down to coincide with the x-axis.

The end of our diagonal will trace out an arc of the circle with radius √2. When the diagonal coincides with the x-axis, its endpoint will obviously be the point (√2, 0) as shown.
But wait! We’re using the rational number line for our x-axis. That means the only points on the x-axis are those that correspond to rational numbers (fractions). But we know that √2 is not rational! Conclusion: There is no point (√2, 0). It simply doesn’t exist. Put differently, there is a hole in the rational number line right where √2 should be. This is weird!
Recall that between any two rational numbers there is always another. This fact is what led us to represent the rational numbers
with a line in the first place.

Figure 1.1.6 : Rational number line.


But it’s even worse than that. It’s straightforward to show that √3, √5, etc. are all irrational too. So are π and e, though they aren’t as easy to show. It seems that the rational line has a bunch of holes in it. Infinitely many. And yet, the following theorem is true.

 Theorem 1.1.2

a. Between any two distinct real numbers there is a rational number.


b. Between any two distinct real numbers there is an irrational number.

Both parts of this theorem rely on a judicious use of what is now called the Archimedean Property of the Real Number System,
which can be formally stated as follows.

 Archimedean Property

Given any two positive real numbers a and b, there is a positive integer n such that na > b.

Physically this says that we can empty an ocean b with a teaspoon a, provided we are willing to use the teaspoon a large number of
times n .
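For concrete a and b such an n is easy to produce. A minimal sketch (the use of floor here is a computational convenience, not part of the property itself, and floating-point values only stand in for the reals):

```python
import math

def archimedean_n(a, b):
    """Given positive reals a and b, return a positive integer n with n*a > b.
    n = floor(b/a) + 1 works because it strictly exceeds b/a."""
    return math.floor(b / a) + 1

# "Emptying an ocean with a teaspoon": teaspoon a = 0.25, ocean b = 1000.
n = archimedean_n(0.25, 1000.0)
print(n, n * 0.25 > 1000.0)   # 4001 True
```

The point of the property is not the computation but the guarantee: no positive real, however small, is infinitely small relative to another.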
This is such an intuitively straightforward concept that it is easy to accept it without proof. Until the invention of calculus, and even for some time after that, it was simply assumed. However, as the foundational problems posed by the concepts of calculus were understood and solved we were eventually led to a deeper understanding of the complexities of the real number system. The Archimedean Property is no longer taken as an unproved axiom, but rather it is now understood to be a consequence of other axioms. We will show this later, but for now we will accept it as obviously true just as Archimedes did.
With the invention of calculus, mathematicians of the seventeenth century began to use objects which didn’t satisfy the
Archimedean Property (in fact, so did Archimedes). As we shall see in the next chapter, when Leibniz wrote the first paper on his

version of the calculus, he followed this practice by explicitly laying out rules for manipulating infinitely small quantities
(infinitesimals). These were taken to be actual numbers which are not zero and yet smaller than any real number. The notation he
used was dx (an infinitely small displacement in the x direction), and dy (an infinitely small displacement in the y direction).
These symbols should look familiar to you. They are the same dy and dx used to form the derivative symbol dy/dx that you learned about in calculus.
Mathematicians of the seventeenth and eighteenth centuries made amazing scientific and mathematical progress exploiting these
infinitesimals, even though they were foundationally suspect. No matter how many times you add the infinitesimal dx to itself the result will not be bigger than, say, 10^(−1000), which is very bizarre.
When foundational issues came to the forefront, infinitesimals fell somewhat out of favor. You probably didn’t use them very much in calculus. Most of the time you probably used the prime notation, f′(x), introduced by Lagrange in the eighteenth century. Some of the themes in this book are: why differentials fell out of favor, what they were replaced with, and how the modern notations you learned in calculus evolved over time.
To conclude this aside on the Archimedean Property, the idea of infinitesimals was revisited in the twentieth century by the logician Abraham Robinson. Robinson was able to put the idea of infinitesimals on a solid logical foundation. But in the 18th century, the existence of infinitesimal numbers was shaky to say the very least. However this did not prevent mathematicians from successfully exploiting these infinitely small quantities.
We will come back to this saga in later chapters, but for now we return to Theorem 1.1.2.

 Sketch of Proof for theorem 1.1.2

We will outline the proof of part (a) of Theorem 1.1.2 and indicate how it can be used to prove part (b).
Let α and β be real numbers with α > β . There are two cases.
Case 1: α − β > 1 . In this case there is at least one integer between α and β. Since integers are rational we are done.
Case 2: α − β ≤ 1 . In this case, by the Archimedean Property there is a positive integer, say n , such that
n(α − β) = nα − nβ > 1 . Now there will be an integer between nα and nβ. You should now be able to find a rational

number between α and β.

For part (b), divide α and β by any positive irrational number and apply part (a). There are a couple of details to keep in mind.
These are considered in the following problem.
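The sketch of Case 2 translates directly into a procedure. The version below works with floating-point α and β, so it illustrates the argument rather than proving it: pick n with n(α − β) > 1 (the Archimedean step), then m = ⌊nβ⌋ + 1 is an integer strictly between nβ and nα, and m/n is the desired rational.

```python
import math
from fractions import Fraction

def rational_between(beta, alpha):
    """Return a Fraction strictly between beta and alpha (beta < alpha),
    following Case 2 of the sketch."""
    n = math.floor(1 / (alpha - beta)) + 1   # Archimedean: n*(alpha - beta) > 1
    m = math.floor(n * beta) + 1             # n*beta < m <= n*beta + 1 < n*alpha
    return Fraction(m, n)

r = rational_between(math.sqrt(2), math.sqrt(2) + 0.001)
print(r, math.sqrt(2) < r < math.sqrt(2) + 0.001)
```

Notice that the returned fraction is an exact rational, even though the inputs are only machine approximations of the reals in the theorem.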

 Exercise 1.1.4

a. Prove that the product of a nonzero rational number and an irrational number is irrational.
b. Turn the above ideas into a proof of Theorem 1.1.2.

As a practical matter, the existence of irrational numbers isn’t really very important. In light of Theorem 1.1.2, any irrational

number can be approximated arbitrarily closely by a rational number. So if we’re designing a bridge and √2 is needed we just use
1.414 instead. The error introduced is less than 0.001 = 1/1000 so it probably doesn’t matter.
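The bridge-builder’s claim is easy to verify, and it generalizes: truncating the decimal expansion of √2 after k digits gives a rational approximation with error below 10^(−k). A small sketch (using the machine’s float value of √2 as a stand-in for the real number):

```python
import math

root2 = math.sqrt(2)                              # float stand-in for sqrt(2)
for k in range(1, 7):
    approx = math.floor(root2 * 10**k) / 10**k    # k-digit decimal truncation
    print(approx, abs(root2 - approx) < 10**-k)   # error always below 10^-k
```

Truncation always undershoots by less than one unit in the last kept digit, which is exactly the 10^(−k) bound.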

But from a theoretical point of view this is devastating. When calculus was invented, the rational numbers were suddenly not up to
the task of justifying the concepts and operations we needed to work with.
Newton explicitly founded his version of calculus on the assumption that we can think of variable quantities as being generated by
a continuous motion. If our number system has holes in it such continuous motion is impossible because we have no way to jump

over the gaps. So Newton simply postulated that there were no holes. He filled in the hole where √2 should be. He simply said, yes, there is a number there called √2, and he did the same with all of the other holes.
To be sure there is no record of Newton explicitly saying, “Here’s how I’m going to fill in the holes in the rational number line.”
Along with everyone else at the time, he simply assumed there were no holes and moved on. It took about 200 years of puzzling
and arguing over the contradictions, anomalies and paradoxes to work out the consequences of that apparently simple assumption.
The task may not yet be fully accomplished, but by the 20th century the properties of the real number system (R) as an extension of the rational number system (Q) were well understood. Here are both systems visualized as lines:

Figure 1.1.7 : Real and Rational number systems.
Impressive, no? The reason they look alike, except for the labels Q and R of course, is that our ability to draw sketches of the objects we’re studying utterly fails when we try to sketch R as different from Q. All of the holes in Q really are there, but the non-holes are packed together so closely that we can’t separate them in a drawing. This inability to sketch the objects we study will be a frequent source of frustration.
Of course, this will not stop us from drawing sketches. When we do our imaginations will save us because it is possible to imagine
Q as distinct from R . But put away the idea that a sketch is an accurate representation of anything. At best our sketches will only

be aids to the imagination.


So, at this point we will simply assume the existence of the real numbers. We will assume also that they have all of the properties
that we are used to. This is perfectly acceptable as long as we make our assumptions explicit. However we need to be aware that, so
far, the existence and properties of the real numbers is an assumption that has not been logically derived. Any time we make an
assumption we need to be prepared to either abandon it completely if we find that it leads to nonsensical results, or to re-examine
the assumption in the light of these results to see if we can find another assumption that subsumes the first and explains the
(apparently) nonsensical results.

This page titled 1.1: Real and Rational Numbers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

1.E: Numbers - Real (ℝ) and Rational (ℚ) (Exercises)
Q1
Determine if each of the following is always rational or always irrational. Justify your answers.
a. The sum of two rational numbers.
b. The sum of two irrational numbers.
c. The sum of a rational and an irrational number.

Q2
Is it possible to have two rational numbers, a and b, such that a^b is irrational? If so, display an example of such a and b. If not, prove that it is not possible.

Q3
Decide if it is possible to have two irrational numbers, a and b, such that a^b is rational. Prove it in either case.

This page titled 1.E: Numbers - Real (ℝ) and Rational (ℚ) (Exercises) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

CHAPTER OVERVIEW

2: Calculus in the 17th and 18th Centuries


2.1: Newton and Leibniz Get Started
2.2: Power Series as Infinite Polynomials
2.E: Calculus in the 17th and 18th Centuries (Exercises)

Thumbnail: Engraving of Gottfried Wilhelm Leibniz. (Public Domain; Pierre Savart)


2.1: Newton and Leibniz Get Started
 Learning Objectives
Explain Leibniz’s approach to the Product Rule
Explain Newton's approach to the Product Rule

Leibniz’s Calculus Rules


The rules for calculus were first laid out in Gottfried Wilhelm Leibniz’s 1684 paper Nova methodus pro maximis et minimis,
itemque tangentibus, quae nec fractas nec irrationales, quantitates moratur, et singulare pro illi calculi genus (A New Method for
Maxima and Minima as Well as Tangents, Which is Impeded Neither by Fractional Nor by Irrational Quantities, and a Remarkable
Type of Calculus for This).

Figure 2.1.1 : Gottfried Wilhelm Leibniz.


Leibniz started with subtraction. That is, if x₁ and x₂ are very close together then their difference, Δx = x₂ − x₁, is very small. He expanded this idea to say that if x₁ and x₂ are infinitely close together (but still distinct) then their difference, dx, is infinitesimally small (but not zero).


This idea is logically very suspect and Leibniz knew it. But he also knew that when he used his calculus differentialis1 he was
getting correct answers to some very hard problems. So he persevered.
Leibniz called both Δx and dx “differentials” (Latin for difference) because he thought of them as, essentially, the same thing.
Over time it has become customary to refer to the infinitesimal dx as a differential, reserving “difference” for the finite case, Δx.
This is why calculus is often called “differential calculus.”
In his paper Leibniz gave rules for dealing with these infinitely small differentials. Specifically, given a variable quantity x, dx
represented an infinitesimal change in x. Differentials are related via the slope of the tangent line to a curve. That is, if y = f (x),
then dy and dx are related by
dy = (slope of the tangent line) ⋅ dx (2.1.1)

Leibniz then divided by dx giving

dy/dx = (slope of the tangent line) (2.1.2)

The elegant and expressive notation Leibniz invented was so useful that it has been retained through the years despite some profound changes in the underlying concepts. For example, Leibniz and his contemporaries would have viewed the symbol dy/dx as an actual quotient of infinitesimals, whereas today we define it via the limit concept first suggested by Newton.
As a result the rules governing these differentials are very modern in appearance:
d(constant) = 0 (2.1.3)

d(z − y + w + x) = dz − dy + dw + dx (2.1.4)

2.1.1 https://math.libretexts.org/@go/page/7924
d(xv) = xdv + vdx (2.1.5)

d(v/y) = (ydv − vdy)/yy (2.1.6)

and, when a is an integer:

d(x^a) = ax^(a−1) dx (2.1.7)

Leibniz states these rules without proof: “. . . the demonstration of all this will be easy to one who is experienced in such matters . . ..” As an example, mathematicians in Leibniz’s day would be expected to understand intuitively that if c is a constant, then d(c) = c − c = 0. Likewise, d(x + y) = dx + dy is really an extension of (x₂ + y₂) − (x₁ + y₁) = (x₂ − x₁) + (y₂ − y₁).

Leibniz’s Approach to the Product Rule


The explanation of the product rule using differentials is a bit more involved, but Leibniz expected that mathematicians would be fluent enough to derive it. The product p = xv can be thought of as the area of the following rectangle

Figure 2.1.2 : Area of a rectangle.


With this in mind, dp = d(xv) can be thought of as the change in area when x is changed by dx and v is changed by dv. This can be seen as the L-shaped region in the following drawing.

Figure 2.1.3 : Change in area when x is changed by dx and v is changed by dv .

By dividing the L-shaped region into three rectangles we obtain
d(xv) = xdv + vdx + dxdv (2.1.8)

Even though dx and dv are infinitely small, Leibniz reasoned that dxdv is even more infinitely small (quadratically infinitely
small?) compared to xdv and vdx and can thus be ignored leaving
d(xv) = xdv + vdx (2.1.9)

You should feel some discomfort at the idea of simply tossing the product dxdv aside because it is “comparatively small.” This
means you have been well trained, and have thoroughly internalized Newton’s dictum [10]: “The smallest errors may not, in
mathematical matters, be scorned.” It is logically untenable to toss aside an expression just because it is small. Even less so should
we be willing to ignore an expression on the grounds that it is “infinitely smaller” than another quantity which is itself “infinitely
small.”

Newton and Leibniz both knew this as well as we do. But they also knew that their methods worked. They gave verifiably correct
answers to problems which had, heretofore, been completely intractable. It is the mark of their genius that both men persevered in
spite of the very evident difficulties their methods entailed.

Newton’s Approach to the Product Rule


In the Principia, Newton “proved” the Product Rule as follows: Let x and v be “flowing² quantities” and consider the rectangle, R, whose sides are x and v. R is also a flowing quantity and we wish to find its fluxion (derivative) at any time.

Figure 2.1.4 : Isaac Newton

First increment x and v by Δx/2 and Δv/2 respectively. Then the corresponding increment of R is

(x + Δx/2)(v + Δv/2) = xv + x(Δv/2) + v(Δx/2) + (ΔxΔv)/4 (2.1.10)

Now decrement x and v by the same amounts:

(x − Δx/2)(v − Δv/2) = xv − x(Δv/2) − v(Δx/2) + (ΔxΔv)/4 (2.1.11)

Subtracting the right side of equation 2.1.11 from the right side of equation 2.1.10 gives

ΔR = xΔv + vΔx (2.1.12)

which is the total change of R = xv over the intervals Δx and Δv and also recognizably the Product Rule.
This argument is no better than Leibniz’s as it relies heavily on the number 1/2 to make it work. If we take any other increments in
x and v whose total lengths are Δx and Δv it will simply not work. Try it and see.

In Newton’s defense, he wasn’t really trying to justify his mathematical methods in the Principia. His attention there was on
physics, not math, so he was really just trying to give a convincing demonstration of his methods. You may decide for yourself how
convincing his demonstration is.
Notice that there is no mention of limits of difference quotients or derivatives. In fact, the term derivative was not coined until
1797, by Lagrange. In a sense, these topics were not necessary at the time, as Leibniz and Newton both assumed that the curves
they dealt with had tangent lines and, in fact, Leibniz explicitly used the tangent line to relate two differential quantities. This was
consistent with the thinking of the time and for the duration of this chapter we will also assume that all quantities are differentiable.
As we will see later this assumption leads to difficulties.
Both Newton and Leibniz were satisfied that their calculus provided answers that agreed with what was known at the time. For example, d(x²) = d(xx) = xdx + xdx = 2xdx and d(x³) = d(x²x) = x²dx + xd(x²) = x²dx + x(2xdx) = 3x²dx, results that were essentially derived by others in different ways.

 Exercise 2.1.1
a. Use Leibniz’s product rule d(xv) = xdv + vdx to show that if n is a positive integer then d(x^n) = nx^(n−1) dx.

b. Use Leibniz’s product rule to derive the quotient rule

d(v/y) = (ydv − vdy)/yy (2.1.13)

c. Use the quotient rule to show that if n is a positive integer, then

d(x^(−n)) = −nx^(−n−1) dx (2.1.14)

 Exercise 2.1.2

Let p and q be integers with q ≠ 0. Show

d(x^(p/q)) = (p/q) x^(p/q − 1) dx (2.1.15)

Leibniz also provided applications of his calculus to prove its worth. As an example he derived Snell’s Law of Refraction from his
calculus rules as follows.
Given that light travels through air at a speed of v_a and travels through water at a speed of v_w, the problem is to find the fastest path from point A to point B.

Figure 2.1.5 : Fastest path that light travels from point A to point B .
According to Fermat’s Principle of Least Time, this fastest path is the one that light will travel.
Using the fact that T ime = Distance/V elocity and the labeling in the picture below we can obtain a formula for the time T it
takes for light to travel from A to B .

Figure 2.1.6 : Fermat’s Principle of Least Time.

T = √(x² + a²)/v_a + √((c − x)² + b²)/v_w (2.1.16)

Using the rules of Leibniz’s calculus, we obtain

dT = ((1/v_a)(1/2)(x² + a²)^(−1/2)(2x) + (1/v_w)(1/2)((c − x)² + b²)^(−1/2)(2(c − x)(−1))) dx

   = ((1/v_a) x/√(x² + a²) − (1/v_w)(c − x)/√((c − x)² + b²)) dx

Using the fact that at the minimum value for T, dT = 0, we have that the fastest path from A to B must satisfy (1/v_a) x/√(x² + a²) = (1/v_w)(c − x)/√((c − x)² + b²). Inserting the following angles

Figure 2.1.7 : Fastest path that light travels.

we get that the path that light travels must satisfy

sin θ_a/v_a = sin θ_w/v_w (2.1.17)

which is Snell’s Law.
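The minimization above is easy to check numerically. A minimal Python sketch (the geometry and the speeds v_a > v_w below are made-up values, not from the text) that minimizes the travel time T over a fine grid of crossing points and verifies that the minimizer satisfies Snell's Law:

```python
import math

# Hypothetical values (not from the text): A sits a units above the
# interface, B sits b units below it, and c is the horizontal separation.
a, b, c = 1.0, 1.0, 2.0
v_air, v_water = 1.0, 0.75  # made-up speeds, with light slower in water

def travel_time(x):
    """Total travel time when the light crosses the interface at position x."""
    return math.hypot(x, a) / v_air + math.hypot(c - x, b) / v_water

# Brute-force minimization of T over a fine grid of crossing points.
n_points = 200_000
x_min = min((i * c / n_points for i in range(n_points + 1)), key=travel_time)

# At the minimizer, sin(theta_a)/v_air should equal sin(theta_w)/v_water.
sin_a = x_min / math.hypot(x_min, a)
sin_w = (c - x_min) / math.hypot(c - x_min, b)
print(abs(sin_a / v_air - sin_w / v_water))  # nearly zero
```

Refining the grid drives the discrepancy toward zero, which is exactly what equation 2.1.17 predicts.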


To compare 18th century and modern techniques we will consider Johann Bernoulli’s solution of the Brachistochrone problem. In 1696, Bernoulli posed, and solved, the Brachistochrone problem; that is, to find the shape of a frictionless wire joining points A and B so that the time it takes for a bead to slide down under the force of gravity is as small as possible.

Figure 2.1.8 : Finding shape of a frictionless wire joining points A and B .


Bernoulli posed this “path of fastest descent” problem to challenge the mathematicians of Europe and used his solution to
demonstrate the power of Leibniz’s calculus as well as his own ingenuity.
I, Johann Bernoulli, address the most brilliant mathematicians in the world. Nothing is more attractive to intelligent people
than an honest, challenging problem, whose possible solution will bestow fame and remain as a lasting monument.
Following the example set by Pascal, Fermat, etc., I hope to gain the gratitude of the whole scientific community by placing before the finest mathematicians of our time a problem which will test their methods and the strength of their intellect. If someone communicates to me the solution of the proposed problem, I shall publicly declare him worthy of praise. [11]

Figure 2.1.9 : Johann Bernoulli.


In addition to Johann’s, solutions were obtained from Newton, Leibniz, Johann’s brother Jacob Bernoulli, and the Marquis de
l’Hopital [15]. At the time there was an ongoing and very vitriolic controversy raging over whether Newton or Leibniz had been
the first to invent calculus. An advocate of the methods of Leibniz, Bernoulli did not believe Newton would be able to solve the
problem using his methods. Bernoulli attempted to embarrass Newton by sending him the problem. However Newton did solve it.
At this point in his life Newton had all but quit science and mathematics and was fully focused on his administrative duties as
Master of the Mint. In part due to rampant counterfeiting, England’s money had become severely devalued and the nation was on
the verge of economic collapse. The solution was to recall all of the existing coins, melt them down, and strike new ones. As
Master of the Mint this job fell to Newton [8]. As you might imagine this was a rather Herculean task. Nevertheless, according to
his niece:
When the problem in 1696 was sent by Bernoulli–Sir I.N. was in the midst of the hurry of the great recoinage and did not
come home till four from the Tower very much tired, but did not sleep till he had solved it, which was by four in the morning.
(quoted in [2], page 201)
He is later reported to have complained, “I do not love ... to be ... teezed by forreigners about Mathematical things [2].”
Newton submitted his solution anonymously, presumably to avoid more controversy. Nevertheless the methods used were so
distinctively Newton’s that Bernoulli is said to have exclaimed “Tanquam ex ungue leonem.”3
Bernoulli’s ingenious solution starts, interestingly enough, with Snell’s Law of Refraction. He begins by considering the stratified
medium in the following figure, where an object travels with velocities v₁, v₂, v₃, . . . in the various layers.

Figure 2.1.10 : Bernoulli's solution.


By repeatedly applying Snell’s Law he concluded that the fastest path must satisfy

sin θ₁/v₁ = sin θ₂/v₂ = sin θ₃/v₃ = ⋯ (2.1.18)

In other words, the ratio of the sine of the angle that the curve makes with the vertical and the speed remains constant along this
fastest path.
If we think of a continuously changing medium as stratified into infinitesimal layers and extend Snell’s law to an object whose
speed is constantly changing,

Figure 2.1.11 : Snell's law for an object changing speed continuously.


then along the fastest path, the ratio of the sine of the angle that the curve’s tangent makes with the vertical, α , and the speed, v ,
must remain constant.
sin α/v = c (2.1.19)

If we include axes and let P denote the position of the bead at a particular time then we have the following picture.

Figure 2.1.12 : Path traveled by the bead.


In the above figure, s denotes the length that the bead has traveled down to point P (that is, the arc length of the curve from the origin to that point) and a denotes the tangential component of the acceleration due to gravity g. Since the bead travels only under the influence of gravity, dv/dt = a.

To get a sense of how physical problems were approached using Leibniz’s calculus we will use the above equation to show that v = √(2gy).

By similar triangles we have a/g = dy/ds. As a student of Leibniz, Bernoulli would have regarded dy/ds as a fraction, so

a ds = g dy (2.1.20)

and since acceleration is the rate of change of velocity we have

(dv/dt) ds = g dy (2.1.21)

Again, 18th century European mathematicians regarded dv, dt, and ds as infinitesimally small numbers which nevertheless obey all of the usual rules of algebra. Thus we can rearrange the above to get

dv (ds/dt) = g dy (2.1.22)

Since ds/dt is the rate of change of position with respect to time it is, in fact, the velocity of the bead. That is,

v dv = g dy (2.1.23)

Bernoulli would have interpreted this as a statement that two rectangles of height v and g, with respective widths dv and dy, have equal area. Summing (integrating) all such rectangles we get:

∫ v dv = ∫ g dy (2.1.24)

v²/2 = gy (2.1.25)

or

v = √(2gy) (2.1.26)
You are undoubtedly uncomfortable with the cavalier manipulation of infinitesimal quantities you’ve just witnessed, so we’ll pause for a moment now to compare a modern development of equation 2.1.26 to Bernoulli’s. As before we begin with the equation:

a/g = dy/ds (2.1.27)

a = g (dy/ds) (2.1.28)

Moreover, since acceleration is the derivative of velocity this is the same as:
dv/dt = g (dy/ds) (2.1.29)

Now observe that by the Chain Rule dv/dt = (dv/ds)(ds/dt). The physical interpretation of this formula is that velocity will depend on s, how far down the wire the bead has moved, but that the distance traveled will depend on how much time has elapsed. Therefore

(dv/ds)(ds/dt) = g (dy/ds) (2.1.30)

or

(ds/dt)(dv/ds) = g (dy/ds) (2.1.31)

and since ds/dt = v,

v (dv/ds) = g (dy/ds) (2.1.32)

Integrating both sides with respect to s gives:


∫ v (dv/ds) ds = g ∫ (dy/ds) ds (2.1.33)

∫ v dv = g ∫ dy (2.1.34)

and integrating gives


v²/2 = gy (2.1.35)

as before.
In effect, in the modern formulation we have traded the simplicity and elegance of differentials for a comparatively cumbersome repeated use of the Chain Rule. No doubt you noticed when taking Calculus that in the differential notation of Leibniz, the Chain Rule looks like “canceling” an expression in the top and bottom of a fraction: (dy/du)(du/dx) = dy/dx. This is because for 18th century mathematicians, this is exactly what it was.

To put it another way, 18th century mathematicians wouldn’t have recognized a need for what we call the Chain Rule because this operation was a triviality for them. Just reduce the fraction. This raises the question: Why did we abandon such a clear, simple interpretation of our symbols in favor of the, comparatively, more cumbersome modern interpretation? This is one of the questions we will try to answer in this course.
Returning to the Brachistochrone problem we observe that sin α/v = c, and since sin α = dx/ds we see that

(dx/ds)/√(2gy) = c (2.1.36)

dx/√(2gy(ds)²) = c (2.1.37)

dx/√(2gy[(dx)² + (dy)²]) = c (2.1.38)

Bernoulli was then able to solve this differential equation.

 Exercise 2.1.3

Show that the equations x = (t − sin t)/(4gc²), y = (1 − cos t)/(4gc²) satisfy equation 2.1.37. Bernoulli recognized this solution to be an inverted cycloid, the curve traced by a fixed point on a circle as the circle rolls along a horizontal surface.

This illustrates the state of calculus in the late 1600’s and early 1700’s; the foundations of the subject were a bit shaky but there was
no denying its power.

References
1
This translates, loosely, as the calculus of differences.
2
Newton’s approach to calculus – his ‘Method of Fluxions’ – depended fundamentally on motion. That is, he viewed his variables (fluents) as changing (flowing or fluxing) in time. The rate of change of a fluent he called a fluxion. As a foundation both Leibniz’s and Newton’s approaches have fallen out of favor, although both are still universally used as a conceptual approach, a “way of thinking,” about the ideas of calculus.
3
I know the lion by his claw.


2.2: Power Series as Infinite Polynomials
 Learning Objectives

Rules of differential and integral calculus applied to polynomials

Applied to polynomials, the rules of differential and integral calculus are straightforward. Indeed, differentiating and integrating
polynomials represent some of the easiest tasks in a calculus course. For example, computing

∫ (7 − x + x²) dx (2.2.1)

is relatively easy compared to computing

∫ ∛(1 + x³) dx. (2.2.2)

Unfortunately, not all functions can be expressed as a polynomial. For example, f (x) = sin x cannot be since a polynomial has
only finitely many roots and the sine function has infinitely many roots, namely {nπ | n ∈ ℤ}. A standard technique in the 18th

century was to write such functions as an “infinite polynomial,” what we typically refer to as a power series. Unfortunately an
“infinite polynomial” is a much more subtle object than a mere polynomial, which by definition is finite. For now we will not
concern ourselves with these subtleties. We will follow the example of our forebears and manipulate all “polynomial-like” objects
(finite or infinite) as if they are polynomials.

 Definition 2.2.1: Power Series

A power series centered at a is a series of the form


∑_{n=0}^∞ a_n(x − a)^n = a_0 + a_1(x − a) + a_2(x − a)² + ⋯ (2.2.3)

Often we will focus on the behavior of power series ∑_{n=0}^∞ a_n x^n, centered around 0, as the series centered around other values of a are obtained by shifting a series centered at 0.
Before we continue, we will make the following notational comment. The most advantageous way to represent a series is using
summation notation since there can be no doubt about the pattern to the terms. After all, this notation contains a formula for the
general term. This being said, there are instances where writing this formula is not practical. In these cases, it is acceptable to write
the sum by supplying the first few terms and using ellipses (the three dots). If this is done, then enough terms must be included to
make the pattern clear to the reader.
Returning to our definition of a power series, consider, for example, the geometric series

∑_{n=0}^∞ x^n = 1 + x + x² + ⋯. (2.2.4)

If we multiply this series by (1 − x), we obtain

(1 − x)(1 + x + x² + ⋯) = (1 + x + x² + ⋯) − (x + x² + x³ + ⋯) = 1 (2.2.5)

This leads us to the power series representation

1/(1 − x) = 1 + x + x² + ⋯ = ∑_{n=0}^∞ x^n (2.2.6)

If we substitute x = 1/10 into the above, we obtain

1 + 1/10 + (1/10)² + (1/10)³ + ⋯ = 1/(1 − 1/10) = 10/9 (2.2.7)
2.2.1 https://math.libretexts.org/@go/page/7927
This agrees with the fact that 0.333⋯ = 1/3, and so 0.111⋯ = 1/9, and 1.111⋯ = 10/9.
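These substitutions are easy to check numerically with partial sums (we can only ever add finitely many terms); a minimal Python sketch:

```python
# Partial sums of the geometric series 1 + x + x^2 + ... versus 1/(1 - x).
def geometric_partial_sum(x, num_terms):
    return sum(x**n for n in range(num_terms))

x = 1 / 10
print(geometric_partial_sum(x, 20))  # approximately 1.1111... = 10/9
print(1 / (1 - x))                   # the closed form 10/9
```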
There are limitations to these formal manipulations however. Substituting x = 1 or x = 2 yields the questionable results

1/0 = 1 + 1 + 1 + ⋯ and 1/(−1) = 1 + 2 + 2² + ⋯ (2.2.8)

We are missing something important here, though it may not be clear exactly what. A series representation of a function works sometimes, but there are some problems. For now, we will continue to follow the example of our 18th century predecessors and ignore them. That is, for the rest of this section we will focus on the formal manipulations to obtain and use power series representations of various functions. Keep in mind that this is all highly suspect until we can resolve problems like those just given.

Power series became an important tool in analysis in the 1700’s. By representing various functions as power series they could be dealt with as if they were (infinite) polynomials. The following is an example.

 Example 2.2.1:

Solve the following Initial Value problem:¹ Find y(x) given that dy/dx = y, y(0) = 1.

Solution

Assuming the solution can be expressed as a power series we have

y = ∑_{n=0}^∞ a_n x^n = a_0 + a_1 x + a_2 x² + ⋯

Differentiating gives us

dy/dx = a_1 + 2a_2 x + 3a_3 x² + 4a_4 x³ + ⋯

Since dy/dx = y we see that

a_1 = a_0, 2a_2 = a_1, 3a_3 = a_2, . . . , na_n = a_{n−1}, . . . .

This leads to the relationship

a_n = (1/n) a_{n−1} = (1/(n(n − 1))) a_{n−2} = ⋯ = (1/n!) a_0

Thus the series solution of the differential equation is

y = ∑_{n=0}^∞ (a_0/n!) x^n = a_0 ∑_{n=0}^∞ (1/n!) x^n

Using the initial condition y(0) = 1, we get 1 = a_0(1 + 0 + (1/2!)0² + ⋯) = a_0. Thus the solution to the initial value problem is y = ∑_{n=0}^∞ (1/n!) x^n. Let’s call this function E(x). Then by definition

E(x) = ∑_{n=0}^∞ (1/n!) x^n = 1 + x/1! + x²/2! + x³/3! + ⋯
Let’s examine some properties of this function. The first property is clear from definition 2.2.1.
Property 1
E(0) = 1 (2.2.9)

Property 2
E(x + y) = E(x)E(y) (2.2.10)

To see this we multiply the two series together, so we have

E(x)E(y) = (∑_{n=0}^∞ x^n/n!)(∑_{n=0}^∞ y^n/n!)

         = (x⁰/0! + x¹/1! + x²/2! + x³/3! + ⋯)(y⁰/0! + y¹/1! + y²/2! + y³/3! + ⋯)

         = x⁰y⁰/(0!0!) + x⁰y¹/(0!1!) + x¹y⁰/(1!0!) + x⁰y²/(0!2!) + x¹y¹/(1!1!) + x²y⁰/(2!0!) + x⁰y³/(0!3!) + x¹y²/(1!2!) + x²y¹/(2!1!) + x³y⁰/(3!0!) + ⋯

         = x⁰y⁰/(0!0!) + (x⁰y¹/(0!1!) + x¹y⁰/(1!0!)) + (x⁰y²/(0!2!) + x¹y¹/(1!1!) + x²y⁰/(2!0!)) + (x⁰y³/(0!3!) + x¹y²/(1!2!) + x²y¹/(2!1!) + x³y⁰/(3!0!)) + ⋯

         = 1/0! + (1/1!)((1!/(0!1!))x⁰y¹ + (1!/(1!0!))x¹y⁰) + (1/2!)((2!/(0!2!))x⁰y² + (2!/(1!1!))x¹y¹ + (2!/(2!0!))x²y⁰) + (1/3!)((3!/(0!3!))x⁰y³ + (3!/(1!2!))x¹y² + (3!/(2!1!))x²y¹ + (3!/(3!0!))x³y⁰) + ⋯

         = 1/0! + (1/1!)((1 choose 0)x⁰y¹ + (1 choose 1)x¹y⁰) + (1/2!)((2 choose 0)x⁰y² + (2 choose 1)x¹y¹ + (2 choose 2)x²y⁰) + (1/3!)((3 choose 0)x⁰y³ + (3 choose 1)x¹y² + (3 choose 2)x²y¹ + (3 choose 3)x³y⁰) + ⋯

         = 1/0! + (1/1!)(x + y)¹ + (1/2!)(x + y)² + (1/3!)(x + y)³ + ⋯

         = E(x + y)
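Although the computation above is purely formal, the identity can be tested numerically with partial sums; a short Python sketch:

```python
import math

def E(x, num_terms=60):
    """Partial sum of E(x) = sum of x^n / n! for n = 0, 1, 2, ..."""
    return sum(x**n / math.factorial(n) for n in range(num_terms))

x, y = 1.2, 0.8
print(abs(E(x) * E(y) - E(x + y)))  # essentially zero (round-off only)
```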

Property 3 If m is a positive integer then

E(mx) = (E(x))^m (2.2.11)

In particular, E(m) = (E(1))^m.

 Exercise 2.2.1

Prove Property 3.

Property 4

E(−x) = 1/E(x) = (E(x))^(−1) (2.2.12)

 Exercise 2.2.2

Prove Property 4.

Property 5 If n is an integer with n ≠ 0, then

E(1/n) = ⁿ√(E(1)) = (E(1))^(1/n) (2.2.13)

 Exercise 2.2.3

Prove Property 5.

Property 6 If m and n are integers with n ≠ 0, then

E(m/n) = (E(1))^(m/n) (2.2.14)

 Exercise 2.2.4
Prove Property 6.

 Definition 2.2.2: E

Let E(1) be denoted by the number e. Using the series

e = E(1) = ∑_{n=0}^∞ 1/n! (2.2.15)

we can approximate e to any degree of accuracy. In particular e ≈ 2.71828.
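A quick Python sketch shows how rapidly the partial sums of this series stabilize:

```python
import math

# Partial sums of e = sum of 1/n!: they stabilize after very few terms.
partial = 0.0
for n in range(11):
    partial += 1 / math.factorial(n)
print(round(partial, 5))  # 2.71828
```

Eleven terms already match e to five decimal places; the factorial in the denominator makes the tail of the series vanishingly small.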

In light of Property 6, we see that for any rational number r, E(r) = e^r. Not only does this give us the series representation e^r = ∑_{n=0}^∞ (1/n!) r^n for any rational number r, but it gives us a way to define e^x for irrational values of x as well. That is, we can define

e^x = E(x) = ∑_{n=0}^∞ (1/n!) x^n (2.2.16)

for any real number x.


As an illustration, we now have e^√2 = ∑_{n=0}^∞ (1/n!)(√2)^n. The expression e^√2 is meaningless if we try to interpret it as one irrational number raised to another. What does it mean to raise anything to the √2 power? However the series ∑_{n=0}^∞ (1/n!)(√2)^n does seem to have meaning and it can be used to extend the exponential function to irrational exponents. In fact, defining the exponential function via this series answers the question we raised earlier: What does 4^√2 mean?

It means

4^√2 = e^(√2 log 4) = ∑_{n=0}^∞ (√2 log 4)^n/n! (2.2.17)

This may seem to be the long way around just to define something as simple as exponentiation. But this is a fundamentally misguided attitude. Exponentiation only seems simple because we’ve always thought of it as repeated multiplication (in ℤ) or root-taking (in ℚ). When we expand the operation to the real numbers this simply can’t be the way we interpret something like 4^√2. How do you take the product of √2 copies of 4? The concept is meaningless. What we need is an interpretation of 4^√2 which is consistent with, say, 4^(3/2) = (√4)³ = 8. This is exactly what the series representation of e^x provides.

We also have a means of computing integrals as series. For example, the famous “bell shaped” curve given by the function f(x) = (1/√(2π)) e^(−x²/2) is of vital importance in statistics and must be integrated to calculate probabilities. The power series we developed gives us a method of integrating this function. For example, we have

∫_{x=0}^b (1/√(2π)) e^(−x²/2) dx = (1/√(2π)) ∫_{x=0}^b (∑_{n=0}^∞ (1/n!)(−x²/2)^n) dx

                                = (1/√(2π)) ∑_{n=0}^∞ ((−1)^n/(n! 2^n)) ∫_{x=0}^b x^{2n} dx

                                = (1/√(2π)) ∑_{n=0}^∞ ((−1)^n/(n! 2^n)) (b^{2n+1}/(2n + 1))

This series can be used to approximate the integral to any degree of accuracy. The ability to provide such calculations made power series of paramount importance in the 1700’s.
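For instance, taking b = 1 the series can be summed term by term and checked against a trusted value of the integral (obtainable in closed form from the error function, since (1/√(2π)) ∫₀^b e^(−x²/2) dx = (1/2) erf(b/√2)); a Python sketch:

```python
import math

def bell_integral_series(b, num_terms=30):
    """Series value of (1/sqrt(2*pi)) * integral from 0 to b of exp(-x^2/2) dx."""
    total = 0.0
    for n in range(num_terms):
        total += (-1)**n * b**(2*n + 1) / (math.factorial(n) * 2**n * (2*n + 1))
    return total / math.sqrt(2 * math.pi)

b = 1.0
series_value = bell_integral_series(b)
exact_value = 0.5 * math.erf(b / math.sqrt(2))  # closed form via the error function
print(abs(series_value - exact_value))  # essentially zero
```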

 Exercise 2.2.5
a. Show that if y = ∑_{n=0}^∞ a_n x^n satisfies the differential equation d²y/dx² = −y, then

a_{n+2} = (−1/((n + 2)(n + 1))) a_n (2.2.18)

and conclude that

y = a_0 + a_1 x − (1/2!) a_0 x² − (1/3!) a_1 x³ + (1/4!) a_0 x⁴ + (1/5!) a_1 x⁵ − (1/6!) a_0 x⁶ − (1/7!) a_1 x⁷ + ⋯

b. Since y = sin x satisfies d²y/dx² = −y we see that

sin x = a_0 + a_1 x − (1/2!) a_0 x² − (1/3!) a_1 x³ + (1/4!) a_0 x⁴ + (1/5!) a_1 x⁵ − (1/6!) a_0 x⁶ − (1/7!) a_1 x⁷ + ⋯ (2.2.19)

for some constants a_0 and a_1. Show that in this case a_0 = 0 and a_1 = 1 and obtain

sin x = x − (1/3!) x³ + (1/5!) x⁵ − (1/7!) x⁷ + ⋯ = ∑_{n=0}^∞ ((−1)^n/(2n + 1)!) x^{2n+1}

 Exercise 2.2.6
a. Use the series

sin x = x − (1/3!) x³ + (1/5!) x⁵ − (1/7!) x⁷ + ⋯ = ∑_{n=0}^∞ ((−1)^n/(2n + 1)!) x^{2n+1} (2.2.20)

to obtain the series

cos x = 1 − (1/2!) x² + (1/4!) x⁴ − (1/6!) x⁶ + ⋯ = ∑_{n=0}^∞ ((−1)^n/(2n)!) x^{2n}

b. Let s(x, N) = ∑_{n=0}^N ((−1)^n/(2n + 1)!) x^{2n+1} and c(x, N) = ∑_{n=0}^N ((−1)^n/(2n)!) x^{2n} and use a computer algebra system to plot these for −4π ≤ x ≤ 4π, N = 1, 2, 5, 10, 15. Describe what is happening to the series as N becomes larger.

 Exercise 2.2.7

Use the geometric series, 1/(1 − x) = 1 + x + x² + x³ + ⋯ = ∑_{n=0}^∞ x^n, to obtain a series for 1/(1 + x²) and use this to obtain the series

arctan x = x − (1/3) x³ + (1/5) x⁵ − ⋯ = ∑_{n=0}^∞ (−1)^n (1/(2n + 1)) x^{2n+1} (2.2.21)

Use the series above to obtain the series

π/4 = ∑_{n=0}^∞ (−1)^n (1/(2n + 1))

The series for arctangent was known by James Gregory (1638-1675) and it is sometimes referred to as “Gregory’s series.” Leibniz independently discovered π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯ by examining the area of a circle. Though it gives us a means for approximating π to any desired accuracy, the series converges too slowly to be of any practical use. For example, if we compute the sum of the first 1000 terms we get

4 (∑_{n=0}^{1000} (−1)^n (1/(2n + 1))) ≈ 3.142591654 (2.2.22)

which only approximates π to two decimal places.
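This computation is easy to reproduce; a short Python sketch:

```python
import math

# Terms n = 0 .. 1000 of Gregory's series for pi/4, multiplied by 4.
approx = 4 * sum((-1)**n / (2*n + 1) for n in range(1001))
print(approx)                 # approximately 3.142591654
print(abs(approx - math.pi))  # error near 0.001: only two decimal places
```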
Newton knew of these results and the general scheme of using series to compute areas under curves. These results motivated Newton to provide a series approximation for π as well, which, hopefully, would converge faster. We will use modern terminology to streamline Newton’s ideas. First notice that π/4 = ∫_{x=0}^1 √(1 − x²) dx, as this integral gives the area of one quarter of the unit circle.

The trick now is to find a series that represents √(1 − x²).
To this end we start with the binomial theorem
To this end we start with the binomial theorem
(a + b)^N = ∑_{n=0}^N (N choose n) a^{N−n} b^n (2.2.23)

where

(N choose n) = N!/(n!(N − n)!)
             = N(N − 1)(N − 2) ⋯ (N − n + 1)/n!
             = (∏_{j=0}^{n−1} (N − j))/n!

Unfortunately, we now have a small problem with our notation which will be a source of confusion later if we don’t fix it. So we
will pause to address this matter. We will come back to the binomial expansion afterward.
This last expression is becoming awkward in much the same way that an expression like

1 + 1/2 + (1/2)² + (1/2)³ + ⋯ + (1/2)^k (2.2.24)

is awkward. Just as this sum becomes less cumbersome when written as ∑_{n=0}^k (1/2)^n, the product

N(N − 1)(N − 2) ⋯ (N − n + 1) (2.2.25)

is less awkward when we write it as ∏_{j=0}^{n−1} (N − j).
(N − j) .
A capital pi (Π) is used to denote a product in the same way that a capital sigma (Σ) is used to denote a sum. The most familiar example would be writing

n! = ∏_{j=1}^n j (2.2.26)

Just as it is convenient to define 0! = 1, we will find it convenient to define ∏_{j=1}^0 j = 1. Similarly, the fact that (N choose 0) = 1 leads to ∏_{j=0}^{−1} (N − j) = 1. Strange as this may look, it is convenient and is consistent with the convention ∑_{j=0}^{−1} s_j = 0.

Returning to the binomial expansion and recalling our convention

∏_{j=0}^{−1} (N − j) = 1 (2.2.27)

we can write,²

(1 + x)^N = 1 + ∑_{n=1}^N (∏_{j=0}^{n−1} (N − j)/n!) x^n = ∑_{n=0}^N (∏_{j=0}^{n−1} (N − j)/n!) x^n (2.2.28)

There is an advantage to using this convention (especially when programming a product into a computer), but this is not a deep mathematical insight. It is just a notational convenience and we don’t want you to fret over it, so we will use both formulations (at least initially).
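The product formulation really is convenient to program. A minimal Python sketch of the generalized binomial coefficient (the helper name binom is ours, not the text's):

```python
from math import factorial

def binom(N, n):
    """Generalized binomial coefficient via the product formula.
    For n = 0 the loop body never runs, giving the empty product 1."""
    prod = 1
    for j in range(n):
        prod *= N - j
    return prod / factorial(n)

print(binom(5, 2))    # 10.0, the ordinary binomial coefficient
print(binom(5, 0))    # 1.0, from the empty product convention
print(binom(5, 7))    # 0.0, since the factor 5 - 5 appears in the product
print(binom(1/2, 2))  # -0.125, meaningful even for fractional N
```

Notice that nothing in the code cares whether N is a nonnegative integer, which is exactly the observation that leads to what follows.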
Notice that we can extend the above definition of (N choose n) to values n > N. In this case, ∏_{j=0}^{n−1} (N − j) will equal 0, as one of the factors in the product will be 0 (the one where j = N). This gives us that (N choose n) = 0 when n > N, and so

(1 + x)^N = 1 + ∑_{n=1}^∞ (∏_{j=0}^{n−1} (N − j)/n!) x^n = ∑_{n=0}^∞ (∏_{j=0}^{n−1} (N − j)/n!) x^n (2.2.29)

holds true for any nonnegative integer N. Essentially Newton asked if it could be possible that the above equation could hold for values of N which are not nonnegative integers. For example, if the equation held true for N = 1/2, we would obtain

(1 + x)^(1/2) = 1 + ∑_{n=1}^∞ (∏_{j=0}^{n−1} (1/2 − j)/n!) x^n = ∑_{n=0}^∞ (∏_{j=0}^{n−1} (1/2 − j)/n!) x^n (2.2.30)

or

(1 + x)^(1/2) = 1 + (1/2)x + ((1/2)((1/2) − 1)/2!)x² + ((1/2)((1/2) − 1)((1/2) − 2)/3!)x³ + ⋯ (2.2.31)

Notice that since 1/2 is not an integer the series no longer terminates. Although Newton did not prove that this series was correct (nor did we), he tested it by multiplying the series by itself. When he saw that by squaring the series he started to obtain 1 + x + 0x² + 0x³ + ⋯, he was convinced that the series was exactly equal to √(1 + x).
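Newton's check can be replayed numerically with partial sums; a Python sketch:

```python
from math import factorial, sqrt

def sqrt_series(x, num_terms=40):
    """Partial sum of the binomial series for (1 + x)^(1/2)."""
    total = 0.0
    for n in range(num_terms):
        prod = 1.0
        for j in range(n):
            prod *= 0.5 - j
        total += prod / factorial(n) * x**n
    return total

x = 0.5
s = sqrt_series(x)
print(abs(s * s - (1 + x)))  # squaring the series recovers 1 + x
print(abs(s - sqrt(1 + x)))  # and the series agrees with sqrt(1 + x)
```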

 Exercise 2.2.8

Consider the series representation

(1 + x)^(1/2) = 1 + ∑_{n=1}^∞ (∏_{j=0}^{n−1} (1/2 − j)/n!) x^n = ∑_{n=0}^∞ (∏_{j=0}^{n−1} (1/2 − j)/n!) x^n

Multiply this series by itself and compute the coefficients for x⁰, x¹, x², x³, x⁴ in the resulting series.

 Exercise 2.2.9

Let

\[S(x, M) = \sum_{n=0}^{M}\left(\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{n!}\right)x^n \tag{2.2.32}\]

Use a computer algebra system to plot \(S(x, M)\) for \(M = 5, 10, 15, 95, 100\) and compare these to the graph of \(\sqrt{1 + x}\). What seems to be happening? For what values of \(x\) does the series appear to converge to \(\sqrt{1 + x}\)?
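Lacking a CAS, one can tabulate \(S(x, M)\) numerically instead of plotting it. The sketch below (plain Python; the function names are ours, not the authors') compares the partial sums with \(\sqrt{1 + x}\) at a few sample points.

```python
import math

def coefficient(n):
    # n-th binomial-series coefficient: prod_{j=0}^{n-1}(1/2 - j) / n!
    prod = 1.0
    for j in range(n):
        prod *= 0.5 - j
    return prod / math.factorial(n)

def S(x, M):
    # partial sum S(x, M) of Newton's series for sqrt(1 + x)
    return sum(coefficient(n) * x**n for n in range(M + 1))

for x in (0.5, 0.9, 1.5):
    for M in (5, 10, 15, 95, 100):
        print(x, M, S(x, M), math.sqrt(1 + x))
```

For \(|x| \le 1\) the partial sums settle down on \(\sqrt{1 + x}\); at \(x = 1.5\) they blow up instead, foreshadowing the interval-of-convergence question taken up later.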

Convinced that he had the correct series, Newton used it to find a series representation of \(\int_{x=0}^{1}\sqrt{1 - x^2}\,dx\).

 Exercise 2.2.10
Use the series \((1 + x)^{\frac{1}{2}} = \sum_{n=0}^{\infty}\left(\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{n!}\right)x^n\) to obtain the series

\[\frac{\pi}{4} = \int_{x=0}^{1}\sqrt{1 - x^2}\,dx = \sum_{n=0}^{\infty}\left(\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{n!}\right)\left(\frac{(-1)^n}{2n + 1}\right) = 1 - \frac{1}{6} - \frac{1}{40} - \frac{1}{112} - \frac{5}{1152} - \cdots\]
Use a computer algebra system to sum the first 100 terms of this series and compare the answer to \(\frac{\pi}{4}\).
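In place of a CAS, a few lines of Python (our sketch) will sum the first 100 terms; the running coefficient update avoids recomputing the product and factorial from scratch at every term.

```python
import math

def newton_pi_over_4(terms):
    # Sum the first `terms` terms of Newton's series for pi/4.
    total = 0.0
    coeff = 1.0  # running value of prod_{j=0}^{n-1}(1/2 - j) / n!
    for n in range(terms):
        total += coeff * (-1)**n / (2*n + 1)
        coeff *= (0.5 - n) / (n + 1)  # advance to the next coefficient
    return total

approx = newton_pi_over_4(100)
print(approx, math.pi / 4, abs(approx - math.pi / 4))
```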

Again, Newton had a series which could be verified (somewhat) computationally. This convinced him even further that he had the
correct series.

 Exercise 2.2.11
a. Show that

\[\int_{x=0}^{1/2}\sqrt{x - x^2}\,dx = \sum_{n=0}^{\infty}\frac{(-1)^n\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{\sqrt{2}\,n!(2n + 3)2^n}\]

and use this to show that

\[\pi = 16\left(\sum_{n=0}^{\infty}\frac{(-1)^n\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{\sqrt{2}\,n!(2n + 3)2^n}\right) \tag{2.2.33}\]

b. We now have two series for calculating \(\pi\): the one from part (a) and the one derived earlier, namely

\[\pi = 4\left(\sum_{n=0}^{\infty}\frac{(-1)^n}{2n + 1}\right)\]

We will explore which one converges to \(\pi\) faster. With this in mind, define \(S1(N) = 16\left(\sum_{n=0}^{N}\frac{(-1)^n\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{\sqrt{2}\,n!(2n + 3)2^n}\right)\) and \(S2(N) = 4\left(\sum_{n=0}^{N}\frac{(-1)^n}{2n + 1}\right)\). Use a computer algebra system to compute \(S1(N)\) and \(S2(N)\) for \(N = 5, 10, 15, 20\). Which one appears to converge to \(\pi\) faster?
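The race is easy to run in code (our sketch, following the definitions of \(S1\) and \(S2\) above):

```python
import math

def S1(N):
    # Partial sums of the series from part (a); prod holds prod_{j=0}^{n-1}(1/2 - j).
    total, prod = 0.0, 1.0
    for n in range(N + 1):
        total += (-1)**n * prod / (math.sqrt(2) * math.factorial(n) * (2*n + 3) * 2**n)
        prod *= 0.5 - n
    return 16 * total

def S2(N):
    # Partial sums of the Leibniz series pi = 4(1 - 1/3 + 1/5 - ...).
    return 4 * sum((-1)**n / (2*n + 1) for n in range(N + 1))

for N in (5, 10, 15, 20):
    print(N, S1(N), S2(N), math.pi)
```

The \(2^n\) in the denominator of \(S1\) makes its terms shrink geometrically, so it wins decisively.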

In general the series representation

\[(1 + x)^{\alpha} = \sum_{n=0}^{\infty}\left(\frac{\prod_{j=0}^{n-1}(\alpha - j)}{n!}\right)x^n = 1 + \alpha x + \frac{\alpha(\alpha - 1)}{2!}x^2 + \frac{\alpha(\alpha - 1)(\alpha - 2)}{3!}x^3 + \cdots\]

is called the binomial series (or Newton's binomial series). This series is correct when \(\alpha\) is a non-negative integer (after all, that is how we got the series). We can also see that it is correct when \(\alpha = -1\) as we obtain
\[(1 + x)^{-1} = \sum_{n=0}^{\infty}\left(\frac{\prod_{j=0}^{n-1}(-1 - j)}{n!}\right)x^n = 1 + (-1)x + \frac{-1(-1 - 1)}{2!}x^2 + \frac{-1(-1 - 1)(-1 - 2)}{3!}x^3 + \cdots = 1 - x + x^2 - x^3 + \cdots\]

which can be obtained from the geometric series \(\frac{1}{1 - x} = 1 + x + x^2 + \cdots\) by substituting \(-x\) for \(x\).
In fact, the binomial series is the correct series representation for all values of the exponent α (though we haven’t proved this yet).

 Exercise 2.2.12

Let \(k\) be a positive integer. Find the power series, centered at zero, for \(f(x) = (1 - x)^{-k}\) by
a. Differentiating the geometric series (k − 1) times.
b. Applying the binomial series.
c. Compare these two results.
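Before doing the algebra, it can be reassuring to check numerically that both routes lead to the same coefficients. The comparison below is our sketch: it imports the standard combinatorial formula \(\binom{n+k-1}{n}\) for the coefficients of \((1 - x)^{-k}\) (a fact not derived in the text) and checks the binomial-series route against it with exact rational arithmetic.

```python
from fractions import Fraction
from math import comb, factorial

def binomial_series_coeff(alpha, n):
    # Coefficient of x^n in (1 + x)^alpha: prod_{j=0}^{n-1}(alpha - j) / n!
    prod = Fraction(1)
    for j in range(n):
        prod *= alpha - j
    return prod / factorial(n)

k = 3
for n in range(6):
    # Coefficient of x^n in (1 - x)^(-k): binomial series with alpha = -k
    # and x replaced by -x, hence the extra factor (-1)^n.
    via_series = binomial_series_coeff(Fraction(-k), n) * (-1)**n
    print(n, via_series, comb(n + k - 1, n))
```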

Leonhard Euler was a master at exploiting power series.

Figure 2.2.1 : Leonhard Euler
In 1735, the 28 year-old Euler won acclaim for what is now called the Basel problem: to find a closed form for \(\sum_{n=1}^{\infty}\frac{1}{n^2}\). Other mathematicians knew that the series converged, but Euler was the first to find its exact value. The following problem essentially provides Euler's solution.

 Exercise 2.2.13
a. Show that the power series for \(\frac{\sin x}{x}\) is given by \(1 - \frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots\).
b. Use (a) to infer that the roots of \(1 - \frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots\) are given by \(x = \pm\pi, \pm 2\pi, \pm 3\pi, \cdots\)
c. Suppose \(p(x) = a_0 + a_1x + \cdots + a_nx^n\) is a polynomial with roots \(r_1, r_2, \ldots, r_n\). Show that if \(a_0 \neq 0\), then all the roots are non-zero and

\[p(x) = a_0\left(1 - \frac{x}{r_1}\right)\left(1 - \frac{x}{r_2}\right)\cdots\left(1 - \frac{x}{r_n}\right) \tag{2.2.34}\]

d. Assuming that the result in c holds for an infinite polynomial power series, deduce that

\[1 - \frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots = \left(1 - \left(\frac{x}{\pi}\right)^2\right)\left(1 - \left(\frac{x}{2\pi}\right)^2\right)\left(1 - \left(\frac{x}{3\pi}\right)^2\right)\cdots \tag{2.2.35}\]

e. Expand this product to deduce

\[\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6} \tag{2.2.36}\]
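Euler's closed form is easy to test against partial sums; the tabulation below (our sketch) shows the gap closing roughly like \(1/N\).

```python
import math

target = math.pi**2 / 6  # Euler's value for the Basel sum

for N in (10, 100, 10000):
    partial = sum(1.0 / n**2 for n in range(1, N + 1))
    # the gap target - partial shrinks roughly like 1/N
    print(N, partial, target - partial)
```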

References
1. A few seconds of thought should convince you that the solution of this problem is \(y(x) = e^x\). We will ignore this for now in favor of emphasizing the technique.

2. These two representations probably look the same at first. Take a moment and be sure you see where they differ. Hint: The "1" is missing in the last expression.

This page titled 2.2: Power Series as Infinite Polynomials is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

2.E: Calculus in the 17th and 18th Centuries (Exercises)
Q1
Use the geometric series to obtain the series

\[\ln(1 + x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \cdots = \sum_{n=0}^{\infty}\frac{(-1)^n}{n + 1}x^{n+1}\]

Q2
Without using Taylor's Theorem, represent the following functions as power series expanded about 0 (i.e., in the form \(\sum_{n=0}^{\infty}a_nx^n\)).

a. \(\ln(1 - x^2)\)
b. \(\frac{x^2}{1 + x}\)
c. \(\arctan(x^3)\)
d. \(\ln(2 + x)\) [Hint: \(2 + x = 2\left(1 + \frac{x}{2}\right)\)]

Q3
Let \(a\) be a positive real number. Find a power series for \(a^x\) expanded about 0. [Hint: \(a^x = e^{x\ln(a)}\)].

Q4

Represent the function \(\sin x\) as a power series expanded about \(a\) (i.e., in the form \(\sum_{n=0}^{\infty}a_n(x - a)^n\)). [Hint: \(\sin x = \sin(a + x - a)\)].

Q5
Without using Taylor's Theorem, represent the following functions as a power series expanded about \(a\) for the given value of \(a\) (i.e., in the form \(\sum_{n=0}^{\infty}a_n(x - a)^n\)).

a. \(\ln x,\ a = 1\)
b. \(e^x,\ a = 3\)
c. \(x^3 + 2x^2 + 3,\ a = 1\)
d. \(\frac{1}{x},\ a = 5\)

Q6
Evaluate the following integrals as series.

a. \(\int_{x=0}^{1}e^{x^2}\,dx\)
b. \(\int_{x=0}^{1}\frac{1}{1 + x^4}\,dx\)
c. \(\int_{x=0}^{1}\sqrt[3]{1 - x^3}\,dx\)
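For part (a), integrating the series for \(e^{x^2}\) term by term suggests the value \(\sum_{n\ge 0}\frac{1}{n!(2n+1)}\) (our derivation, not worked in the text); the script below compares it with a direct midpoint-rule quadrature of the same integral.

```python
import math

# Term-by-term integration of e^{x^2} = sum x^{2n}/n! over [0, 1]
# gives the series sum_{n>=0} 1/(n! (2n + 1)).
series_value = sum(1.0 / (math.factorial(n) * (2*n + 1)) for n in range(30))

# Independent check: midpoint-rule quadrature of the integral.
steps = 20000
h = 1.0 / steps
quadrature = sum(math.exp(((i + 0.5) * h)**2) for i in range(steps)) * h

print(series_value, quadrature)
```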

This page titled 2.E: Calculus in the 17th and 18th Centuries (Exercises) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

CHAPTER OVERVIEW

3: Questions Concerning Power Series


3.1: Taylor’s Formula
3.2: Series Anomalies
3.E: Questions Concerning Power Series (Exercises)

This page titled 3: Questions Concerning Power Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

3.1: Taylor’s Formula
 Learning Objectives
Explain the Taylor formula

As we saw in the previous chapter, representing functions as power series was a fruitful strategy for mathematicians in the eighteenth
century (as it still is). Differentiating and integrating power series term by term was relatively easy, seemed to work, and led to
many applications. Furthermore, power series representations for all of the elementary functions could be obtained if one was clever
enough.
However, cleverness is an unreliable tool. Is there some systematic way to find a power series for a given function? To be sure, there
were nagging questions: If we can find a power series, how do we know that the series we’ve created represents the function we
started with? Even worse, is it possible for a function to have more than one power series representation centered at a given value
a ? This uniqueness issue is addressed by the following theorem.

 Theorem 3.1.1

If

\[f(x) = \sum_{n=0}^{\infty}a_n(x - a)^n \tag{3.1.1}\]

then

\[a_n = \frac{f^{(n)}(a)}{n!} \tag{3.1.2}\]

where \(f^{(n)}(a)\) represents the \(n^{th}\) derivative of \(f\) evaluated at \(a\).
th

A few comments about Theorem 3.1.1 are in order. Notice that we did not start with a function and derive its series representation. Instead we defined \(f(x)\) to be the series we wrote down. This assumes that the expression

\[\sum_{n=0}^{\infty}a_n(x - a)^n \tag{3.1.3}\]

actually has meaning (that it converges). At this point we have every reason to expect that it does, however expectation is not proof so we note that this is an assumption, not an established truth. Similarly, the idea that we can differentiate an infinite polynomial term-by-term as we would a finite polynomial is also assumed. As before, we follow in the footsteps of our 18th century forebears in making these assumptions. For now.

 Exercise 3.1.1

Prove Theorem 3.1.1.

Hint
\(f(a) = a_0 + a_1(a - a) + a_2(a - a)^2 + \cdots = a_0\); differentiate to obtain the other terms.

From Theorem 3.1.1 we see that if we do start with the function f (x) then no matter how we obtain its power series, the result will
always be the same. The series
\[\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x - a)^n = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f'''(a)}{3!}(x - a)^3 + \cdots \tag{3.1.4}\]

is called the Taylor series for f expanded about (centered at) a. Although this systematic “machine” for obtaining power series for a
function seems to have been known to a number of mathematicians in the early 1700’s, Brook Taylor was the first to publish this
result in his Methodus Incrementorum (1715).

Figure 3.1.1 : Brook Taylor.
The special case when a = 0 was included by Colin Maclaurin in his Treatise of Fluxions (1742).
Thus when \(a = 0\), the series in Equation 3.1.4 is simplified to

\[\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n \tag{3.1.5}\]

and this series is often called the Maclaurin Series for f .


The “prime notation” for the derivative was not used by Taylor, Maclaurin or their contemporaries. It was introduced by Joseph
Louis Lagrange in his 1779 work Théorie des Fonctions Analytiques. In that work, Lagrange sought to get rid of Leibniz’s
infinitesimals and base calculus on the power series idea. His idea was that by representing every function as a power series,
calculus could be done “algebraically” by manipulating power series and examining various aspects of the series representation
instead of appealing to the “controversial” notion of infinitesimals. He implicitly assumed that every continuous function could be
replaced with its power series representation.

Figure 3.1.2 : Joseph-Louis Lagrange.


That is, he wanted to think of the Taylor series as a "great big polynomial," because polynomials are easy to work with. It was a very simple, yet exceedingly clever and far-reaching idea. Since \(e^x = 1 + x + x^2/2 + \ldots\), for example, why not just define the exponential to be the series and work with the series. After all, the series is just a very long polynomial.
This idea did not come out of nowhere. Leonhard Euler had put exactly that idea to work to solve many problems throughout the
18
th
century. Some of his solutions are still quite breath-taking when you first see them [14].
Taking his cue from the Taylor series \(\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x - a)^n\), Lagrange observed that the coefficient of \((x - a)^n\) provides the derivative of \(f\) at \(a\) (divided by \(n!\)). Modifying the formula above to suit his purpose, Lagrange supposed that every differentiable function could be represented as

\[f(x) = \sum_{n=0}^{\infty}g_n(a)(x - a)^n \tag{3.1.6}\]

If we regard the parameter \(a\) as a variable then \(g_1\) is the derivative of \(f\), \(g_2 = \frac{f''}{2}\) and generally

\[g_n = \frac{f^{(n)}}{n!} \tag{3.1.7}\]

Lagrange dubbed his function \(g_1\) the "fonction dérivée," from which we get the modern name "derivative."

All in all, this was a very clever and insightful idea whose only real flaw is that its fundamental assumption is not true. It turns out that not every differentiable function can be represented as a Taylor series. This was demonstrated very dramatically by Augustin Cauchy's famous counter-example

\[f(x) = \begin{cases} e^{-\frac{1}{x^2}} & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases} \tag{3.1.8}\]

This function is actually infinitely differentiable everywhere but its Maclaurin series (that is, a Taylor series with \(a = 0\)) does not converge to \(f\) because all of its derivatives at the origin are equal to zero: \(f^{(n)}(0) = 0, \forall n \in \mathbb{N}\).
Not every differentiable function can be represented as a Taylor series.


Computing these derivatives using the definition you learned in calculus is not conceptually difficult but the formulas involved do
become complicated rather quickly. Some care must be taken to avoid error.
To begin with, let’s compute a few derivatives when x ≠ 0 .
\[f^{(0)}(x) = e^{-x^{-2}}\]
\[f^{(1)}(x) = 2x^{-3}e^{-x^{-2}}\]
\[f^{(2)}(x) = (4x^{-6} - 6x^{-4})e^{-x^{-2}}\]

As you can see the calculations are already getting a little complicated and we've only taken the second derivative. To streamline things a bit we take \(y = x^{-1}\), and define \(p_2(x) = 4x^6 - 6x^4\) so that

\[f^{(2)}(x) = p_2(x^{-1})e^{-x^{-2}} = p_2(y)e^{-y^2} \tag{3.1.9}\]

 Exercise 3.1.2
a. Adopting the notation \(y = x^{-1}\) and \(f^{(n)}(x) = p_n(y)e^{-y^2}\), find \(p_{n+1}(y)\) in terms of \(p_n(y)\). [Note: Don't forget that you are differentiating with respect to \(x\), not \(y\).]
b. Use induction on \(n\) to show that \(p_n(y)\) is a polynomial for all \(n \in \mathbb{N}\).

Unfortunately everything we've done so far only gives us the derivatives we need when \(x\) is not zero, and we need the derivatives when \(x\) is zero. To find these we need to get back to very basic ideas.

Let's assume for the moment that we know that \(f^{(n)}(0) = 0\) and recall that

\[f^{(n+1)}(0) = \lim_{x \to 0}\frac{f^{(n)}(x) - f^{(n)}(0)}{x - 0} \tag{3.1.10}\]

\[f^{(n+1)}(0) = \lim_{x \to 0}x^{-1}p_n(x^{-1})e^{-x^{-2}} \tag{3.1.11}\]

\[f^{(n+1)}(0) = \lim_{y \to \pm\infty}\frac{y\,p_n(y)}{e^{y^2}} \tag{3.1.12}\]
We can close the deal with the following problem.

 Exercise 3.1.3
a. Let \(m\) be a nonnegative integer. Show that \(\lim_{y \to \pm\infty}\frac{y^m}{e^{y^2}} = 0\). [Hint: Induction and a dash of L'Hôpital's rule should do the trick.]
b. Prove that \(\lim_{y \to \pm\infty}\frac{q(y)}{e^{y^2}} = 0\) for any polynomial \(q\).
c. Let \(f(x)\) be as in equation 3.1.8 and show that for every nonnegative integer \(n\), \(f^{(n)}(0) = 0\).
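These limits can be watched numerically. In the sketch below (ours, not part of the text), \(f(x)/x^m\) is already vanishingly small at \(x = 0.05\) for every power \(m\) tried, while \(f(1) = e^{-1}\) shows that \(f\) is not identically zero.

```python
import math

def f(x):
    # Cauchy's counterexample: smooth everywhere, all derivatives 0 at 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(x)/x^m -> 0 as x -> 0 for every m, which is why every
# Maclaurin coefficient of f vanishes.
for m in (1, 5, 10):
    print(m, f(0.05) / 0.05**m)

print(f(1.0))  # but f itself is certainly not the zero function
```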

This example showed that while it was fruitful to exploit Taylor series representations of various functions, basing the foundations
of calculus on power series was not a sound idea.
While Lagrange's approach wasn't totally successful, it was a major step away from infinitesimals and toward the modern approach. We still use aspects of it today. For instance we still use his prime notation (\(f'\)) to denote the derivative.

Turning Lagrange's idea on its head it is clear that if we know how to compute derivatives, we can use this machine to obtain a power series when we are not "clever enough" to obtain the series in other (typically shorter) ways. For example, consider Newton's binomial series when \(\alpha = \frac{1}{2}\). Originally, we obtained this series by extending the binomial theorem to non-integer exponents. Taylor's formula provides a more systematic way to obtain this series:

\[f(x) = (1 + x)^{\frac{1}{2}};\qquad f(0) = 1 \tag{3.1.13}\]

\[f'(x) = \frac{1}{2}(1 + x)^{\frac{1}{2} - 1};\qquad f'(0) = \frac{1}{2} \tag{3.1.14}\]

\[f''(x) = \frac{1}{2}\left(\frac{1}{2} - 1\right)(1 + x)^{\frac{1}{2} - 2};\qquad f''(0) = \frac{1}{2}\left(\frac{1}{2} - 1\right) \tag{3.1.15}\]

and in general since

\[f^{(n)}(x) = \frac{1}{2}\left(\frac{1}{2} - 1\right)\cdots\left(\frac{1}{2} - (n - 1)\right)(1 + x)^{\frac{1}{2} - n} \tag{3.1.16}\]

we have

\[f^{(n)}(0) = \frac{1}{2}\left(\frac{1}{2} - 1\right)\cdots\left(\frac{1}{2} - (n - 1)\right) \tag{3.1.17}\]

Using Taylor's formula we obtain the series

\[\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n = 1 + \sum_{n=1}^{\infty}\frac{\frac{1}{2}\left(\frac{1}{2} - 1\right)\cdots\left(\frac{1}{2} - (n - 1)\right)}{n!}x^n = 1 + \sum_{n=1}^{\infty}\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2} - j\right)}{n!}x^n \tag{3.1.18}\]

which agrees with equation 2.2.40 in the previous chapter.

 Exercise 3.1.4

Use Taylor's formula to obtain the general binomial series

\[(1 + x)^{\alpha} = 1 + \sum_{n=1}^{\infty}\frac{\prod_{j=0}^{n-1}(\alpha - j)}{n!}x^n \tag{3.1.19}\]

 Exercise 3.1.5
Use Taylor's formula to obtain the Taylor series for the functions \(e^x\), \(\sin x\), and \(\cos x\) expanded about \(a\).

As you can see, Taylor’s “machine” will produce the power series for a function (if it has one), but is tedious to perform. We will
find, generally, that this tediousness can be an obstacle to understanding. In many cases it will be better to be clever if we can. This
is usually shorter. However, it is comforting to have Taylor’s formula available as a last resort.
The existence of a Taylor series is addressed (to some degree) by the following.

 Theorem 3.1.2

If \(f', f'', \ldots, f^{(n+1)}\) are all continuous on an interval containing \(a\) and \(x\), then

\[f(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n + \frac{1}{n!}\int_{t=a}^{x}f^{(n+1)}(t)(x - t)^n\,dt \tag{3.1.20}\]

Before we address the proof, notice that the \(n\)-th degree polynomial

\[f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n \tag{3.1.21}\]

resembles the Taylor series and, in fact, is called the \(n\)-th degree Taylor polynomial of \(f\) about \(a\). Theorem 3.1.2 says that a function can be written as the sum of this polynomial and a specific integral which we will analyze in the next chapter. We will get the proof started and leave the formal induction proof as an exercise.

Notice that the case when \(n = 0\) is really a restatement of the Fundamental Theorem of Calculus. Specifically, the FTC says \(\int_{t=a}^{x}f'(t)\,dt = f(x) - f(a)\), which we can rewrite as

\[f(x) = f(a) + \frac{1}{0!}\int_{t=a}^{x}f'(t)(x - t)^0\,dt \tag{3.1.22}\]

to provide the anchor step for our induction.


To derive the case where \(n = 1\), we use integration by parts. If we let

\[u = f'(t) \qquad dv = (x - t)^0\,dt \tag{3.1.23}\]
\[du = f''(t)\,dt \qquad v = -\frac{1}{1}(x - t)^1 \tag{3.1.24}\]

we obtain

\[\begin{aligned} f(x) &= f(a) + \frac{1}{0!}\left(-\frac{1}{1}f'(t)(x - t)^1\Big|_{t=a}^{x} + \frac{1}{1}\int_{t=a}^{x}f''(t)(x - t)^1\,dt\right)\\ &= f(a) + \frac{1}{0!}\left(-\frac{1}{1}f'(x)(x - x)^1 + \frac{1}{1}f'(a)(x - a)^1 + \frac{1}{1}\int_{t=a}^{x}f''(t)(x - t)^1\,dt\right)\\ &= f(a) + \frac{f'(a)}{1!}(x - a) + \frac{1}{1!}\int_{t=a}^{x}f''(t)(x - t)^1\,dt \end{aligned}\]
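The \(n = 1\) identity can also be sanity-checked numerically. The sketch below (ours, not part of the text) takes \(f = \exp\), \(a = 0\), \(x = 1\) and approximates the remainder integral with a midpoint rule:

```python
import math

a, x, steps = 0.0, 1.0, 20000
h = (x - a) / steps

# Midpoint-rule approximation of the remainder integral
# ∫_a^x f''(t)(x - t) dt, with f = exp (so f'' = exp as well).
remainder = sum(
    math.exp(a + (i + 0.5) * h) * (x - (a + (i + 0.5) * h)) for i in range(steps)
) * h

lhs = math.exp(x)
rhs = math.exp(a) + math.exp(a) * (x - a) + remainder
print(lhs, rhs)  # the two sides agree up to quadrature error
```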

 Exercise 3.1.6
Provide a formal induction proof for Theorem 3.1.2.

This page titled 3.1: Taylor’s Formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene Boman
and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is
available upon request.

3.2: Series Anomalies
 Learning Objectives
Explain series convergence and anomalies

Up to this point, we have been somewhat frivolous in our approach to series. This approach mirrors eighteenth century mathematicians who ingeniously
exploited calculus and series to provide mathematical and physical results which were virtually unobtainable before. Mathematicians were eager to push
these techniques as far as they could to obtain their results and they often showed good intuition regarding what was mathematically acceptable and what
was not. However, as the envelope was pushed, questions about the validity of the methods surfaced.
As an illustration consider the series expansion
\[\frac{1}{1 + x} = 1 - x + x^2 - x^3 + \cdots \tag{3.2.1}\]

If we substitute \(x = 1\) into this equation, we obtain

\[\frac{1}{2} = 1 - 1 + 1 - 1 + \cdots \tag{3.2.2}\]

If we group the terms as follows, \((1 - 1) + (1 - 1) + \cdots\), the series would equal 0. A regrouping of \(1 + (-1 + 1) + (-1 + 1) + \cdots\) provides an answer of 1. This violation of the associative law of addition did not escape the mathematicians of the 1700's. In his 1760 paper On Divergent Series Euler said:

Notable enough, however are the controversies over the series \(1 - 1 + 1 - 1 + \text{etc}\), whose sum was given by Leibniz as \(\frac{1}{2}\), although others disagree . . . Understanding of this question is to be sought in the word "sum;" this idea, if thus conceived - namely, the sum of a series is said to be that quantity to which it is brought closer as more terms of a series are taken - has relevance only for the convergent series, and we should in general give up this idea of sum for divergent series. On the other hand, as series in analysis arise from the expansion of fractions or irrational quantities or even of transcendentals, it will, in turn, be permissible in calculation to substitute in place of such series that quantity out of whose development it is produced.
Even with this formal approach to series, an interesting question arises. The series for the antiderivative of \(\frac{1}{1 + x}\) does converge for \(x = 1\) while this one does not. Specifically, taking the antiderivative of the above series, we obtain

\[\ln(1 + x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \cdots \tag{3.2.3}\]

If we substitute \(x = 1\) into this series, we obtain \(\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \cdots\). It is not hard to see that such an alternating series converges. The following picture shows why. In this diagram, \(S_n\) denotes the partial sum \(1 - \frac{1}{2} + \frac{1}{3} - \cdots + \frac{(-1)^{n+1}}{n}\).

Figure 3.2.1 : Series convergence.


From the diagram we can see \(S_2 \le S_4 \le S_6 \le \cdots \le \cdots \le S_5 \le S_3 \le S_1\) and \(S_{2k+1} - S_{2k} = \frac{1}{2k+1}\). It seems that the sequence of partial sums will converge to whatever is in the "middle." Our diagram indicates that it is \(\ln 2\) in the middle but actually this is not obvious. Nonetheless it is interesting that one series converges for \(x = 1\) but the other does not.

 Exercise 3.2.1
Use the fact that

\[1 - \frac{1}{2} + \frac{1}{3} - \cdots + \frac{(-1)^{2k+1}}{2k} \le \ln 2 \le 1 - \frac{1}{2} + \frac{1}{3} - \cdots + \frac{(-1)^{2k+2}}{2k + 1} \tag{3.2.4}\]

to determine how many terms of the series \(\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\) should be added together to approximate \(\ln 2\) to within 0.0001 without actually computing what \(\ln 2\) is.
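One standard reading of inequality 3.2.4 is that the error after \(n\) terms is at most the next term, \(\frac{1}{n+1}\). The brute-force check below (ours) uses a round 10,000 terms and confirms the 0.0001 tolerance is met.

```python
import math

n = 10000  # the smallest n with 1/(n + 1) <= 0.0001 is n = 9999
partial = sum((-1)**(k + 1) / k for k in range(1, n + 1))
print(partial, math.log(2), abs(partial - math.log(2)))
```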

There is an even more perplexing situation brought about by these examples. An infinite sum such as \(1 - 1 + 1 - 1 + \cdots\) appears to not satisfy the associative law for addition. While a convergent series such as \(1 - \frac{1}{2} + \frac{1}{3} - \cdots\) does satisfy the associative law, it does not satisfy the commutative law. In fact, it does not satisfy it rather spectacularly.


A generalization of the following result was stated and proved by Bernhard Riemann in 1854.

 theorem 3.2.1

Let \(a\) be any real number. There exists a rearrangement of the series \(1 - \frac{1}{2} + \frac{1}{3} - \cdots\) which converges to \(a\).

This theorem shows that a series is most decidedly not a great big sum. It follows that a power series is not a great big polynomial.
To set the stage, consider the harmonic series

\[\sum_{n=1}^{\infty}\frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots \tag{3.2.5}\]

Even though the individual terms in this series converge to 0, the series still diverges (to infinity) as evidenced by the inequality
\[\left(1 + \frac{1}{2}\right) + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\right) + \left(\frac{1}{9} + \cdots + \frac{1}{16}\right) + \cdots > \frac{1}{2} + \left(\frac{1}{4} + \frac{1}{4}\right) + \left(\frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8}\right) + \left(\frac{1}{16} + \cdots + \frac{1}{16}\right) + \cdots = \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots = \infty\]

Armed with this fact, we can see why Theorem 3.2.1 is true. First note that

\[-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots = -\frac{1}{2}\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots\right) = -\infty \tag{3.2.6}\]

and

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots \ge \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots = \infty \tag{3.2.7}\]

This says that if we add enough terms of \(-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots\) we can make such a sum as small as we wish and if we add enough terms of \(1 + \frac{1}{3} + \frac{1}{5} + \cdots\) we can make such a sum as large as we wish. This provides us with the general outline of the proof. The trick is to add just enough positive terms until the sum is just greater than \(a\). Then we start to add on negative terms until the sum is just less than \(a\). Picking up where we left off with the positive terms, we add on just enough positive terms until we are just above \(a\) again. We then add on negative terms until we are below \(a\). In essence, we are bouncing back and forth around \(a\). If we do this carefully, then we can get this rearrangement to converge to \(a\). The notation in the proof below gets a bit hairy, but keep this general idea in mind as you read through it.
Let \(O_1\) be the first odd integer such that \(1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} > a\). Now choose \(E_1\) to be the first even integer such that

\[-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} < a - \left(1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1}\right) \tag{3.2.8}\]

Thus

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} < a \tag{3.2.9}\]

Notice that we still have \(\frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots = \infty\). With this in mind, choose \(O_2\) to be the first odd integer with

\[\frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} > a - \left(1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1}\right) \tag{3.2.10}\]

Thus we have

\[a < 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} \tag{3.2.11}\]

Furthermore, since

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2 - 2} < a \tag{3.2.12}\]

we have

\[\left|1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} - a\right| < \frac{1}{O_2} \tag{3.2.13}\]

In a similar fashion choose \(E_2\) to be the first even integer such that

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \frac{1}{E_1 + 4} - \cdots - \frac{1}{E_2} < a \tag{3.2.14}\]

Since

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \frac{1}{E_1 + 4} - \cdots - \frac{1}{E_2 - 2} > a \tag{3.2.15}\]

then

\[\left|1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \frac{1}{E_1 + 4} - \cdots - \frac{1}{E_2} - a\right| < \frac{1}{E_2} \tag{3.2.16}\]

Again choose \(O_3\) to be the first odd integer such that

\[a < 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \frac{1}{E_1 + 4} - \cdots - \frac{1}{E_2} + \frac{1}{O_2 + 2} + \frac{1}{O_2 + 4} + \cdots + \frac{1}{O_3} \tag{3.2.17}\]

and notice that

\[\left|1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \frac{1}{E_1 + 4} - \cdots - \frac{1}{E_2} + \frac{1}{O_2 + 2} + \frac{1}{O_2 + 4} + \cdots + \frac{1}{O_3} - a\right| < \frac{1}{O_3} \tag{3.2.18}\]

Continue defining \(O_k\) and \(E_k\) in this fashion. Since \(\lim_{k \to \infty}\frac{1}{O_k} = \lim_{k \to \infty}\frac{1}{E_k} = 0\), it is evident that the partial sums

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} + \cdots - \frac{1}{E_{k-2} + 2} - \frac{1}{E_{k-2} + 4} - \cdots - \frac{1}{E_{k-1}} + \frac{1}{O_{k-1} + 2} + \frac{1}{O_{k-1} + 4} + \cdots + \frac{1}{O_k} \tag{3.2.19}\]

and

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots + \frac{1}{O_2} + \cdots \tag{3.2.20}\]

must converge to \(a\), and any other partial sum of the rearranged series is trapped between two such extreme partial sums. This forces the entire rearranged series to converge to \(a\).
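The proof's back-and-forth construction translates directly into code. The sketch below is ours; its greedy loop mirrors the choice of the \(O_k\) and \(E_k\), rearranging the alternating harmonic series to chase an arbitrary target.

```python
def rearranged_partial_sum(a, terms):
    # Greedy version of the proof's construction: add positive terms
    # 1, 1/3, 1/5, ... while at or below a, and negative terms
    # -1/2, -1/4, ... while above a. Each term is used exactly once.
    total, next_odd, next_even = 0.0, 1, 2
    for _ in range(terms):
        if total <= a:
            total += 1.0 / next_odd
            next_odd += 2
        else:
            total -= 1.0 / next_even
            next_even += 2
    return total

print(rearranged_partial_sum(2.0, 100000))  # creeps toward the target 2.0
```

Once the running sum has crossed the target at least once, it stays within the last term used of \(a\), mirroring the \(\frac{1}{O_k}\) and \(\frac{1}{E_k}\) bounds above.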
The next two problems are similar to the above, but notationally are easier since we don’t need to worry about converging to an actual number. We only
need to make the rearrangement grow (or shrink) without bound.

 Exercise 3.2.2

Show that there is a rearrangement of \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\) which diverges to \(\infty\).

 Exercise 3.2.3

Show that there is a rearrangement of \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\) which diverges to \(-\infty\).

It is fun to know that we can rearrange some series to make them add up to anything you like, but there is a more fundamental idea at play here. That the negative terms of the alternating Harmonic Series diverge to negative infinity and the positive terms diverge to positive infinity makes the convergence of the alternating series very special.

Consider: first we add 1. This is one of the positive terms so our sum is starting to increase without bound. Next we add \(-1/2\), which is one of the negative terms, so our sum has turned around and is now starting to decrease without bound. Then another positive term is added: increasing without bound. Then another negative term: decreasing. And so on. The convergence of the alternating Harmonic Series is the result of a delicate balance between a tendency to run off to positive infinity and back to negative infinity. When viewed in this light it is not really too surprising that rearranging the terms can destroy this delicate balance.
Naturally, the alternating Harmonic Series is not the only such series. Any such series is said to converge “conditionally” – the condition being the
specific arrangement of the terms.

To stir the pot a bit more, some series do satisfy the commutative property. More specifically, one can show that any rearrangement of the series \(1 - \frac{1}{2^2} + \frac{1}{3^2} - \cdots\) must converge to the same value as the original series (which happens to be \(\int_{x=0}^{1}\frac{\ln(1 + x)}{x}\,dx \approx 0.8224670334\)). Why does one series behave so nicely whereas the other does not?


Issues such as these and, more generally, the validity of using the infinitely small and infinitely large certainly existed in the 1700’s, but they were
overshadowed by the utility of the calculus. Indeed, foundational questions raised by the above examples, while certainly interesting and of importance,
did not significantly deter the exploitation of calculus in studying physical phenomena. However, the envelope eventually was pushed to the point that
not even the most practically oriented mathematician could avoid the foundational issues.

This page titled 3.2: Series Anomalies is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene Boman and Robert Rogers
(OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.

3.E: Questions Concerning Power Series (Exercises)
Q1
Use Taylor’s formula to find the Taylor series of the given function expanded about the given point a .
a. \(f(x) = \ln(1 + x),\ a = 0\)
b. \(f(x) = e^x,\ a = -1\)
c. \(f(x) = x^3 + x^2 + x + 1,\ a = 0\)
d. \(f(x) = x^3 + x^2 + x + 1,\ a = 1\)

This page titled 3.E: Questions Concerning Power Series (Exercises) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

CHAPTER OVERVIEW

4: Convergence of Sequences and Series


4.1: Sequences of Real Numbers
4.2: The Limit as a Primary Tool
4.3: Divergence of a Series
4.E: Convergence of Sequences and Series (Exercises)

Thumbnail: Leonhard Euler. (Public Domain; Jakob Emanuel Handmann).

This page titled 4: Convergence of Sequences and Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

4.1: Sequences of Real Numbers
 Learning Objectives
Explain the sequences of real numbers

In Chapter 2, we developed the equation \(1 + x + x^2 + x^3 + \cdots = \frac{1}{1 - x}\), and we mentioned there were limitations to this power series representation. For example, substituting \(x = 1\) and \(x = -1\) into this expression leads to

\[1 + 1 + 1 + \cdots = \frac{1}{0} \quad\text{and}\quad 1 - 1 + 1 - 1 + \cdots = \frac{1}{2} \tag{4.1.1}\]

which are rather hard to accept. On the other hand, if we substitute \(x = \frac{1}{2}\) into the expression we get \(1 + \left(\frac{1}{2}\right) + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^3 + \cdots = 2\), which seems more palatable until we think about it. We can add two numbers together by the method we all learned in elementary school. Or three. Or any finite set of numbers, at least in principle. But infinitely many? What does that even mean? Before we can add infinitely many numbers together we must find a way to give meaning to the idea.
To do this, we examine an infinite sum by thinking of it as a sequence of finite partial sums. In our example, we would have the following sequence of partial sums.

\[\left(1,\ 1 + \frac{1}{2},\ 1 + \frac{1}{2} + \left(\frac{1}{2}\right)^2,\ 1 + \frac{1}{2} + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^3,\ \cdots,\ \sum_{j=0}^{n}\left(\frac{1}{2}\right)^j,\ \cdots\right) \tag{4.1.2}\]

We can plot these sums on a number line to see what they tend toward as n gets large.

Figure 4.1.1 : Number line plot.


Since each partial sum is located at the midpoint between the previous partial sum and 2, it is reasonable to suppose that these sums tend to the number 2. Indeed, you probably have seen an expression such as \(\lim_{n \to \infty}\left(\sum_{j=0}^{n}\left(\frac{1}{2}\right)^j\right) = 2\) justified by a similar argument. Of course, the reliance on such pictures and words is fine if we are satisfied with intuition. However, we must be able to make these intuitions rigorous without relying on pictures or nebulous words such as "approaches."
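The midpoint picture can also be reproduced numerically (our sketch): since \(\sum_{j=0}^{n}\left(\frac{1}{2}\right)^j = 2 - \left(\frac{1}{2}\right)^n\), each partial sum halves the remaining gap to 2.

```python
partial = 0.0
for j in range(20):
    partial += (1 / 2)**j
    # the gap to 2 halves at every step: 2 - partial == (1/2)**j
    print(j, partial, 2 - partial)
```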
No doubt you are wondering "What's wrong with the word 'approaches'? It seems clear enough to me." This is often a sticking point. But if we think carefully about what we mean by the word "approach" we see that there is an implicit assumption that will cause us some difficulties later if we don't expose it.

To see this consider the sequence \(\left(1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \cdots\right)\). Clearly it "approaches" zero, right? But, doesn't it also "approach" \(-1\)? It does, in the sense that each term gets closer to \(-1\) than the one previous. But it also "approaches" \(-2\), \(-3\), or even \(-1000\) in the same sense. That's the problem with the word "approaches." It just says that we're getting closer to something than we were in the previous step. It does not tell us that we are actually getting close. Since the moon moves in an elliptical orbit about the earth for part of each month it is "approaching" the earth. The moon gets closer to the earth but, thankfully, it does not get close to the earth.

The implicit assumption we alluded to earlier is this: When we say that the sequence \(\left(\frac{1}{n}\right)_{n=1}^{\infty}\) "approaches" zero we mean that it is

getting close not closer. Ordinarily this kind of vagueness in our language is pretty innocuous. When we say “approaches” in
casual conversation we can usually tell from the context of the conversation whether we mean “getting close to” or “getting closer
to.” But when speaking mathematically we need to be more careful, more explicit, in the language we use.
So how can we change the language we use so that this ambiguity is eliminated? Let’s start out by recognizing, rigorously, what we
mean when we say that a sequence converges to zero. For example, you would probably want to say that the sequence

\(\left(1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \cdots\right) = \left(\frac{1}{n}\right)_{n=1}^{\infty}\) converges to zero. Is there a way to give this meaning without relying on pictures or intuition?

One way would be to say that we can make \(\frac{1}{n}\) as close to zero as we wish, provided we make \(n\) large enough. But even this needs to be made more specific. For example, we can get \(\frac{1}{n}\) to within a distance of 0.1 of 0 provided we make \(n > 10\), and we can get \(\frac{1}{n}\) to within a distance of 0.01 of 0 provided we make \(n > 100\), etc. After a few such examples it is apparent that given any arbitrary distance \(\varepsilon > 0\), we can get \(\frac{1}{n}\) to within \(\varepsilon\) of 0 provided we make \(n > \frac{1}{\varepsilon}\). This leads to the following definition.

4.1.1 https://math.libretexts.org/@go/page/7937

 Definition 4.1.1

Let \((s_n) = (s_1, s_2, s_3, \ldots)\) be a sequence of real numbers. We say that \((s_n)\) converges to 0, and write \(\lim_{n\to\infty} s_n = 0\), provided for any \(\varepsilon > 0\) there is a real number \(N\) such that if \(n > N\), then \(|s_n| < \varepsilon\).

 Notes on Definition 4.1.1:


1. This definition is the formal version of the idea we just talked about; that is, given an arbitrary distance \(\varepsilon\), we must be able to find a specific number \(N\) such that \(s_n\) is within \(\varepsilon\) of 0 whenever \(n > N\). The \(N\) is the answer to the question of how large is "large enough" to put \(s_n\) this close to 0.
2. Even though we didn't need it in the example \(\left(\frac{1}{n}\right)\), the absolute value appears in the definition because we need to make the distance from \(s_n\) to 0 smaller than \(\varepsilon\). Without the absolute value in the definition, we would be able to "prove" such outrageous statements as \(\lim_{n\to\infty} -n = 0\), which we obviously don't want.
3. The statement \(|s_n| < \varepsilon\) can also be written as \(-\varepsilon < s_n < \varepsilon\) or \(s_n \in (-\varepsilon, \varepsilon)\). (See Exercise 4.1.1 below.) Any one of these equivalent formulations can be used to prove convergence. Depending on the application, one of these may be more advantageous to use than the others.
4. Any time an \(N\) can be found that works for a particular \(\varepsilon\), any number \(M > N\) will work for that \(\varepsilon\) as well, since if \(n > M\) then \(n > N\).

 Exercise 4.1.1

Let a and b be real numbers with b > 0 . Prove |a| < b if and only if −b < a < b . Notice that this can be extended to |a| ≤ b if
and only if −b ≤ a ≤ b .

To illustrate how this definition makes the above ideas rigorous, let's use it to prove that \(\lim_{n\to\infty} \frac{1}{n} = 0\).

 proof:

Let \(\varepsilon > 0\) be given. Let \(N = \frac{1}{\varepsilon}\). If \(n > N\), then \(n > \frac{1}{\varepsilon}\) and so \(\left|\frac{1}{n}\right| = \frac{1}{n} < \varepsilon\). Hence by definition, \(\lim_{n\to\infty} \frac{1}{n} = 0\).
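The choice \(N = \frac{1}{\varepsilon}\) can be spot-checked numerically. This sketch is our own illustration and is not part of the formal proof:

```python
# For s_n = 1/n the proof picks N = 1/epsilon; every n > N should give |1/n| < epsilon.
epsilon = 0.01
N = 1 / epsilon  # N = 100
violations = [n for n in range(int(N) + 1, int(N) + 1000) if abs(1 / n) >= epsilon]
# violations stays empty: no index beyond N breaks the bound
```
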

Notice that this proof is rigorous and makes no reference to vague notions such as “getting smaller” or “approaching infinity.” It
has three components:
1. provide the challenge of a distance ε > 0
2. identify a real number N
3. show that this N works for this given ε .
There is also no explanation about where N came from. While it is true that this choice of N is not surprising in light of the
“scrapwork” we did before the definition, the motivation for how we got it is not in the formal proof nor is it required. In fact, such
scrapwork is typically not included in a formal proof. For example, consider the following.

 Example 4.1.1:

Use the definition of convergence to zero to prove \(\lim_{n\to\infty} \frac{\sin n}{n} = 0\).
Proof:
Let \(\varepsilon > 0\). Let \(N = \frac{1}{\varepsilon}\). If \(n > N\), then \(n > \frac{1}{\varepsilon}\) and \(\frac{1}{n} < \varepsilon\). Thus \(\left|\frac{\sin n}{n}\right| \le \frac{1}{n} < \varepsilon\). Hence by definition, \(\lim_{n\to\infty} \frac{\sin n}{n} = 0\).
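Numerically, the bound \(|\sin n| \le 1\) makes the same choice of \(N\) work here; the sketch below (ours, purely illustrative) checks this for one sample \(\varepsilon\).

```python
import math

# The proof uses |sin(n)/n| <= 1/n together with N = 1/epsilon.
epsilon = 0.05
N = 1 / epsilon  # N = 20
terms = [abs(math.sin(n) / n) for n in range(int(N) + 1, 200)]
all_within = all(t < epsilon for t in terms)
```
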

Notice that the N came out of nowhere, but you can probably see the thought process that went into this choice: we needed to use
the inequality | sin n| ≤ 1 . Again this scrapwork is not part of the formal proof, but it is typically necessary for finding what N
should be. You might be able to do the next problem without doing any scrapwork first, but don’t hesitate to do scrapwork if you
need it.

 Exercise 4.1.2

Use the definition of convergence to zero to prove the following.


a. \(\lim_{n\to\infty} \frac{1}{n^2} = 0\)
b. \(\lim_{n\to\infty} \frac{1}{\sqrt{n}} = 0\)

As the sequences get more complicated, doing scrapwork ahead of time will become more necessary.

 Example 4.1.2:

Use the definition of convergence to zero to prove \(\lim_{n\to\infty} \frac{n+4}{n^2+1} = 0\).

Scrapwork:
Given an \(\varepsilon > 0\), we need to see how large to make \(n\) in order to guarantee that \(\left|\frac{n+4}{n^2+1}\right| < \varepsilon\). First notice that \(\frac{n+4}{n^2+1} < \frac{n+4}{n^2}\). Also, notice that if \(n > 4\), then \(n + 4 < n + n = 2n\). So as long as \(n > 4\), we have \(\frac{n+4}{n^2+1} < \frac{n+4}{n^2} < \frac{2n}{n^2} = \frac{2}{n}\). We can make this less than \(\varepsilon\) if we make \(n > \frac{2}{\varepsilon}\). This means we need to make \(n > 4\) and \(n > \frac{2}{\varepsilon}\), simultaneously. These can be done if we let \(N\) be the maximum of these two numbers. This sort of thing comes up regularly, so the notation \(N = \max\left(4, \frac{2}{\varepsilon}\right)\) was developed to mean the maximum of these two numbers. Notice that if \(N = \max\left(4, \frac{2}{\varepsilon}\right)\) then \(N \ge 4\) and \(N \ge \frac{2}{\varepsilon}\). We're now ready for the formal proof.


Proof:
Let \(\varepsilon > 0\). Let \(N = \max\left(4, \frac{2}{\varepsilon}\right)\). If \(n > N\), then \(n > 4\) and \(n > \frac{2}{\varepsilon}\). Thus we have \(n > 4\) and \(\frac{2}{n} < \varepsilon\). Therefore

\[\left|\frac{n+4}{n^2+1}\right| = \frac{n+4}{n^2+1} < \frac{n+4}{n^2} < \frac{2n}{n^2} = \frac{2}{n} < \varepsilon \tag{4.1.3}\]

Hence by definition, \(\lim_{n\to\infty} \frac{n+4}{n^2+1} = 0\).
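We can also check the chain of inequalities from the scrapwork numerically (our own sketch, not part of the proof):

```python
# With N = max(4, 2/epsilon), every n > N keeps (n+4)/(n^2+1) < 2/n < epsilon.
epsilon = 0.1
N = max(4, 2 / epsilon)  # N = 20
chain_holds = all((n + 4) / (n**2 + 1) < 2 / n < epsilon
                  for n in range(int(N) + 1, 500))
```
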

Again we emphasize that the scrapwork is NOT part of the formal proof and the reader will not see it. However, if you look
carefully, you can see the scrapwork in the formal proof.

 Exercise 4.1.3
Use the definition of convergence to zero to prove \(\lim_{n\to\infty} \frac{n^2+4n+1}{n^3} = 0\).

 Exercise 4.1.4

Let \(b\) be a nonzero real number with \(|b| < 1\) and let \(\varepsilon > 0\).
a. Solve the inequality \(|b|^n < \varepsilon\) for \(n\).
b. Use part (a) to prove \(\lim_{n\to\infty} b^n = 0\).

We can negate this definition to prove that a particular sequence does not converge to zero.

 Example 4.1.3:

Use the definition to prove that the sequence \(\left(1 + (-1)^n\right)_{n=0}^{\infty} = (2, 0, 2, 0, 2, \cdots)\) does not converge to zero.
Before we provide this proof, let's analyze what it means for a sequence \((s_n)\) to not converge to zero. Converging to zero means that any time a distance \(\varepsilon > 0\) is given, we must be able to respond with a number \(N\) such that \(|s_n| < \varepsilon\) for every \(n > N\). To have this not happen, we must be able to find some \(\varepsilon > 0\) such that no choice of \(N\) will work. Of course, if we find such an \(\varepsilon\), then any smaller one will fail to have such an \(N\), but we only need one to mess us up. If you stare at the example long enough, you see that any \(\varepsilon\) with \(0 < \varepsilon \le 2\) will cause problems. For our purposes, we will let \(\varepsilon = 2\).
Proof:

Let \(\varepsilon = 2\) and let \(N \in \mathbb{N}\) be any integer. If we let \(k\) be any nonnegative integer with \(k > \frac{N}{2}\), then \(n = 2k > N\), but \(|1 + (-1)^n| = 2\). Thus no choice of \(N\) will satisfy the conditions of the definition for this \(\varepsilon\) (namely that \(|1 + (-1)^n| < 2\) for all \(n > N\)) and so \(\lim_{n\to\infty}\left(1 + (-1)^n\right) \ne 0\).
 Exercise 4.1.5

Negate the definition of lim n→∞ sn = 0 to provide a formal definition for lim n→∞ sn ≠ 0 .

 Exercise 4.1.6

Use the definition to prove \(\lim_{n\to\infty} \frac{n}{n+100} \ne 0\).

Now that we have a handle on how to rigorously prove that a sequence converges to zero, let's generalize this to a formal definition for a sequence converging to something else. Basically, we want to say that a sequence \((s_n)\) converges to a real number \(s\) provided the difference \((s_n - s)\) converges to zero. This leads to the following definition:

 Definition 4.1.2

Let \((s_n) = (s_1, s_2, s_3, \ldots)\) be a sequence of real numbers and let \(s\) be a real number. We say that \((s_n)\) converges to \(s\), and write \(\lim_{n\to\infty} s_n = s\), provided for any \(\varepsilon > 0\) there is a real number \(N\) such that if \(n > N\), then \(|s_n - s| < \varepsilon\).

 Notes on Definition 4.1.2


1. Clearly \(\lim_{n\to\infty} s_n = s\) if and only if \(\lim_{n\to\infty}(s_n - s) = 0\).
2. Again notice that this says that we can make \(s_n\) as close to \(s\) as we wish (within \(\varepsilon\)) by making \(n\) large enough (\(> N\)). As before, this definition makes these notions very specific.
3. Notice that \(|s_n - s| < \varepsilon\) can be written in the following equivalent forms:
   a. \(|s_n - s| < \varepsilon\)
   b. \(-\varepsilon < s_n - s < \varepsilon\)
   c. \(s - \varepsilon < s_n < s + \varepsilon\)
   d. \(s_n \in (s - \varepsilon, s + \varepsilon)\)
   and we are free to use any one of these which is convenient at the time.

As an example, let's use this definition to prove that the sequence in Exercise 4.1.6, in fact, converges to 1.

 Example 4.1.4:

Prove \(\lim_{n\to\infty} \frac{n}{n+100} = 1\).
Scrapwork:
Given an \(\varepsilon > 0\), we need to get \(\left|\frac{n}{n+100} - 1\right| < \varepsilon\). This prompts us to do some algebra.

\[\left|\frac{n}{n+100} - 1\right| = \left|\frac{n - (n+100)}{n+100}\right| = \frac{100}{n+100} \le \frac{100}{n} \tag{4.1.4}\]

This, in turn, seems to suggest that \(N = \frac{100}{\varepsilon}\) should work.
Proof:
Let \(\varepsilon > 0\). Let \(N = \frac{100}{\varepsilon}\). If \(n > N\), then \(n > \frac{100}{\varepsilon}\) and so \(\frac{100}{n} < \varepsilon\). Hence

\[\left|\frac{n}{n+100} - 1\right| = \left|\frac{n - (n+100)}{n+100}\right| = \frac{100}{n+100} < \frac{100}{n} < \varepsilon \tag{4.1.5}\]

Thus by definition \(\lim_{n\to\infty} \frac{n}{n+100} = 1\).
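As before, the choice \(N = \frac{100}{\varepsilon}\) can be spot-checked numerically (our sketch, not part of the proof):

```python
# For s_n = n/(n+100): |s_n - 1| = 100/(n+100), so n > N = 100/epsilon suffices.
epsilon = 0.01
N = 100 / epsilon  # N = 10000
ok = all(abs(n / (n + 100) - 1) < epsilon
         for n in range(int(N) + 1, int(N) + 500))
```
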

Notice again that the scrapwork is not part of the formal proof and the author of a proof is not obligated to tell where the choice of \(N\) came from (although the thought process can usually be seen in the formal proof). The formal proof contains only the requisite three parts: provide the challenge of an arbitrary \(\varepsilon > 0\), provide a specific \(N\), and show that this \(N\) works for the given \(\varepsilon\).
Also notice that given a specific sequence such as \(\left(\frac{n}{n+100}\right)\), the definition does not indicate what the limit would be if, in fact, it exists. Once an educated guess is made as to what the limit should be, the definition only verifies that this intuition is correct.
This leads to the following question: If intuition is needed to determine what a limit of a sequence should be, then what is the
purpose of this relatively non-intuitive, complicated definition?
Remember that when these rigorous formulations were developed, intuitive notions of convergence were already in place and had
been used with great success. This definition was developed to address the foundational issues. Could our intuitions be verified in a
concrete fashion that was above reproach? This was the purpose of this non-intuitive definition. It was to be used to verify that our
intuition was, in fact, correct and do so in a very prescribed manner. For example, if \(b > 0\) is a fixed number, then you would probably say that as \(n\) approaches infinity, \(b^{\left(\frac{1}{n}\right)}\) approaches \(b^0 = 1\). After all, we did already prove that \(\lim_{n\to\infty} \frac{1}{n} = 0\). We should be able to back up this intuition with our rigorous definition.

 Exercise 4.1.7
Let \(b > 0\). Use the definition to prove \(\lim_{n\to\infty} b^{\left(\frac{1}{n}\right)} = 1\). [Hint: You will probably need to separate this into two cases: \(0 < b < 1\) and \(b \ge 1\).]

 Exercise 4.1.8
a. Provide a rigorous definition for \(\lim_{n\to\infty} s_n \ne s\).
b. Use your definition to show that for any real number \(a\), \(\lim_{n\to\infty}\left((-1)^n\right) \ne a\). [Hint: Choose \(\varepsilon = 1\) and use the fact that \(|a - (-1)^n| < 1\) is equivalent to \((-1)^n - 1 < a < (-1)^n + 1\) to show that no choice of \(N\) will work for this \(\varepsilon\).]

This page titled 4.1: Sequences of Real Numbers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

4.2: The Limit as a Primary Tool
 Learning Objectives

General rules about limits


Understanding convergence

As you’ve seen from the previous sections, the formal definition of the convergence of a sequence is meant to capture rigorously
our intuitive understanding of convergence. However, the definition itself is an unwieldy tool. If only there was a way to be
rigorous without having to run back to the definition each time. Fortunately, there is a way. If we can use the definition to prove
some general rules about limits then we could use these rules whenever they applied and be assured that everything was still
rigorous. A number of these should look familiar from calculus.

 Exercise 4.2.1

Let \((c)_{n=1}^{\infty} = (c, c, c, \cdots)\) be a constant sequence. Show that \(\lim_{n\to\infty} c = c\).

In proving the familiar limit theorems, the following will prove to be a very useful tool.

 lemma 4.2.1

a. Triangle Inequality Let a and b be real numbers. Then


|a + b| ≤ |a| + |b| (4.2.1)

b. Reverse Triangle Inequality Let a and b be real numbers. Then


|a| − |b| ≤ |a − b| (4.2.2)

 Exercise 4.2.2
a. Prove Lemma 4.2.1. [ Hint: For the Reverse Triangle Inequality, consider |a| = |a − b + b| .]
b. Show ||a| − |b|| ≤ |a − b| . [ Hint: You want to show |a| − |b| ≤ |a − b| and − (|a| − |b|) ≤ |a − b| .]

 theorem 4.2.1
If limn→∞ an = a and lim n→∞ bn = b , then lim n→∞ (an + bn ) = a + b .

We will often informally state this theorem as “the limit of a sum is the sum of the limits.” However, to be absolutely precise, what
it says is that if we already know that two sequences converge, then the sequence formed by summing the corresponding terms of
those two sequences will converge and, in fact, converge to the sum of those individual limits. We’ll provide the scrapwork for the
proof of this and leave the formal write-up as an exercise. Note the use of the triangle inequality in the proof.
Scrapwork:
If we let \(\varepsilon > 0\), then we want \(N\) so that if \(n > N\), then \(|(a_n + b_n) - (a + b)| < \varepsilon\). We know that \(\lim_{n\to\infty} a_n = a\) and \(\lim_{n\to\infty} b_n = b\), so we can make \(|a_n - a|\) and \(|b_n - b|\) as small as we wish, provided we make \(n\) large enough. Let's go back to what we want, to see if we can close the gap between what we know and what we want. We have

|(an + bn ) − (a + b)| = |(an − a) + (bn − b)| ≤ | an − a| + | bn − b| (4.2.3)

by the triangle inequality. To make this whole thing less than \(\varepsilon\), it makes sense to make each part less than \(\frac{\varepsilon}{2}\). Fortunately, we can do that as the definitions of \(\lim_{n\to\infty} a_n = a\) and \(\lim_{n\to\infty} b_n = b\) allow us to make \(|a_n - a|\) and \(|b_n - b|\) arbitrarily small. Specifically, since \(\lim_{n\to\infty} a_n = a\), there exists an \(N_1\) such that if \(n > N_1\) then \(|a_n - a| < \frac{\varepsilon}{2}\). Also since \(\lim_{n\to\infty} b_n = b\), there exists an \(N_2\) such that if \(n > N_2\) then \(|b_n - b| < \frac{\varepsilon}{2}\). Since we want both of these to occur, it makes sense to let \(N = \max(N_1, N_2)\). This should be the \(N\) that we seek.
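The scrapwork can be illustrated with concrete sequences. The choices below are our own (the text argues in full generality): \(a_n = 1 + \frac{1}{n} \to 1\) and \(b_n = 2 + \frac{1}{n^2} \to 2\), so the sums should converge to 3.

```python
# a_n = 1 + 1/n and b_n = 2 + 1/n**2; find one N that controls both terms at once.
epsilon = 0.02
N1 = 2 / epsilon           # |a_n - 1| = 1/n < epsilon/2 whenever n > N1
N2 = (2 / epsilon) ** 0.5  # |b_n - 2| = 1/n**2 < epsilon/2 whenever n > N2
N = max(N1, N2)
ok = all(abs((1 + 1 / n) + (2 + 1 / n**2) - 3) < epsilon
         for n in range(int(N) + 1, int(N) + 200))
```
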

4.2.1 https://math.libretexts.org/@go/page/7938
 Exercise 4.2.3

Prove Theorem 4.2.1.

 theorem 4.2.2
If lim n→∞ an = a and lim n→∞ bn = b , then limn→∞ (an bn ) = ab .

Scrapwork:
Given \(\varepsilon > 0\), we want \(N\) so that if \(n > N\), then \(|a_n b_n - ab| < \varepsilon\). One of the standard tricks in analysis is to "uncancel." In this

case we will subtract and add a convenient term. Normally these would “cancel out,” which is why we say that we will uncancel to
put them back in. You already saw an example of this in proving the Reverse Triangle Inequality. In the present case, consider

\[\begin{aligned} |a_n b_n - ab| &= |a_n b_n - a_n b + a_n b - ab| \\ &\le |a_n b_n - a_n b| + |a_n b - ab| \\ &= |a_n|\,|b_n - b| + |b|\,|a_n - a| \end{aligned}\]

We can make this whole thing less than \(\varepsilon\), provided we make each term in the sum less than \(\frac{\varepsilon}{2}\). We can make \(|b|\,|a_n - a| < \frac{\varepsilon}{2}\) if we make \(|a_n - a| < \frac{\varepsilon}{2|b|}\). But wait! What if \(b = 0\)? We could handle this as a separate case or we can do the following "slick trick." Notice that we can add one more line to the above string of inequalities: \(|a_n|\,|b_n - b| + |b|\,|a_n - a| < |a_n|\,|b_n - b| + (|b| + 1)\,|a_n - a|\). Now we can make \(|a_n - a| < \frac{\varepsilon}{2(|b|+1)}\) and not worry about dividing by zero.
Making \(|a_n|\,|b_n - b| < \frac{\varepsilon}{2}\) requires a bit more finesse. At first glance, one would be tempted to try and make \(|b_n - b| < \frac{\varepsilon}{2|a_n|}\). Even if we ignore the fact that we could be dividing by zero (which we could handle), we have a bigger problem. According to the definition of \(\lim_{n\to\infty} b_n = b\), we can make \(|b_n - b|\) smaller than any given fixed positive number, as long as we make \(n\) large enough (larger than some \(N\) which goes with a given \(\varepsilon\)). Unfortunately, \(\frac{\varepsilon}{2|a_n|}\) is not fixed as it has the variable \(n\) in it; there is no reason to believe that a single \(N\) will work with all of these simultaneously. To handle this impasse, we need the following

 Lemma 4.2.2: A convergent sequence is bounded

If \(\lim_{n\to\infty} a_n = a\), then there exists \(B > 0\) such that \(|a_n| \le B\) for all \(n\).

End of Scrapwork.

 Exercise 4.2.4

Prove Lemma 4.2.2. [Hint: We know that there exists \(N\) such that if \(n > N\), then \(|a_n - a| < 1\). Let \(B = \max\left(|a_1|, |a_2|, \ldots, |a_{\lceil N \rceil}|, |a| + 1\right)\), where \(\lceil N \rceil\) represents the smallest integer greater than or equal to \(N\). Also, notice that this is not a convergence proof so it is not safe to think of \(N\) as a large number.]

Armed with this bound B , we can add on one more inequality to the above scrapwork to get
\[\begin{aligned} |a_n \cdot b_n - a \cdot b| &= |a_n \cdot b_n - a_n \cdot b + a_n \cdot b - a \cdot b| \\ &\le |a_n \cdot b_n - a_n \cdot b| + |a_n \cdot b - a \cdot b| \\ &= |a_n|\,|b_n - b| + |b|\,|a_n - a| \\ &< B\,|b_n - b| + (|b| + 1)\,|a_n - a| \end{aligned}\]

At this point, we should be able to make the last line of this less than ε .

End of Scrapwork.

 Exercise 4.2.5

Prove Theorem 4.2.2.

 Corollary to Theorem 4.2.2

If lim n→∞ an = a and c ∈ R then lim n→∞ c ⋅ an = c ⋅ a

 Exercise 4.2.6

Prove the above corollary to Theorem 4.2.2

Just as Theorem 4.2.2 says that the limit of a product is the product of the limits, we can prove the analogue for quotients.

 Theorem 4.2.3
Suppose \(\lim_{n\to\infty} a_n = a\) and \(\lim_{n\to\infty} b_n = b\). Also suppose \(b \ne 0\) and \(b_n \ne 0,\ \forall n\). Then \(\lim_{n\to\infty}\left(\frac{a_n}{b_n}\right) = \frac{a}{b}\).

Scrapwork:

To prove this, let's look at the special case of trying to prove \(\lim_{n\to\infty}\left(\frac{1}{b_n}\right) = \frac{1}{b}\). The general case will follow from this and Theorem 4.2.2. Consider \(\left|\frac{1}{b_n} - \frac{1}{b}\right| = \frac{|b - b_n|}{|b_n|\,|b|}\). We are faced with the same dilemma as before; we need to get \(\frac{1}{|b_n|}\) bounded above. This means we need to get \(|b_n|\) bounded away from zero (at least for large enough \(n\)).
This can be done as follows. Since \(b \ne 0\), then \(\frac{|b|}{2} > 0\). Thus, by the definition of \(\lim_{n\to\infty} b_n = b\), there exists \(N_1\) such that if \(n > N_1\), then \(|b| - |b_n| \le |b - b_n| < \frac{|b|}{2}\). Thus when \(n > N_1\), we have \(\frac{|b|}{2} < |b_n|\) and so \(\frac{1}{|b_n|} < \frac{2}{|b|}\). This says that for \(n > N_1\), \(\frac{|b - b_n|}{|b_n|\,|b|} < \frac{2}{|b|^2}\,|b - b_n|\). We should be able to make this smaller than a given \(\varepsilon > 0\), provided we make \(n\) large enough.

End of Scrapwork.

 Exercise 4.2.7

Prove Theorem 4.2.3

These theorems allow us to compute limits of complicated sequences and rigorously verify that these are, in fact, the correct limits
without resorting to the definition of a limit.

 Exercise 4.2.8

Identify all of the theorems implicitly used to show that

\[\lim_{n\to\infty} \frac{3n^3 - 100n + 1}{5n^3 + 4n^2 - 7} = \lim_{n\to\infty} \frac{n^3\left(3 - \frac{100}{n^2} + \frac{1}{n^3}\right)}{n^3\left(5 + \frac{4}{n} - \frac{7}{n^3}\right)} = \frac{3}{5} \tag{4.2.4}\]

Notice that this presumes that all of the individual limits exist. This will become evident as the limit is decomposed.
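A quick numerical sanity check (ours, not a proof) that 3/5 is the right value: the distance from 3/5 shrinks as n grows.

```python
# Distance of (3n^3 - 100n + 1)/(5n^3 + 4n^2 - 7) from 3/5 at n = 100, 1000, 10000.
def s(n):
    return (3 * n**3 - 100 * n + 1) / (5 * n**3 + 4 * n**2 - 7)

diffs = [abs(s(10**k) - 3 / 5) for k in (2, 3, 4)]
shrinking = diffs[0] > diffs[1] > diffs[2]
```
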

There is one more tool that will prove to be valuable.

 theorem 4.2.4

Let \((r_n)\), \((s_n)\), and \((t_n)\) be sequences of real numbers with \(r_n \le s_n \le t_n\) for all positive integers \(n\). Suppose \(\lim_{n\to\infty} r_n = s = \lim_{n\to\infty} t_n\). Then \((s_n)\) must converge and \(\lim_{n\to\infty} s_n = s\).

 Exercise 4.2.9
Prove Theorem 4.2.4. [Hint: This is probably a place where you would want to use \(s - \varepsilon < s_n < s + \varepsilon\) instead of \(|s_n - s| < \varepsilon\).]

The Squeeze Theorem holds even if \(r_n \le s_n \le t_n\) holds for only sufficiently large \(n\); i.e., for \(n\) larger than some fixed \(N_0\). This is true because when you find an \(N_1\) that works in the original proof, this can be modified by choosing \(N = \max(N_0, N_1)\). Also note that this theorem really says two things: \((s_n)\) converges and it converges to \(s\). This subtle point affects how one should properly use the Squeeze Theorem.

 Example 4.2.1:

Prove \(\lim_{n\to\infty} \frac{n+1}{n^2} = 0\).
Proof:
Notice that \(0 \le \frac{n+1}{n^2} \le \frac{n+n}{n^2} = \frac{2}{n}\).
Since \(\lim_{n\to\infty} 0 = 0 = \lim_{n\to\infty} \frac{2}{n}\), then by the Squeeze Theorem, \(\lim_{n\to\infty} \frac{n+1}{n^2} = 0\).
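The squeeze itself is easy to verify numerically (our own sketch):

```python
# The bound used in the proof: 0 <= (n+1)/n^2 <= 2/n for every positive integer n.
squeezed = all(0 <= (n + 1) / n**2 <= 2 / n for n in range(1, 1000))
```
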

Notice that this proof is completely rigorous. Also notice that this is the proper way to use the Squeeze Theorem. Here is an
example of an improper use of the Squeeze Theorem.
How not to prove Example 4.2.1. Notice that
n+1 n+n 2
0 ≤ ≤ = (4.2.5)
n2 n2 n

so
n+1 2
0 = lim 0 ≤ lim ≤ lim =0 (4.2.6)
n→∞ n→∞ 2 n→∞
n n

and
n+1
lim =0 (4.2.7)
n→∞ 2
n

This is incorrect in form because it presumes that \(\lim_{n\to\infty} \frac{n+1}{n^2}\) exists, which we don't yet know. If we knew that the limit existed
to begin with, then this would be fine. The Squeeze Theorem proves that the limit does in fact exist, but it must be so stated.
These general theorems will allow us to rigorously explore convergence of power series in the next chapter without having to
appeal directly to the definition of convergence. However, you should remember that we used the definition to prove these results
and there will be times when we will need to apply the definition directly. However, before we go into that, let’s examine
divergence a bit more closely.

This page titled 4.2: The Limit as a Primary Tool is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

4.3: Divergence of a Series
 Learning Objectives
Explain divergence

In Theorem 3.2.1 we saw that there is a rearrangement of the alternating Harmonic series which diverges to \(\infty\) or \(-\infty\). In that
section we did not fuss over any formal notions of divergence. We assumed instead that you are already familiar with the concept
of divergence, probably from taking calculus in the past.
However we are now in the process of building precise, formal definitions for the concepts we will be using so we define the
divergence of a sequence as follows.

 Definition 4.3.1
A sequence of real numbers \((s_n)_{n=1}^{\infty}\) diverges if it does not converge to any \(a \in \mathbb{R}\).

It may seem unnecessarily pedantic of us to insist on formally stating such an obvious definition. After all “converge” and
“diverge” are opposites in ordinary English. Why wouldn’t they be mathematically opposite too? Why do we have to go to the
trouble of formally defining both of them? Since they are opposites defining one implicitly defines the other doesn’t it?
One way to answer that criticism is to state that in mathematics we always work from precisely stated definitions and tightly
reasoned logical arguments.
But this is just more pedantry. It is a way of saying, “Because we said so” all dressed up in imposing language. We need to do
better than that.
One reason for providing formal definitions of both convergence and divergence is that in mathematics we frequently co-opt words
from natural languages like English and imbue them with mathematical meaning that is only tangentially related to the original
English definition. When we take two such words which happen to be opposites in English and give them mathematical meanings
which are not opposites it can be very confusing, especially at first.
This is what happened with the words “open” and “closed.” These are opposites in English: “not open” is “closed,” “not closed” is
“open,” and there is nothing which is both open and closed. But recall that an open interval on the real line, (a, b), is one that does
not include either of its endpoints while a closed interval, [a, b], is one that includes both of them.
These may seem like opposites at first but they are not. To see this observe that the interval (a, b] is neither open nor closed since it
only contains one of its endpoints. 1 If “open” and “closed” were mathematically opposite then every interval would be either open
or closed.
Mathematicians have learned to be extremely careful about this sort of thing. In the case of convergence and divergence of a series,
even though these words are actually opposites mathematically (every sequence either converges or diverges and no sequence
converges and diverges) it is better to say this explicitly so there can be no confusion.
A sequence \((a_n)_{n=1}^{\infty}\) can only converge to a real number, \(a\), in one way: by getting arbitrarily close to \(a\). However there are several ways a sequence might diverge.

 Example 4.3.1:

Consider the sequence \((n)_{n=1}^{\infty}\). This clearly diverges by getting larger and larger ... Oops! Let's be careful. The sequence \(\left(1 - \frac{1}{n}\right)_{n=1}^{\infty}\) gets larger and larger too, but it converges. What we meant to say was that the terms of the sequence \((n)_{n=1}^{\infty}\) become arbitrarily large as \(n\) increases.


This is clearly a divergent sequence but it may not be clear how to prove this formally. Here’s one way.
To show divergence we must show that the sequence satisfies the negation of the definition of convergence. That is, we must
show that for every r ∈ R there is an ε > 0 such that for every N ∈ R , there is an n > N with |n − r| ≥ ε .
So let \(\varepsilon = 1\), and let \(r \in \mathbb{R}\) be given. Let \(N = r + 2\). Then for every \(n > N\), \(|n - r| > |(r + 2) - r| = 2 > 1\). Therefore the sequence diverges.
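The witness from this proof can be exercised numerically. The rounding to an integer index is our own addition (a sequence index must be a natural number), so treat this as an illustrative sketch:

```python
import math

# For any r and any proposed N, produce an index n > N with |n - r| >= 1.
def witness(r, N):
    return math.ceil(max(N, r + 2)) + 1  # certainly > N and at least 2 past r

checks = [abs(witness(r, N) - r) >= 1 for r in (0, 3.5, 100) for N in (1, 50)]
# every check succeeds, so no N confines (n) within distance 1 of any r
```
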

4.3.1 https://math.libretexts.org/@go/page/7939
This seems to have been rather more work than we should have to do for such a simple problem. Here’s another way which
highlights this particular type of divergence.
First we’ll need a new definition:

 Definition 4.3.2

A sequence, \((a_n)_{n=1}^{\infty}\), diverges to positive infinity if for every real number \(r\), there is a real number \(N\) such that \(n > N \Rightarrow a_n > r\).
A sequence, \((a_n)_{n=1}^{\infty}\), diverges to negative infinity if for every real number \(r\), there is a real number \(N\) such that \(n > N \Rightarrow a_n < r\).
A sequence is said to diverge to infinity if it diverges to either positive or negative infinity.

In practice we want to think of |r| as a very large number. This definition says that a sequence diverges to infinity if it becomes
arbitrarily large as n increases, and similarly for divergence to negative infinity.

 Exercise 4.3.1

Show that \((n)_{n=1}^{\infty}\) diverges to infinity.

 Exercise 4.3.2

Show that if \((a_n)_{n=1}^{\infty}\) diverges to infinity then \((a_n)_{n=1}^{\infty}\) diverges.

We will denote divergence to infinity as


\[\lim_{n\to\infty} a_n = \pm\infty \tag{4.3.1}\]

However, strictly speaking this is an abuse of notation since the symbol ∞ does not represent a real number. This notation can be
very problematic since it looks so much like the notation we use to denote convergence: \(\lim_{n\to\infty} a_n = a\).

Nevertheless, the notation is appropriate because divergence to infinity is “nice” divergence in the sense that it shares many of the
properties of convergence, as the next problem shows.

 Exercise 4.3.3

Suppose \(\lim_{n\to\infty} a_n = \infty\) and \(\lim_{n\to\infty} b_n = \infty\).
a. Show that \(\lim_{n\to\infty} (a_n + b_n) = \infty\)
b. Show that \(\lim_{n\to\infty} a_n b_n = \infty\)
c. Is it true that \(\lim_{n\to\infty} \frac{a_n}{b_n} = \infty\)? Explain.

Because divergence to positive or negative infinity shares some of the properties of convergence it is easy to get careless with it.
Remember that even though we write \(\lim_{n\to\infty} a_n = \infty\), this is still a divergent sequence in the sense that \(\lim_{n\to\infty} a_n\) does not exist.
n→∞ n

The symbol ∞ does not represent a real number. This is just a convenient notational shorthand telling us that the sequence diverges
by becoming arbitrarily large.

 Exercise 4.3.4

Suppose \(\lim_{n\to\infty} a_n = \infty\) and \(\lim_{n\to\infty} b_n = -\infty\) and \(\alpha \in \mathbb{R}\). Prove or give a counterexample:
a. limn→∞ an + bn = ∞

b. limn→∞ an bn = ∞

c. limn→∞ α an = ∞

d. limn→∞ α an = −∞

Finally, a sequence can diverge in other ways as the following problem displays.

 Exercise 4.3.5

Show that each of the following sequences diverges.
a. \(\left((-1)^n\right)_{n=1}^{\infty}\)
b. \(\left((-1)^n n\right)_{n=1}^{\infty}\)
c. \(a_n = \begin{cases} 1 & \text{if } n = 2^p \text{ for some } p \in \mathbb{N} \\ \frac{1}{n} & \text{otherwise} \end{cases}\)

 Exercise 4.3.6

Suppose that \((a_n)_{n=1}^{\infty}\) diverges but not to infinity and that \(\alpha\) is a real number. What conditions on \(\alpha\) will guarantee that:
a. \((\alpha a_n)_{n=1}^{\infty}\) converges?
b. \((\alpha a_n)_{n=1}^{\infty}\) diverges?

 Exercise 4.3.7

Show that if \(|r| > 1\) then \(\left(r^n\right)_{n=1}^{\infty}\) diverges. Will it diverge to infinity?

References
1
It is also true that \((-\infty, \infty)\) is both open and closed, but an explanation of this would take us too far afield.

This page titled 4.3: Divergence of a Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene
Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

4.E: Convergence of Sequences and Series (Exercises)
Q1
Prove that if limn→∞ sn = s then limn→∞ | sn | = |s| . Prove that the converse is true when s =0 , but it is not necessarily true
otherwise.

Q2
a. Let \((s_n)\) and \((t_n)\) be sequences with \(s_n \le t_n,\ \forall n\). Suppose \(\lim_{n\to\infty} s_n = s\) and \(\lim_{n\to\infty} t_n = t\). Prove \(s \le t\). [Hint: Assume for contradiction that \(s > t\) and use the definition of convergence with \(\varepsilon = \frac{s-t}{2}\) to produce an \(n\) with \(s_n > t_n\).]
b. Prove that if a sequence converges, then its limit is unique. That is, prove that if \(\lim_{n\to\infty} s_n = s\) and \(\lim_{n\to\infty} s_n = t\), then \(s = t\).

Q3
Prove that if the sequence \((s_n)\) is bounded then \(\lim_{n\to\infty}\left(\frac{s_n}{n}\right) = 0\).

Q4
a. Prove that if \(x \ne 1\), then
\[1 + x + x^2 + \cdots + x^n = \frac{1 - x^{n+1}}{1 - x} \tag{4.E.1}\]
b. Use (a) to prove that if \(|x| < 1\), then \(\lim_{n\to\infty}\left(\sum_{j=0}^{n} x^j\right) = \frac{1}{1-x}\).

Q5
Prove
\[\lim_{n\to\infty} \frac{a_0 + a_1 n + a_2 n^2 + \cdots + a_k n^k}{b_0 + b_1 n + b_2 n^2 + \cdots + b_k n^k} = \frac{a_k}{b_k} \tag{4.E.2}\]
provided \(b_k \ne 0\). [Notice that since a polynomial only has finitely many roots, the denominator will be non-zero when \(n\) is sufficiently large.]

Q6
Prove that if lim n→∞ sn = s and lim n→∞ (sn − tn ) = 0 , then lim n→∞ tn = s .

Q7
a. Prove that if \(\lim_{n\to\infty} s_n = s\) and \(s < t\), then there exists a real number \(N\) such that if \(n > N\) then \(s_n < t\).
b. Prove that if \(\lim_{n\to\infty} s_n = s\) and \(r < s\), then there exists a real number \(M\) such that if \(n > M\) then \(r < s_n\).

Q8
Suppose \((s_n)\) is a sequence of positive numbers such that \(\lim_{n\to\infty}\left(\frac{s_{n+1}}{s_n}\right) = L\).
a. Prove that if \(L < 1\), then \(\lim_{n\to\infty} s_n = 0\). [Hint: Choose \(R\) with \(L < R < 1\). By the previous problem, there exists \(N\) such that if \(n > N\), then \(\frac{s_{n+1}}{s_n} < R\). Let \(n_0 > N\) be fixed and show \(s_{n_0+k} < R^k s_{n_0}\). Conclude that \(\lim_{k\to\infty} s_{n_0+k} = 0\) and let \(n = n_0 + k\).]
b. Let \(c\) be a positive real number. Prove
\[\lim_{n\to\infty}\left(\frac{c^n}{n!}\right) = 0 \tag{4.E.3}\]

This page titled 4.E: Convergence of Sequences and Series (Exercises) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

4.E.1 https://math.libretexts.org/@go/page/7940
CHAPTER OVERVIEW

5: Convergence of the Taylor Series- A “Tayl” of Three Remainders


5.1: The Integral Form of the Remainder
5.2: Lagrange’s Form of the Remainder
5.3: Cauchy’s Form of the Remainder
5.E: Convergence of the Taylor Series- A “Tayl” of Three Remainders (Exercises)

Thumbnail: Brook Taylor (1685-1731) was an English mathematician who is best known for Taylor's theorem and the Taylor
series.

5.1: The Integral Form of the Remainder
 Learning Objectives
Explain the integral form of the remainder

Now that we have a rigorous definition of the convergence of a sequence, let’s apply this to Taylor series. Recall that the Taylor
series of a function f (x) expanded about the point a is given by
\[
\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots \tag{5.1.1}
\]

When we say that \(\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n\) converges to \(f(x)\) for a particular value of \(x\), what we mean is that the sequence of partial sums

\[
\left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right)_{n=0}^{\infty} = \left(f(a),\ f(a) + \frac{f'(a)}{1!}(x-a),\ f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2,\ \cdots\right) \tag{5.1.2}
\]

converges to the number \(f(x)\). Note that the index in the summation was changed to \(j\) to allow \(n\) to represent the index of the sequence of partial sums. As intimidating as this may look, bear in mind that for a fixed real number \(x\), this is still a sequence of real numbers, so saying \(f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n\) means that \(\lim_{n\to\infty}\left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) = f(x)\), and in the previous chapter we developed some tools to examine this phenomenon. In particular, we know that \(\lim_{n\to\infty}\left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) = f(x)\) is equivalent to

\[
\lim_{n\to\infty}\left[f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right)\right] = 0 \tag{5.1.3}
\]

We saw an example of this in the last chapter with the geometric series \(1 + x + x^2 + x^3 + \cdots\). Problem Q4 of the last chapter basically had you show that this series converges to \(\frac{1}{1-x}\) for \(|x| < 1\) by showing that \(\lim_{n\to\infty}\left[\frac{1}{1-x} - \left(\sum_{j=0}^{n} x^j\right)\right] = 0\).
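That difference can be watched shrink numerically. The sketch below is my own code (not part of the text): it computes \(\frac{1}{1-x} - \sum_{j=0}^{n} x^j\) for a sample \(x\) with \(|x| < 1\).

```python
# The remainder 1/(1-x) - sum_{j=0}^n x^j equals x^{n+1}/(1-x) for |x| < 1,
# so it should shrink geometrically as n grows (my own illustration).
def remainder(x, n):
    partial = sum(x**j for j in range(n + 1))
    return 1 / (1 - x) - partial

for n in (5, 10, 20, 40):
    print(n, abs(remainder(0.7, n)))
```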

There is generally not a readily recognizable closed form for the partial sums of a Taylor series. The geometric series is a special case. Fortunately, for the issue at hand (convergence of a Taylor series), we don't need to analyze the series itself. What we need to show is that the difference between the function and the \(n\)th partial sum converges to zero. This difference is called the remainder (of the Taylor series). (Why?)


While it is true that the remainder is simply

\[
f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) \tag{5.1.4}
\]

this form is not easy to work with. Fortunately, a number of alternate versions of this remainder are available. We will explore these in this chapter. Recall the result from Theorem 3.1.2 of Chapter 3:

\[
f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \frac{1}{n!}\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt \tag{5.1.5}
\]

We can use this by rewriting it as

\[
f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) = \frac{1}{n!}\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt \tag{5.1.6}
\]

The right-hand side of Equation 5.1.6 is called the integral form of the remainder for the Taylor series of \(f(x)\), and the Taylor series will converge to \(f(x)\) exactly when the sequence \(\left(\frac{1}{n!}\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt\right)\) converges to zero. It turns out

that this form of the remainder is often easier to handle than the original \(f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right)\), and we can use it to obtain some general results.

 Theorem 5.1.1: Taylor’s Series

If there exists a real number \(B\) such that \(|f^{(n+1)}(t)| \le B\) for all nonnegative integers \(n\) and for all \(t\) on an interval containing \(a\) and \(x\), then

\[
\lim_{n\to\infty}\left(\frac{1}{n!}\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt\right) = 0 \tag{5.1.7}
\]

and so

\[
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n \tag{5.1.8}
\]

In order to prove this, it might help to first prove the following Lemma.

 Lemma 5.1.1: Triangle Inequality for Integrals

If \(f\) and \(|f|\) are integrable functions and \(a \le b\), then

\[
\left|\int_{t=a}^{b} f(t)\,dt\right| \le \int_{t=a}^{b} |f(t)|\,dt \tag{5.1.9}
\]

 Exercise 5.1.1

Prove Lemma 5.1.1.

Hint
−|f (t)| ≤ f (t) ≤ |f (t)| .

 Exercise 5.1.2
Prove Theorem 5.1.1.

Hint
You might want to use Problem Q8 of Chapter 4. Also there are two cases to consider: \(a < x\) and \(x < a\) (the case \(x = a\) is trivial). You will find that this is true in general. This is why we will often indicate that \(t\) is between \(a\) and \(x\), as in the theorem. In the case \(x < a\), notice that

\[
\left|\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt\right| = \left|(-1)^{n+1}\int_{t=a}^{x} f^{(n+1)}(t)(t-x)^n\,dt\right| = \left|\int_{t=a}^{x} f^{(n+1)}(t)(t-x)^n\,dt\right|
\]

 Exercise 5.1.3

Use Theorem 5.1.1 to prove that for any real number \(x\)

a. \(\displaystyle \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}\)

b. \(\displaystyle \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}\)

c. \(\displaystyle e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}\)

Part c of Exercise 5.1.3 shows that the Taylor series of \(e^x\) expanded at zero converges to \(e^x\) for any real number \(x\). Theorem 5.1.1 can be used in a similar fashion to show that

\[
e^x = \sum_{n=0}^{\infty} \frac{e^a (x-a)^n}{n!} \tag{5.1.10}
\]

for any real numbers \(a\) and \(x\).
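As a concrete check of part c (my own code, not part of the text), the partial sums of \(\sum x^n/n!\) can be compared against the library exponential:

```python
import math

# My own numerical check: partial sums of sum x^n/n! approach e^x.
def partial_exp(x, N):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

for N in (5, 10, 20):
    print(N, abs(partial_exp(3.0, N) - math.exp(3.0)))
```

The error collapses quickly, in line with the factorial in the denominator of the remainder.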


Recall that in Section 2.1 we showed that if we define the function \(E(x)\) by the power series \(\sum_{n=0}^{\infty} \frac{x^n}{n!}\), then \(E(x+y) = E(x)E(y)\). This, of course, is just the familiar addition property of integer exponents extended to any real number.

In Chapter 2 we had to assume that defining \(E(x)\) as a series was meaningful because we did not address the convergence of the series in that chapter. Now that we know the series converges for any real number, we see that the definition

\[
f(x) = e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} \tag{5.1.11}
\]

is in fact valid.
Assuming that we can differentiate this series term-by-term, it is straightforward to show that \(f'(x) = f(x)\). Along with Taylor's formula this can then be used to show that \(e^{a+b} = e^a e^b\) more elegantly than the rather cumbersome proof in Section 2.1, as the following problem shows.

 Exercise 5.1.4

Recall that if \(f(x) = e^x\) then \(f'(x) = e^x\). Use this along with the Taylor series expansion of \(e^x\) about \(a\) to show that

\[
e^{a+b} = e^a e^b
\]

Theorem 5.1.1 is a nice “first step” toward a rigorous theory of the convergence of Taylor series, but it is not applicable in all cases. For example, consider the function \(f(x) = \sqrt{1+x}\). As we saw in Chapter 2, Exercise 2.2.9, this function's Maclaurin series (the binomial series for \((1+x)^{1/2}\)) appears to be converging to the function for \(x \in (-1, 1)\). While this is, in fact, true, the above proposition does not apply. If we consider the derivatives of \(f(t) = (1+t)^{1/2}\), we obtain:

\[
f'(t) = \frac{1}{2}(1+t)^{\frac{1}{2}-1} \tag{5.1.12}
\]

\[
f''(t) = \frac{1}{2}\left(\frac{1}{2}-1\right)(1+t)^{\frac{1}{2}-2} \tag{5.1.13}
\]

\[
f'''(t) = \frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)(1+t)^{\frac{1}{2}-3} \tag{5.1.14}
\]

\[
\vdots \tag{5.1.15}
\]

\[
f^{(n+1)}(t) = \frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)\cdots\left(\frac{1}{2}-n\right)(1+t)^{\frac{1}{2}-(n+1)} \tag{5.1.16}
\]

Notice that

\[
\left|f^{(n+1)}(0)\right| = \frac{1}{2}\left(1-\frac{1}{2}\right)\left(2-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right) \tag{5.1.17}
\]

Since this sequence grows without bound as \(n \to \infty\), there is no chance for us to find a number \(B\) to act as a bound for all of the derivatives of \(f\) on any interval containing \(0\) and \(x\), and so the hypothesis of Theorem 5.1.1 will never be satisfied. We need a more delicate argument to prove that

\[
\sqrt{1+x} = 1 + \frac{1}{2}x + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2 + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3 + \cdots \tag{5.1.18}
\]

is valid for \(x \in (-1, 1)\). To accomplish this task, we will need to express the remainder of the Taylor series differently. Fortunately, there are at least two such alternate forms.
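The unbounded growth of these derivatives is easy to see numerically; the sketch below is my own code illustrating equation 5.1.17:

```python
# |f^{(n+1)}(0)| = (1/2)(1 - 1/2)(2 - 1/2)...(n - 1/2) grows without bound,
# which is why no single constant B can satisfy Theorem 5.1.1 here.
def deriv_at_zero_abs(n):
    prod = 0.5
    for k in range(1, n + 1):
        prod *= k - 0.5
    return prod

for n in (1, 5, 10, 20):
    print(n, deriv_at_zero_abs(n))
```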

5.2: Lagrange’s Form of the Remainder
 Learning Objectives
Explain Lagrange's Form of the remainder

Joseph-Louis Lagrange provided an alternate form for the remainder in Taylor series in his 1797 work Théorie des fonctions analytiques. Lagrange's form of the remainder is as follows.

 Theorem 5.2.1: Lagrange’s Form of the Remainder

Suppose \(f\) is a function such that \(f^{(n+1)}(t)\) is continuous on an interval containing \(a\) and \(x\). Then

\[
f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \tag{5.2.1}
\]

where \(c\) is some number between \(a\) and \(x\).

 Proof

Note first that the result is true when \(x = a\), as both sides reduce to \(0\) (in that case \(c = x = a\)). We will prove the case where \(a < x\); the case \(x < a\) will be an exercise.

First, we already have

\[
f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) = \frac{1}{n!}\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt
\]

so it suffices to show that

\[
\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt = \frac{f^{(n+1)}(c)}{n+1}(x-a)^{n+1}
\]

for some \(c\) with \(c \in [a, x]\). To this end, let


\[
M = \max_{a \le t \le x}\left(f^{(n+1)}(t)\right)
\]

and

\[
m = \min_{a \le t \le x}\left(f^{(n+1)}(t)\right)
\]

Note that for all \(t \in [a, x]\), we have \(m \le f^{(n+1)}(t) \le M\). Since \(x - t \ge 0\), this gives us

\[
m(x-t)^n \le f^{(n+1)}(t)(x-t)^n \le M(x-t)^n
\]

and so

\[
\int_{t=a}^{x} m(x-t)^n\,dt \le \int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt \le \int_{t=a}^{x} M(x-t)^n\,dt
\]

Computing the outside integrals, we have

\[
m\int_{t=a}^{x}(x-t)^n\,dt \le \int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt \le M\int_{t=a}^{x}(x-t)^n\,dt
\]

\[
m\,\frac{(x-a)^{n+1}}{n+1} \le \int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt \le M\,\frac{(x-a)^{n+1}}{n+1}
\]

\[
m \le \frac{\displaystyle\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt}{\left(\dfrac{(x-a)^{n+1}}{n+1}\right)} \le M
\]

Since

\[
\frac{\displaystyle\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt}{\left(\dfrac{(x-a)^{n+1}}{n+1}\right)}
\]

is a value that lies between the maximum and minimum of \(f^{(n+1)}\) on \([a, x]\), then by the Intermediate Value Theorem, there must exist a number \(c \in [a, x]\) with

\[
f^{(n+1)}(c) = \frac{\displaystyle\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt}{\left(\dfrac{(x-a)^{n+1}}{n+1}\right)}
\]

This gives us

\[
\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt = \frac{f^{(n+1)}(c)}{n+1}(x-a)^{n+1}
\]

and, after dividing both sides by \(n!\), the result follows.
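The theorem can also be probed numerically. The sketch below is my own code (not from the text), taking \(f(x) = e^x\) so that \(f^{(n+1)} = e^x\); it solves the Lagrange identity \(R_n = \frac{e^c (x-a)^{n+1}}{(n+1)!}\) for \(c\) and confirms the recovered value lands in \([a, x]\).

```python
import math

# For f = exp, the exact remainder determines c explicitly (my own sketch).
def lagrange_c(x, a, n):
    poly = sum(math.exp(a) * (x - a)**j / math.factorial(j) for j in range(n + 1))
    R = math.exp(x) - poly                                        # exact remainder
    return math.log(R * math.factorial(n + 1) / (x - a)**(n + 1)) # solve e^c = ...

for n in (2, 4, 8):
    print(n, lagrange_c(1.0, 0.0, n))  # each value lies strictly between 0 and 1
```

Notice that the recovered \(c\) changes with \(n\), exactly the caution raised below.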


 Exercise 5.2.1

Prove Theorem 5.2.1 for the case where x < a .

Hint
Note that

\[
\int_{t=a}^{x} f^{(n+1)}(t)(x-t)^n\,dt = (-1)^{n+1}\int_{t=x}^{a} f^{(n+1)}(t)(t-x)^n\,dt
\]

Use the same argument on this integral. It will work out in the end. Really! You just need to keep track of all of the negatives.

This is not Lagrange’s proof. He did not use the integral form of the remainder. However, this is similar
to Lagrange’s proof in that he also used the Intermediate Value Theorem (IVT) and Extreme Value

Theorem (EVT) much as we did. In Lagrange’s day, these were taken to be obviously true for a
continuous function and we have followed Lagrange’s lead by assuming the IVT and the EVT. However,
in mathematics we need to keep our assumptions few and simple. The IVT and the EVT do not satisfy
this need in the sense that both can be proved from simpler ideas. We will return to this in Chapter 7.
Also, a word of caution about this: Lagrange's form of the remainder is \(\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}\), where \(c\) is some number between \(a\) and \(x\). The proof does not indicate what this \(c\) might be and, in fact, this \(c\) changes as \(n\) changes. All we know is that this \(c\) lies between \(a\) and \(x\). To illustrate this issue and its potential dangers, consider the following problem where we have a chance to compute the value of \(c\) for the function \(f(x) = \frac{1}{1+x}\).

 Exercise 5.2.2

This problem investigates the Taylor series representation

\[
\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots
\]

a. Use the fact that \(\frac{1 - (-x)^{n+1}}{1+x} = 1 - x + x^2 - x^3 + \cdots + (-x)^n\) to compute the remainder

\[
\frac{1}{1+x} - \left(1 - x + x^2 - x^3 + \cdots + (-x)^n\right)
\]

Specifically, compute this remainder when \(x = 1\) and conclude that the Taylor series does not converge to \(\frac{1}{1+x}\) when \(x = 1\).

b. Compare the remainder in part a with the Lagrange form of the remainder to determine what \(c\) is when \(x = 1\).

c. Consider the following argument: If \(f(x) = \frac{1}{1+x}\), then

\[
f^{(n+1)}(c) = \frac{(-1)^{n+1}(n+1)!}{(1+c)^{n+2}}
\]

so the Lagrange form of the remainder when \(x = 1\) is given by

\[
\frac{(-1)^{n+1}(n+1)!}{(n+1)!\,(1+c)^{n+2}} = \frac{(-1)^{n+1}}{(1+c)^{n+2}}
\]

where \(c \in [0, 1]\). It can be seen in part b that \(c \ne 0\). Thus \(1 + c > 1\) and so by Exercise 4.1.4, the Lagrange remainder converges to \(0\) as \(n \to \infty\). This argument would suggest that the Taylor series converges to \(\frac{1}{1+x}\) for \(x = 1\). However, we know from part (a) that this is incorrect. What is wrong with the argument?
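One way to explore part b numerically (my own computation, taking for granted that the part-a remainder at \(x = 1\) works out to \((-1)^{n+1}/2\), which you should verify): equating it with the Lagrange form forces \((1+c)^{n+2} = 2\).

```python
# If (-1)^{n+1}/(1+c)^{n+2} = (-1)^{n+1}/2, then c = 2**(1/(n+2)) - 1.
# Note how c depends on n and slides toward 0 as n grows.
cs = {n: 2**(1 / (n + 2)) - 1 for n in (1, 5, 10, 100)}
print(cs)
```

This makes the trap in part c concrete: \(c\) is never \(0\), but it is not a fixed number either.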

Even though there are potential dangers in misusing the Lagrange form of the remainder, it is a useful
form. For example, armed with the Lagrange form of the remainder, we can prove the following
theorem.

 Theorem 5.2.2: The binomial series

The binomial series

\[
1 + \frac{1}{2}x + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2 + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3 + \cdots \tag{5.2.2}
\]

converges to \(\sqrt{1+x}\) for \(x \in [0, 1]\).

 Proof
−−−−−
First note that the binomial series is, in fact, the Taylor series for the function \(f(x) = \sqrt{1+x}\) expanded about \(a = 0\). If we let \(x\) be a fixed number with \(0 \le x \le 1\), then it suffices to show that the Lagrange form of the remainder converges to \(0\). With this in mind, notice that

\[
f^{(n+1)}(t) = \frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)(1+t)^{\frac{1}{2}-(n+1)}
\]

and so the Lagrange form of the remainder is

\[
\frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1} = \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{(n+1)!}\cdot\frac{x^{n+1}}{(1+c)^{n+\frac{1}{2}}}
\]

where \(c\) is some number between \(0\) and \(x\). Since \(0 \le x \le 1\) and \(1 + c \ge 1\), then we have \(\frac{1}{1+c} \le 1\), and so

\[
\begin{aligned}
0 &\le \left|\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{(n+1)!}\cdot\frac{x^{n+1}}{(1+c)^{n+\frac{1}{2}}}\right| \\
&= \frac{\frac{1}{2}\left(1-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right)}{(n+1)!}\cdot\frac{x^{n+1}}{(1+c)^{n+\frac{1}{2}}} \\
&= \frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{3}{2}\right)\left(\frac{5}{2}\right)\cdots\left(\frac{2n-1}{2}\right)}{(n+1)!}\, x^{n+1}\,\frac{1}{(1+c)^{n+\frac{1}{2}}} \\
&\le \frac{1\cdot 1\cdot 3\cdot 5\cdots(2n-1)}{2^{n+1}(n+1)!} \\
&= \frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots 2n\cdot(2n+2)} \\
&= \frac{1}{2}\cdot\frac{3}{4}\cdot\frac{5}{6}\cdots\frac{2n-1}{2n}\cdot\frac{1}{2n+2} \\
&\le \frac{1}{2n+2}
\end{aligned}
\]

Since \(\lim_{n\to\infty}\frac{1}{2n+2} = 0 = \lim_{n\to\infty} 0\), then by the Squeeze Theorem,

\[
\lim_{n\to\infty}\left|\frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}\right| = 0
\]

so

\[
\lim_{n\to\infty}\left(\frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}\right) = 0
\]

Thus the Taylor series \(1 + \frac{1}{2}x + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2 + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3 + \cdots\) converges to \(\sqrt{1+x}\) for \(0 \le x \le 1\).

Unfortunately, this proof will not work for \(-1 < x < 0\). In this case, the fact that \(x \le c \le 0\) makes \(1 + c \le 1\). Thus \(\frac{1}{1+c} \ge 1\) and so the inequality

\[
\frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{3}{2}\right)\left(\frac{5}{2}\right)\cdots\left(\frac{2n-1}{2}\right)}{(n+1)!}\cdot\frac{|x|^{n+1}}{(1+c)^{n+\frac{1}{2}}} \le \frac{1\cdot 1\cdot 3\cdot 5\cdots(2n-1)}{2^{n+1}(n+1)!} \tag{5.2.3}
\]

may not hold.
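Before tackling the negative case, here is a quick numerical check (my own code, not from the text) of the proof just given for \(0 \le x \le 1\): the error of the degree-\(n\) partial sum should be no worse than the bound \(\frac{1}{2n+2}\).

```python
import math

def binom_partial(x, n):
    # partial sum of the binomial series, building (1/2 choose k) iteratively
    total, coeff = 1.0, 1.0
    for k in range(1, n + 1):
        coeff *= (0.5 - (k - 1)) / k
        total += coeff * x**k
    return total

n = 20
err = abs(math.sqrt(2.0) - binom_partial(1.0, n))
print(err, "<=", 1 / (2 * n + 2))
```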

 Exercise 5.2.3
Show that if \(-\frac{1}{2} \le x \le c \le 0\), then \(\left|\frac{x}{1+c}\right| \le 1\), and modify the above proof to show that the binomial series converges to \(\sqrt{1+x}\) for \(x \in \left[-\frac{1}{2}, 0\right]\).

To take care of the case where \(-1 < x < -\frac{1}{2}\), we will use yet another form of the remainder for Taylor series. However, before we tackle that, we will use the Lagrange form of the remainder to address something mentioned in Chapter 3. Recall that we noticed that the series representation \(\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots\) did not work when \(x = 1\); however, we noticed that the series obtained by integrating term by term did seem to converge to the antiderivative of \(\frac{1}{1+x}\). Specifically, we have the Taylor series

\[
\ln(1+x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \cdots \tag{5.2.4}
\]

Substituting \(x = 1\) into this provided the convergent series \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\). We made the claim that this, in fact, converges to \(\ln 2\), but that this was not obvious. The Lagrange form of the remainder gives us the machinery to prove this.
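The claim itself is easy to watch numerically; the sketch below is my own code. The partial sums creep toward \(\ln 2\), though quite slowly.

```python
import math

# Partial sums of the alternating harmonic series versus ln 2.
def alt_harmonic(n):
    return sum((-1)**(k + 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, abs(alt_harmonic(n) - math.log(2)))
```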

 Exercise 5.2.4
a. Compute the Lagrange form of the remainder for the Maclaurin series for ln(1 + x) .
b. Show that when x = 1 , the Lagrange form of the remainder converges to 0 and so the equation
\(\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\) is actually correct.

5.3: Cauchy’s Form of the Remainder
 Learning Objectives
Explain Cauchy's form of the remainder

In his 1823 work, Résumé des leçons données à l'École royale polytechnique sur le calcul infinitésimal, Augustin Cauchy provided another form of the remainder for Taylor series.

 Theorem 5.3.1: Cauchy’s Form of the Remainder

Suppose \(f\) is a function such that \(f^{(n+1)}(t)\) is continuous on an interval containing \(a\) and \(x\). Then

\[
f(x) - \left(\sum_{j=0}^{n} \frac{f^{(j)}(a)}{j!}(x-a)^j\right) = \frac{f^{(n+1)}(c)}{n!}(x-c)^n(x-a) \tag{5.3.1}
\]

where \(c\) is some number between \(a\) and \(x\).

 Exercise 5.3.1
Prove Theorem 5.3.1 using an argument similar to the one used in the proof of Theorem 5.2.1. Don’t forget there are two cases
to consider.

Using Cauchy's form of the remainder, we can prove that the binomial series

\[
1 + \frac{1}{2}x + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2 + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3 + \cdots \tag{5.3.2}
\]

converges to \(\sqrt{1+x}\) for \(x \in (-1, 0)\). With this in mind, let \(x\) be a fixed number with \(-1 < x < 0\) and consider that the binomial series is the Maclaurin series for the function \(f(x) = (1+x)^{1/2}\). As we saw before,

\[
f^{(n+1)}(t) = \frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)(1+t)^{\frac{1}{2}-(n+1)} \tag{5.3.3}
\]

so the Cauchy form of the remainder is given by

\[
0 \le \left|\frac{f^{(n+1)}(c)}{n!}(x-c)^n(x-0)\right| = \left|\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{n!}\cdot\frac{(x-c)^n}{(1+c)^{n+\frac{1}{2}}}\cdot x\right| \tag{5.3.4}
\]

where \(c\) is some number with \(x \le c \le 0\). Thus we have

\[
\begin{aligned}
0 &\le \left|\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{n!}\cdot\frac{(x-c)^n\, x}{(1+c)^{n+\frac{1}{2}}}\right| \\
&= \frac{\frac{1}{2}\left(1-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right)}{n!}\cdot\frac{|x-c|^n\, |x|}{(1+c)^{n+\frac{1}{2}}} \\
&= \frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{3}{2}\right)\left(\frac{5}{2}\right)\cdots\left(\frac{2n-1}{2}\right)}{n!}\cdot\frac{(c-x)^n}{(1+c)^n}\cdot\frac{|x|}{\sqrt{1+c}} \\
&\le \frac{1\cdot 1\cdot 3\cdot 5\cdots(2n-1)}{2^{n+1}\, n!}\left(\frac{c-x}{1+c}\right)^n\frac{|x|}{\sqrt{1+c}} \\
&= \frac{1\cdot 1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 2\cdot 4\cdot 6\cdots 2n}\left(\frac{c-x}{1+c}\right)^n\frac{|x|}{\sqrt{1+c}} \\
&= \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{3}{4}\cdot\frac{5}{6}\cdots\frac{2n-1}{2n}\left(\frac{c-x}{1+c}\right)^n\frac{|x|}{\sqrt{1+c}} \\
&\le \left(\frac{c-x}{1+c}\right)^n\frac{|x|}{\sqrt{1+c}}
\end{aligned}
\]

Notice that if \(-1 < x \le c\), then \(0 < 1+x \le 1+c\). Thus \(0 < \frac{1}{1+c} \le \frac{1}{1+x}\) and \(\frac{1}{\sqrt{1+c}} \le \frac{1}{\sqrt{1+x}}\). Thus we have

\[
0 \le \left|\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{n!}\cdot\frac{(x-c)^n\, x}{(1+c)^{n+\frac{1}{2}}}\right| \le \left(\frac{c-x}{1+c}\right)^n\frac{|x|}{\sqrt{1+x}} \tag{5.3.5}
\]

 Exercise 5.3.2
Suppose \(-1 < x \le c \le 0\) and consider the function \(g(c) = \frac{c-x}{1+c}\). Show that on \([x, 0]\), \(g\) is increasing, and use this to conclude that for \(-1 < x \le c \le 0\),

\[
\frac{c-x}{1+c} \le |x|
\]

Use this fact to finish the proof that the binomial series converges to \(\sqrt{1+x}\) for \(-1 < x < 0\).
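A numerical sanity check (my own code, not part of the exercise) that the Cauchy-remainder argument is believable: even at \(x = -0.9\), well beyond the \(-\frac{1}{2}\) barrier of the Lagrange argument, the binomial partial sums still close in on \(\sqrt{1+x}\).

```python
import math

def binom_partial(x, n):
    # partial sum of the binomial series for (1+x)^(1/2)
    total, coeff = 1.0, 1.0
    for k in range(1, n + 1):
        coeff *= (0.5 - (k - 1)) / k
        total += coeff * x**k
    return total

for n in (10, 50, 200):
    print(n, abs(math.sqrt(0.1) - binom_partial(-0.9, n)))
```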

The proofs of both the Lagrange form and the Cauchy form of the remainder for Taylor series made use
of two crucial facts about continuous functions.
First, we assumed the Extreme Value Theorem: Any continuous function on a closed bounded interval
assumes its maximum and minimum somewhere on the interval.

Second, we assumed that any continuous function satisfied the Intermediate Value Theorem: If a
continuous function takes on two different values, then it must take on any value between those two
values.

Figure 5.3.1 : Augustin Cauchy


Mathematicians in the late 1700’s and early 1800’s typically considered these facts to be intuitively
obvious. This was natural since our understanding of continuity at that time was, solely, intuitive.
Intuition is a useful tool, but as we have seen before it is also unreliable. For example consider the
following function.

\[
f(x) = \begin{cases} x\sin\left(\frac{1}{x}\right) & \text{if } x \ne 0 \\ 0 & \text{if } x = 0 \end{cases} \tag{5.3.6}
\]

Is this function continuous at 0? Near zero its graph looks like this:

Figure 5.3.2 : Graph of the function above.


but this graph must be taken with a grain of salt as \(\sin\left(\frac{1}{x}\right)\) oscillates infinitely often as \(x\) nears zero.

No matter what your guess may be, it is clear that it is hard to analyze such a function armed with only
an intuitive notion of continuity. We will revisit this example in the next chapter.
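A quick numeric peek (my own computation) supports the guess that \(f\) is continuous at \(0\): since \(|x\sin(1/x)| \le |x|\), the sampled values are squeezed to \(0\) even though \(\sin(1/x)\) itself never settles.

```python
import math

# The function from (5.3.6): x*sin(1/x) for x != 0, and 0 at x = 0.
def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

for k in range(1, 6):
    print(10.0**(-k), f(10.0**(-k)))
```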

As with convergence, continuity is more subtle than it first appears. We put convergence on solid ground
by providing a completely analytic definition in the previous chapter. What we need to do in the next
chapter is provide a completely rigorous definition for continuity.

5.E: Convergence of the Taylor Series- A “Tayl” of Three Remainders (Exercises)
Q1
Find the Integral form, Lagrange form, and Cauchy form of the remainder for Taylor series for the following functions expanded
about the given values of a .
a. \(f(x) = e^x,\ a = 0\)
b. \(f(x) = \sqrt{x},\ a = 1\)
c. \(f(x) = (1+x)^\alpha,\ a = 0\)
d. \(f(x) = \frac{1}{x},\ a = 3\)
e. \(f(x) = \ln x,\ a = 2\)
f. \(f(x) = \cos x,\ a = \pi\)

CHAPTER OVERVIEW

6: Continuity - What It Isn’t and What It Is


6.1: An Analytic Denition of Continuity
6.2: Sequences and Continuity
6.3: The Denition of the Limit of a Function
6.4: The Derivative - An Afterthought
6.E: Continuity - What It Isn’t and What It Is (Exercises)

Thumbnail: Cauchy around 1840. Lithography by Zéphirin Belliard after a painting by Jean Roller. (Public Domain).

6.1: An Analytic Denition of Continuity
 Learning Objectives
Explain continuity

Before the invention of calculus, the notion of continuity was treated intuitively if it was treated at all. At first pass, it seems a very simple idea based solidly in our experience of the real world. Standing on the bank we see a river flow past us continuously, not by tiny jerks. Even when the flow might seem at first to be discontinuous, as when it drops precipitously over a cliff, a closer examination shows that it really is not. As the water approaches the cliff it speeds up. When it finally goes over it accelerates very quickly but no matter how fast it goes it moves continuously, moving from here to there by occupying every point in between. This is continuous motion. It never disappears over there and instantaneously reappears over here. That would be discontinuous motion.

Similarly, a thrown stone flies continuously (and smoothly) from release point to landing point, passing through each point in its path.
But wait. If the stone passes through discrete points it must be doing so by teeny tiny little jerks, mustn’t it? Otherwise how would
it get from one point to the next? Is it possible that motion in the real world, much like motion in a movie, is really composed of
tiny jerks from one point to the next but that these tiny jerks are simply too small and too fast for our senses to detect?
If so, then the real world is more like the rational number line (Q) from Chapter 1 than the real number line (R). In that case,

motion really consists of jumping discretely over the “missing” points (like √2) as we move from here to there. That may seem
like a bizarre idea to you – it does to us as well – but the idea of continuous motion is equally bizarre. It’s just a little harder to see
why.
The real world will be what it is regardless of what we believe it to be, but fortunately in mathematics we are not constrained to live
in it. So we won’t even try. We will simply postulate that no such jerkiness exists; that all motion is continuous.
However we are constrained to live with the logical consequences of our assumptions, once they are made. These will lead us into
some very deep waters indeed.
The intuitive treatment of continuity was maintained throughout the 1700’s as it was not generally perceived that a truly rigorous
definition was necessary. Consider the following definition given by Euler in 1748.

 Euler's Definition of Continuity


A continuous curve is one such that its nature can be expressed by a single function of x. If a curve is of such a nature that for
its various parts ... different functions of x are required for its expression, ..., then we call such a curve discontinuous.

However, the complexities associated with Fourier series and the types of functions that they represented caused mathematicians in
the early 1800’s to rethink their notions of continuity. As we saw in Part II, the graph of the function defined by the Fourier series
\[
\frac{4}{\pi}\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}\cos\left((2k+1)\pi x\right) \tag{6.1.1}
\]

looked like this:

Figure 6.1.1: Graph of the function defined by the Fourier series.
This function went against Euler’s notion of what a continuous function should be. Here, an infinite sum of continuous cosine
curves provided a single expression which resulted in a “discontinuous” curve. But as we’ve seen this didn’t happen with power
series and an intuitive notion of continuity is inadequate to explain the difference. Even more perplexing is the following situation.
Intuitively, one would think that a continuous curve should have a tangent line at at least one point. It may have a number of jagged points to it, but it should be “smooth” somewhere. An example of this would be \(f(x) = x^{2/3}\). Its graph is given by

Figure 6.1.2: Graph of \(f(x) = x^{2/3}\).
This function is not differentiable at the origin but it is differentiable everywhere else. One could certainly come up with examples
of functions which fail to be differentiable at any number of points but, intuitively, it would be reasonable to expect that a
continuous function should be differentiable somewhere. We might conjecture the following:

 Conjecture
If \(f\) is continuous on an interval \(I\), then there is some \(a \in I\) such that \(f'(a)\) exists.

Figure 6.1.3 : Karl Weierstrass.
Surprisingly, in 1872, Karl Weierstrass showed that the above conjecture is FALSE. He did this by displaying the counterexample:

\[
f(x) = \sum_{n=0}^{\infty} b^n \cos(a^n \pi x) \tag{6.1.2}
\]

Weierstrass showed that if \(a\) is an odd integer, \(b \in (0, 1)\), and \(ab > 1 + \frac{3}{2}\pi\), then \(f\) is continuous everywhere, but is nowhere differentiable. Such a function is somewhat “fractal” in nature, and it is clear that a definition of continuity relying on intuition is inadequate to study it.

 Exercise 6.1.1
a. Given \(f(x) = \sum_{n=0}^{\infty} \left(\frac{1}{2}\right)^n \cos(a^n \pi x)\), what is the smallest value of \(a\) for which \(f\) satisfies Weierstrass' criterion to be continuous and nowhere differentiable?

b. Let \(f(x, N) = \sum_{n=0}^{N} \left(\frac{1}{2}\right)^n \cos(13^n \pi x)\) and use a computer algebra system to plot \(f(x, N)\) for \(N = 0, 1, 2, 3, 4, 10\) and \(x \in [0, 1]\).

c. Plot \(f(x, 10)\) for \(x \in [0, c]\), where \(c = 0.1, 0.01, 0.001, 0.0001, 0.00001\). Based upon what you see in parts b and c, why would we describe the function to be somewhat “fractal” in nature?
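In place of a computer algebra system, the partial sums from part b can at least be tabulated; the sketch below is my own code evaluating \(f(x, N)\) at a few sample points.

```python
import math

# Partial sums of the Weierstrass-type series from part b.
def f(x, N):
    return sum((1 / 2)**n * math.cos(13**n * math.pi * x) for n in range(N + 1))

for N in (0, 1, 4, 10):
    print(N, [round(f(x, N), 4) for x in (0.0, 0.25, 0.5)])
```

At \(x = 0\) every cosine equals \(1\), so \(f(0, N) = 2 - (1/2)^N\), a handy sanity check.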

Just as it was important to define convergence with a rigorous definition without appealing to intuition or geometric
representations, it is imperative that we define continuity in a rigorous fashion not relying on graphs.
The first appearance of a definition of continuity which did not rely on geometry or intuition was given in 1817 by Bernhard
Bolzano in a paper published in the Proceedings of the Prague Scientific Society entitled
Rein analytischer Beweis des Lehrsatzes dass zwieschen je zwey Werthen, die ein entgegengesetztes Resultat gewaehren,
wenigstens eine reele Wurzel der Gleichung liege (Purely Analytic Proof of the Theorem that Between Any Two Values that
Yield Results of Opposite Sign There Will be at Least One Real Root of the Equation).

Figure 6.1.4 : Bernhard Bolzano


From the title it should be clear that in this paper Bolzano is proving the Intermediate Value Theorem. To do this he needs a
completely analytic definition of continuity. The substance of Bolzano’s idea is that if f is continuous at a point a then f (x) should

be “close to” f (a) whenever x is “close enough to” a . More precisely, Bolzano said that f is continuous at a provided

|f (x) − f (a)| can be made smaller than any given quantity provided we make |x − a| sufficiently small.
The language Bolzano uses is very similar to the language Leibniz used when he postulated the existence of infinitesimally small
numbers. Leibniz said that infinitesimals are “smaller than any given quantity but not zero.” Bolzano says that “|f (x) − f (a)| can
be made smaller than any given quantity provided we make |x − a| sufficiently small.” But Bolzano stops short of saying that
|x − a| is infinitesimally small. Given a, we can choose x so that |x − a| is smaller than any real number we could name, say b, provided we name b first, but for any given choice of x, |x − a| and b are both still real numbers. Possibly very small real numbers to be sure, but real numbers nonetheless. Infinitesimals have no place in Bolzano’s construction.
to be sure, but real numbers nonetheless. Infinitesimals have no place in Bolzano’s construction.
Bolzano’s paper was not well known when Cauchy proposed a similar definition in his Cours d’analyse [1] of 1821 so it is usually
Cauchy who is credited with this definition, but even Cauchy’s definition is not quite tight enough for modern standards. It was
Karl Weierstrass in 1859 who finally gave the modern definition.

 Definition
We say that a function \(f\) is continuous at \(a\) provided that for any \(\varepsilon > 0\), there exists a \(\delta > 0\) such that if \(|x - a| < \delta\) then \(|f(x) - f(a)| < \varepsilon\).

Notice that the definition of continuity of a function is done point-by-point. A function can certainly be continuous at some points
while discontinuous at others. When we say that f is continuous on an interval, then we mean that it is continuous at every point of
that interval and, in theory, we would need to use the above definition to check continuity at each individual point.
Our definition fits the bill in that it does not rely on either intuition or graphs, but it is this very non-intuitiveness that makes it hard
to grasp. It usually takes some time to become comfortable with this definition, let alone use it to prove theorems such as the
Extreme Value Theorem and Intermediate Value Theorem. So let’s go slowly to develop a feel for it.
This definition spells out a completely black and white procedure: you give me a positive number ε , and I must be able to find a
positive number δ which satisfies a certain property. If I can always do that then the function is continuous at the point of interest.
This definition also makes very precise what we mean when we say that \(f(x)\) should be “close to” \(f(a)\) whenever \(x\) is “close enough to” \(a\). For example, intuitively we know that \(f(x) = x^2\) should be continuous at \(x = 2\). This means that we should be able to get \(x^2\) to within, say, \(\varepsilon = 0.1\) of \(4\) provided we make \(x\) close enough to \(2\). Specifically, we want \(3.9 < x^2 < 4.1\). This happens exactly when \(\sqrt{3.9} < x < \sqrt{4.1}\). Using the fact that \(\sqrt{3.9} < 1.98\) and \(2.02 < \sqrt{4.1}\), we can see that if we get \(x\) to within \(\delta = 0.02\) of \(2\), then \(\sqrt{3.9} < 1.98 < x < 2.02 < \sqrt{4.1}\) and so \(x^2\) will be within \(0.1\) of \(4\). This is very straightforward. What makes this situation more difficult is that we must be able to do this for any \(\varepsilon > 0\).
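The worked numbers check out computationally; this is my own sketch, sampling the interval densely rather than proving anything.

```python
# With delta = 0.02, every sampled x within delta of 2 keeps x**2 within
# epsilon = 0.1 of 4 (numeric check of the worked example above).
delta, epsilon = 0.02, 0.1
xs = [2 - delta + k * (2 * delta) / 1000 for k in range(1001)]
worst = max(abs(x**2 - 4) for x in xs)
print(worst)
```

The worst sampled error is about \(0.0804\), comfortably inside \(\varepsilon = 0.1\).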
Notice the similarity between this definition and the definition of convergence of a sequence. Both definitions have the challenge of an \(\varepsilon > 0\). In the definition of \(\lim_{n\to\infty} s_n = s\), we had to get \(s_n\) to within \(\varepsilon\) of \(s\) by making \(n\) large enough. For sequences, the challenge lies in making \(|s_n - s|\) sufficiently small. More precisely, given \(\varepsilon > 0\) we need to decide how large \(n\) should be to guarantee that \(|s_n - s| < \varepsilon\).


n

In our definition of continuity, we still need to make something small (namely |f (x) − f (a)| < ε ), only this time, we need to
determine how close x must be to a to ensure this will happen instead of determining how large n must be.
What makes f continuous at a is the arbitrary nature of ε (as long as it is positive). As ε becomes smaller, this forces f (x) to be
closer to f (a). That we can always find a positive distance δ to work is what we mean when we say that we can make f (x) as
close to f (a) as we wish, provided we get x close enough to a . The sequence of pictures below illustrates that the phrase “for any
ε > 0, there exists a δ > 0 such that if |x − a| < δ then |f(x) − f(a)| < ε” can be replaced by the equivalent formulation “for any ε > 0, there exists a δ > 0 such that if a − δ < x < a + δ then f(a) − ε < f(x) < f(a) + ε.” This could also be replaced
by the phrase “for any ε > 0 , there exists a δ > 0 such that if x ∈ (a − δ, a + δ) then f (x) ∈ (f (a) − ε, f (a) + ε ).” All of these
equivalent formulations convey the idea that we can get f (x) to within ε of f (a), provided we make x within δ of a , and we will
use whichever formulation suits our needs in a particular application.

6.1.4 https://math.libretexts.org/@go/page/7951
Figure 6.1.5: Function f is continuous at a.
The precision of the definition is what allows us to examine continuity without relying on pictures or vague notions such as
“nearness” or “getting closer to.” We will now consider some examples to illustrate this precision.

 Example 6.1.1:

Use the definition of continuity to show that f (x) = x is continuous at any point a .
If we were to draw the graph of this line, then you would likely say that this is obvious. The point behind the definition is that
we can back up your intuition in a rigorous manner.
Proof:
Let ε > 0 . Let δ = ε . If |x − a| < δ , then
|f (x) − f (a)| = |x − a| < ε (6.1.3)

Thus by the definition, f is continuous at a .

 Exercise 6.1.2

Use the definition of continuity to show that if m and b are fixed (but unspecified) real numbers then the function
f (x) = mx + b is continuous at every real number a .

 Example 6.1.2:

Use the definition of continuity to show that f(x) = x² is continuous at a = 0.

Proof:
Let ε > 0. Let δ = √ε. If |x − 0| < δ, then |x| < √ε. Thus

|x² − 0²| = |x|² < (√ε)² = ε        (6.1.4)

Thus by the definition, f is continuous at 0.
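The choice δ = √ε from this example can be spot-checked the same way; the following illustrative sketch (the particular ε values and sample sizes are arbitrary) verifies it for several tolerances.

```python
import math

# For f(x) = x**2 at a = 0, the proof takes delta = sqrt(eps):
# |x| < sqrt(eps) forces |x**2 - 0**2| = |x|**2 < eps.  Spot-check a few eps.
for eps in [1.0, 0.1, 0.01, 1e-6]:
    delta = math.sqrt(eps)
    # sample 999 points strictly inside (-delta, delta)
    xs = [-delta + 2 * delta * k / 1000 for k in range(1, 1000)]
    assert all(abs(x**2 - 0**2) < eps for x in xs)
print("delta = sqrt(eps) works for each eps tested")
```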

Notice that in these proofs, the challenge of an ε > 0 was first given. This is because the choice of δ must depend upon ε. Also notice that there was no explanation for our choice of δ. We just supplied it and showed that it worked. As long as δ > 0, this is all that is required. In point of fact, the δ we chose in each example was not the only choice that worked; any smaller δ would work as well.

 Exercise 6.1.3
a. Given a particular ε > 0 in the definition of continuity, show that if a particular δ_0 > 0 satisfies the definition, then any δ with 0 < δ < δ_0 will also work for this ε.
b. Show that if a δ can be found to satisfy the conditions of the definition of continuity for a particular ε_0 > 0, then this δ will also work for any ε with 0 < ε_0 < ε.

It wasn’t explicitly stated in the definition but when we say “if |x − a| < δ then |f (x) − f (a)| < ε ,” we should be restricting
ourselves to x values which are in the domain of the function f , otherwise f (x) doesn’t make sense. We didn’t put it in the
definition because that definition was complicated enough without this technicality. Also in the above examples, the functions were
defined everywhere so this was a moot point. We will continue with the convention that when we say “if |x − a| < δ then
|f (x) − f (a)| < ε ,” we will be restricting ourselves to x values which are in the domain of the function f . This will allow us to

examine continuity of functions not defined for all x without restating this restriction each time.

 Exercise 6.1.4

Use the definition of continuity to show that

f(x) = { √x       if x ≥ 0
       { −√(−x)   if x < 0        (6.1.5)

is continuous at a = 0.

 Exercise 6.1.5

Use the definition of continuity to show that f(x) = √x is continuous at a = 0. How is this problem different from Exercise 6.1.4? How is it similar?

Sometimes the δ that will work for a particular ε is fairly obvious to see, especially after you’ve gained some experience. This is
the case in the above examples (at least after looking back at the proofs). However, the task of finding a δ to work is usually not so
obvious and requires some scrapwork. This scrapwork is vital toward producing a δ , but again is not part of the polished proof.
This can be seen in the following example.

 Example 6.1.3:
Use the definition of continuity to prove that f(x) = √x is continuous at a = 1.

Scrapwork:
As before, the scrapwork for these problems often consists of simply working backwards. Specifically, given an ε > 0, we need to find a δ > 0 so that |√x − √1| < ε whenever |x − 1| < δ. We work backwards from what we want, keeping an eye on the fact that we can control the size of |x − 1|.

|√x − √1| = |(√x − 1)(√x + 1)/(√x + 1)| = |x − 1|/(√x + 1) < |x − 1|        (6.1.6)

This seems to suggest that we should make δ = ε. We’re now ready for the formal proof.
End of Scrapwork
Proof:
Let ε > 0. Let δ = ε. If |x − 1| < δ, then |x − 1| < ε, and so

|√x − √1| = |(√x − 1)(√x + 1)/(√x + 1)| = |x − 1|/(√x + 1) < |x − 1| < ε        (6.1.7)

Thus by definition, f(x) = √x is continuous at 1.
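Here too the choice δ = ε can be sanity-checked numerically; this illustrative sketch (the tolerances and sample sizes are arbitrary) mirrors the inequality |√x − 1| = |x − 1|/(√x + 1) ≤ |x − 1| from the scrapwork.

```python
import math

# Spot-check of the scrapwork: |sqrt(x) - 1| = |x - 1| / (sqrt(x) + 1) <= |x - 1|,
# so for f(x) = sqrt(x) at a = 1 the choice delta = eps works.
for eps in [0.5, 0.1, 1e-4]:
    delta = eps
    # sample 999 points strictly inside (1 - delta, 1 + delta)
    xs = [1 - delta + 2 * delta * k / 1000 for k in range(1, 1000)]
    assert all(abs(math.sqrt(x) - 1) < eps for x in xs)
print("delta = eps works for f(x) = sqrt(x) at a = 1")
```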

Bear in mind that someone reading the formal proof will not have seen the scrapwork, so the choice of δ might seem rather
mysterious. However, you are in no way bound to motivate this choice of δ and usually you should not, unless it is necessary for
the formal proof. All you have to do is find this δ and show that it works. Furthermore, to a trained reader, your ideas will come
through when you demonstrate that your choice of δ works.
Now reverse this last statement. As a trained reader, when you read the proof of a theorem it is your responsibility to find the
scrapwork, to see how the proof works and understand it fully.

Figure 6.1.6 : Paul Halmos.


As the renowned mathematical expositor Paul Halmos (1916-2006) said,
Don’t just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis
necessary? Is the converse true? What happens in the classical special case? What about the degenerate cases? Where does
the proof use the hypothesis?
This is the way to learn mathematics. It is really the only way.

 Exercise 6.1.6
Use the definition of continuity to show that f(x) = √x is continuous at any positive real number a.

 Exercise 6.1.7
a. Use a unit circle to show that for 0 ≤ θ < π/2, sin θ ≤ θ and (1 − cos θ) ≤ θ, and conclude that |sin θ| ≤ |θ| and |1 − cos θ| ≤ |θ| for −π/2 < θ < π/2.
b. Use the definition of continuity to prove that f(x) = sin x is continuous at any point a.

Hint for (b)

sin x = sin(x − a + a)

 Exercise 6.1.8
a. Use the definition of continuity to show that f(x) = e^x is continuous at a = 0.
b. Show that f(x) = e^x is continuous at any point a.

Hint for (b)

Rewrite e^x − e^a as e^{a+(x−a)} − e^a and use what you proved in part (a).

In the above problems, we used the definition of continuity to verify our intuition about the continuity of familiar functions. The
advantage of this analytic definition is that it can be applied when the function is not so intuitive. Consider, for example, the
function given at the end of the last chapter.

f(x) = { x sin(1/x)   if x ≠ 0
       { 0            if x = 0        (6.1.8)

Near zero, the graph of f (x) looks like this:

Figure 6.1.7: The graph of f(x).


As we mentioned in the previous chapter, since sin(1/x) oscillates infinitely often as x nears zero this graph must be viewed with a certain amount of suspicion. However our completely analytic definition of continuity shows that this function is, in fact, continuous at 0.

 Exercise 6.1.9
Use the definition of continuity to show that

f(x) = { x sin(1/x)   if x ≠ 0
       { 0            if x = 0

is continuous at 0.

Even more perplexing is the function defined by

D(x) = { x   if x is rational
       { 0   if x is irrational        (6.1.9)

To the naked eye, the graph of this function looks like the lines y = 0 and y = x . Of course, such a graph would not be the graph of
a function. Actually, both of these lines have holes in them. Wherever there is a point on one line there is a “hole” on the other.
Each of these holes is the width of a single point (that is, their “width” is zero!) so they are invisible to the naked eye (or even
magnified under the most powerful microscope available). This idea is illustrated in the following graph

Figure 6.1.8 : Graph of the function D(x) as defined above.


Can such a function so “full of holes” actually be continuous anywhere? It turns out that we can use our definition to show that this
function is, in fact, continuous at 0 and at no other point.
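This claim can be illustrated numerically, with one caveat: floating-point numbers cannot detect rationality, so the sketch below tracks it with a flag that is true by construction (1/n is rational, √2/n is irrational). The particular sequences are illustrative choices, anticipating the sequence approach of the next section.

```python
import math

# D(x) = x for rational x and 0 for irrational x.  Floating point cannot
# detect rationality, so each sample point carries a flag that is correct
# by construction: 1/n is rational, sqrt(2)/n is irrational.
def D(x, is_rational):
    return x if is_rational else 0.0

# At a = 0, both kinds of points give D(x_n) -> 0 = D(0):
assert abs(D(1 / 10**6, True)) < 1e-5          # rational points: D(x_n) = x_n -> 0
assert D(math.sqrt(2) / 10**6, False) == 0.0   # irrational points: D(x_n) = 0

# At a = 1, the two kinds of sequences disagree in the limit:
# 1 + 1/n (rational) gives D -> 1, while 1 + sqrt(2)/n (irrational) gives
# D -> 0, so D cannot be continuous at 1.
assert abs(D(1 + 1 / 10**6, True) - 1) < 1e-5
assert D(1 + math.sqrt(2) / 10**6, False) == 0.0
```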

 Exercise 6.1.10
a. Use the definition of continuity to show that the function D(x) = { x if x is rational; 0 if x is irrational } is continuous at 0.

b. Let a ≠ 0 . Use the definition of continuity to show that D is not continuous at a .

Hint for (b)

You might want to break this up into two cases where a is rational or irrational. Show that
no choice of δ > 0 will work for ε = |a| . Note that Theorem 1.1.2 of Chapter 1 will probably
help here.

Contributor
Eugene Boman (Pennsylvania State University) and Robert Rogers (SUNY Fredonia)

This page titled 6.1: An Analytic Denition of Continuity is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated
by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

6.2: Sequences and Continuity
 Learning Objectives
Further explanation of sequences and continuity

There is an alternative way to prove that the function

D(x) = { x   if x is rational
       { 0   if x is irrational        (6.2.1)

is not continuous at a ≠ 0 . We will examine this by looking at the relationship between our definitions of convergence and continuity. The two
ideas are actually quite closely connected, as illustrated by the following very useful theorem.

 Theorem 6.2.1
The function f is continuous at a if and only if f satisfies the following property:

∀ sequences (x_n), if lim_{n→∞} x_n = a then lim_{n→∞} f(x_n) = f(a)        (6.2.2)

Theorem 6.2.1 says that in order for f to be continuous, it is necessary and sufficient that any sequence (x_n) converging to a must force the sequence (f(x_n)) to converge to f(a). A picture of this situation is below though, as always, the formal proof will not rely on the diagram.

Figure 6.2.1 : A picture for the situation described in Theorem 6.2.1.


This theorem is especially useful for showing that a function f is not continuous at a point a; all we need to do is exhibit a sequence (x_n) converging to a such that the sequence (f(x_n)) does not converge to f(a). Let’s demonstrate this idea before we tackle the proof of Theorem 6.2.1.

 Example 6.2.1:

Use Theorem 6.2.1 to prove that


f(x) = { |x|/x   if x ≠ 0
       { 0       if x = 0

is not continuous at 0.
Proof:
First notice that f can be written as

f(x) = ⎧ 1    if x > 0
       ⎨ −1   if x < 0
       ⎩ 0    if x = 0

To show that f is not continuous at 0, all we need to do is create a single sequence (x_n) which converges to 0, but for which the sequence (f(x_n)) does not converge to f(0) = 0. For a function like this one, just about any sequence will do, but let’s use (1/n), just because it is an old familiar friend.
We have
lim_{n→∞} 1/n = 0

but

6.2.1 https://math.libretexts.org/@go/page/7952
lim_{n→∞} f(1/n) = lim_{n→∞} 1 = 1 ≠ 0 = f(0).

Thus by Theorem 6.2.1, f is not continuous at 0.
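The computation in this example can be mirrored in a few lines; the sketch below is illustrative only (the range of n is an arbitrary choice).

```python
# The sequence test in action: f(x) = |x|/x for x != 0 and f(0) = 0.
def f(x):
    return abs(x) / x if x != 0 else 0.0

# x_n = 1/n converges to 0, but f(x_n) = 1 for every n, so (f(x_n))
# converges to 1 != f(0) = 0: f is not continuous at 0.
values = [f(1 / n) for n in range(1, 100)]
assert all(v == 1.0 for v in values)
assert f(0) == 0.0
```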

 Exercise 6.2.1
Use Theorem 6.2.1 to show that
f(x) = { |x|/x   if x ≠ 0
       { a       if x = 0

is not continuous at 0, no matter what value a is.

 Exercise 6.2.2

Use Theorem 6.2.1 to show that

D(x) = { x   if x is rational
       { 0   if x is irrational

is not continuous at a ≠ 0.

 Exercise 6.2.3: The topologist’s sine curve

The function T(x) = sin(1/x) is often called the topologist’s sine curve. Whereas sin x has roots at nπ, n ∈ Z and oscillates infinitely often as x → ±∞, T has roots at 1/(nπ), n ∈ Z, n ≠ 0, and oscillates infinitely often as x approaches zero. A rendition of the graph follows.

Figure 6.2.2 : Graph of T (x) as defined above.


Notice that T is not even defined at x = 0 . We can extend T to be defined at 0 by simply choosing a value for T (0) :
T(x) = { sin(1/x)   if x ≠ 0
       { b          if x = 0

Use Theorem 6.2.1 to show that T is not continuous at 0, no matter what value is chosen for b .

Sketch of Proof:
We’ve seen how we can use Theorem 6.2.1, now we need to prove Theorem 6.2.1. The forward direction is fairly straightforward. So we assume that f is continuous at a and start with a sequence (x_n) which converges to a. What is left to show is that lim_{n→∞} f(x_n) = f(a). If you write down the definitions of f being continuous at a, lim_{n→∞} x_n = a, and lim_{n→∞} f(x_n) = f(a), you should be able to get from what you are assuming to what you want to conclude.

To prove the converse, it is convenient to prove its contrapositive. That is, we want to prove that if f is not continuous at a then we can construct a sequence (x_n) that converges to a but (f(x_n)) does not converge to f(a). First we need to recognize what it means for f to not be continuous at a. This says that somewhere there exists an ε > 0, such that no choice of δ > 0 will work for this. That is, for any such δ, there will exist x such that |x − a| < δ, but |f(x) − f(a)| ≥ ε. With this in mind, if δ = 1, then there will exist an x_1 such that |x_1 − a| < 1, but |f(x_1) − f(a)| ≥ ε. Similarly, if δ = 1/2, then there will exist an x_2 such that |x_2 − a| < 1/2, but |f(x_2) − f(a)| ≥ ε. If we continue in this fashion, we will create a sequence (x_n) such that |x_n − a| < 1/n, but |f(x_n) − f(a)| ≥ ε. This should do the trick.

 Exercise 6.2.4
Turn the ideas of the previous two paragraphs into a formal proof of Theorem 6.2.1.

Theorem 6.2.1 is a very useful result. It is a bridge between the ideas of convergence and continuity so it allows us to bring all of the theory we
developed in Chapter 4 to bear on continuity questions. For example consider the following.

 Theorem 6.2.2

Suppose f and g are both continuous at a . Then f + g and f ⋅ g are continuous at a .

 Proof

We could use the definition of continuity to prove Theorem 6.2.2, but Theorem 6.2.1 makes our job much easier. For example, to show that f + g is continuous, consider any sequence (x_n) which converges to a. Since f is continuous at a, then by Theorem 6.2.1, lim_{n→∞} f(x_n) = f(a). Likewise, since g is continuous at a, then lim_{n→∞} g(x_n) = g(a). By Theorem 4.2.1 of Chapter 4,

lim_{n→∞} (f + g)(x_n) = lim_{n→∞} (f(x_n) + g(x_n)) = lim_{n→∞} f(x_n) + lim_{n→∞} g(x_n) = f(a) + g(a) = (f + g)(a).

Thus by Theorem 6.2.1, f + g is continuous at a. The proof that f ⋅ g is continuous at a is similar.

 Exercise 6.2.5

Use Theorem 6.2.1 to show that if f and g are continuous at a , then f ⋅ g is continuous at a .

By employing Theorem 6.2.2 a finite number of times, we can see that a finite sum of continuous functions is continuous. That is, if f_1, f_2, ..., f_n are all continuous at a then ∑_{j=1}^{n} f_j is continuous at a. But what about an infinite sum? Specifically, suppose f_1, f_2, f_3, ... are all continuous at a. Consider the following argument.

Let ε > 0. Since f_j is continuous at a, then there exists δ_j > 0 such that if |x − a| < δ_j, then |f_j(x) − f_j(a)| < ε/2^j. Let δ = min(δ_1, δ_2, ...). If |x − a| < δ, then

|∑_{j=1}^{∞} f_j(x) − ∑_{j=1}^{∞} f_j(a)| = |∑_{j=1}^{∞} (f_j(x) − f_j(a))| ≤ ∑_{j=1}^{∞} |f_j(x) − f_j(a)| < ∑_{j=1}^{∞} ε/2^j = ε        (6.2.3)

Thus by definition, ∑_{j=1}^{∞} f_j is continuous at a.
This argument seems to say that an infinite sum of continuous functions must be continuous (provided it converges). However we know that the
Fourier series
(4/π) ∑_{k=0}^{∞} ((−1)^k/(2k + 1)) cos((2k + 1)πx)        (6.2.4)

is a counterexample to this, as it is an infinite sum of continuous functions which does not converge to a continuous function. Something
fundamental seems to have gone wrong here. Can you tell what it is?
This is a question we will spend considerable time addressing in Chapter 8 so if you don’t see the difficulty, don’t worry; you will. In the meantime
keep this problem tucked away in your consciousness. It is, as we said, fundamental.
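To see concretely what is at stake, one can evaluate partial sums of the series (6.2.4) near x = 1/2. In the illustrative sketch below (the term count N = 2000 is an arbitrary choice), each partial sum is a finite sum of continuous functions, hence continuous, yet the values settle near +1 on one side of x = 1/2 and near −1 on the other.

```python
import math

# Partial sums of the Fourier series (6.2.4):
# (4/pi) * sum_{k=0}^{N-1} (-1)**k * cos((2k+1) pi x) / (2k+1).
# Every partial sum is continuous, but the limiting values jump at x = 1/2.
def partial_sum(x, N=2000):
    return (4 / math.pi) * sum(
        (-1) ** k * math.cos((2 * k + 1) * math.pi * x) / (2 * k + 1)
        for k in range(N)
    )

assert abs(partial_sum(0.4) - 1) < 0.01   # just left of the jump at x = 1/2
assert abs(partial_sum(0.6) + 1) < 0.01   # just right of the jump
```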
Theorem 6.2.1 will also handle quotients of continuous functions. There is however a small detail that needs to be addressed first. Obviously,
when we consider the continuity of f/g at a, we need to assume that g(a) ≠ 0. However, g may be zero at other values. How do we know that when we choose our sequence (x_n) converging to a, g(x_n) is not zero? This would mess up our idea of using the corresponding theorem for sequences (Theorem 4.2.3 from Chapter 4). This can be handled with the following lemma.

 Lemma 6.2.1

If g is continuous at a and g(a) ≠ 0 , then there exists δ > 0 such that g(x) ≠ 0 for all x ∈ (a − δ, a + δ) .

 Exercise 6.2.6

Prove Lemma 6.2.1.

Hint
Consider the case where g(a) > 0. Use the definition with ε = g(a)/2. The picture is below; make it formal.

Figure 6.2.3: Picture for Lemma 6.2.1.


For the case g(a) < 0 , consider the function −g .

A consequence of this lemma is that if we start with a sequence (x_n) converging to a, then for n sufficiently large, g(x_n) ≠ 0.

 Exercise 6.2.7
Use Theorem 6.2.1, to prove that if f and g are continuous at a and g(a) ≠ 0 , then f /g is continuous at a .

 Theorem 6.2.3

Suppose f is continuous at a and g is continuous at f (a). Then g ∘ f is continuous at a . [Note that (g ∘ f )(x) = g(f (x)) .]

 Exercise 6.2.8

Prove Theorem 6.2.3.


a. Using the definition of continuity.
b. Using Theorem 6.2.1.

The above theorems allow us to build continuous functions from other continuous functions. For example, knowing that f(x) = x and g(x) = c are continuous, we can conclude that any polynomial,

p(x) = a_n x^n + a_{n−1} x^{n−1} + ⋯ + a_1 x + a_0        (6.2.5)

is continuous as well. We also know that functions such as f(x) = sin(e^x) are continuous without having to rely on the definition.

 Exercise 6.2.9

Show that each of the following is a continuous function at every point in its domain.
a. Any polynomial.
b. Any rational function. (A rational function is defined to be a ratio of polynomials.)
c. cos x
d. The other trig functions: tan(x), cot(x), sec(x), csc(x)

 Exercise 6.2.10
What allows us to conclude that f(x) = sin(e^x) is continuous at any point a without referring back to the definition of continuity?

Theorem 6.2.1 can also be used to study the convergence of sequences. For example, since f(x) = e^x is continuous at any point and lim_{n→∞} (n+1)/n = 1, then lim_{n→∞} e^{(n+1)/n} = e. This also illustrates a certain way of thinking about continuous functions. They are the ones where we can “commute” the function and a limit of a sequence. Specifically, if f is continuous at a and lim_{n→∞} x_n = a, then lim_{n→∞} f(x_n) = f(a) = f(lim_{n→∞} x_n).

 Exercise 6.2.11

Compute the following limits. Be sure to point out how continuity is involved.

a. lim_{n→∞} sin(nπ/(2n + 1))
b. lim_{n→∞} √(n/(n² + 1))
c. lim_{n→∞} e^{sin(1/n)}

Having this rigorous formulation of continuity is necessary for proving the Extreme Value Theorem and the Mean Value Theorem. However there
is one more piece of the puzzle to address before we can prove these theorems.
We will do this in the next chapter, but before we go on it is time to define a fundamental concept that was probably one of the first you learned in
calculus: limits.

This page titled 6.2: Sequences and Continuity is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene Boman and
Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon
request.

6.3: The Denition of the Limit of a Function
 Learning Objectives
Explain the Limit of a function

Since these days the limit concept is generally regarded as the starting point for calculus, you might think it is a little strange that
we’ve chosen to talk about continuity first. But historically, the formal definition of a limit came after the formal definition of
continuity. In some ways, the limit concept was part of a unification of all the ideas of calculus that were studied previously and,
subsequently, it became the basis for all ideas in calculus. For this reason it is logical to make it the first topic covered in a calculus
course.
To be sure, limits were always lurking in the background. In his attempts to justify his calculations, Newton used what he called his
doctrine of “Ultimate Ratios.” Specifically the ratio
((x + h)² − x²)/h = (2xh + h²)/h = 2x + h        (6.3.1)

becomes the ultimate ratio 2x at the last instant of time before h - an “evanescent quantity” - vanishes. Similarly Leibniz’s
“infinitely small” differentials dx and dy can be seen as an attempt to get “arbitrarily close” to x and y , respectively. This is the
idea at the heart of calculus: to get arbitrarily close to, say, x without actually reaching it.
As we saw in Chapter 3, Lagrange tried to avoid the entire issue of “arbitrary closesness,” both in the limit and differential forms
when, in 1797, he attempted to found calculus on infinite series.
Although Lagrange’s efforts failed, they set the stage for Cauchy to provide a definition of derivative which in turn relied on his
precise formulation of a limit. Consider the following example: to determine the slope of the tangent line (derivative) of
f(x) = sin x at x = 0. We consider the graph of the difference quotient D(x) = (sin x)/x.

Figure 6.3.1 : Slope of the tangent line (derivative) of f (x) = sin x .


From the graph, it appears that D(0) = 1 but we must be careful. D(0) doesn’t even exist! Somehow we must convey the idea that
D(x) will approach 1 as x approaches 0 , even though the function is not defined at 0 . Cauchy’s idea was that the limit of D(x)

would equal 1 because we can make D(x) differ from 1 by as little as we wish.
Karl Weierstrass made these ideas precise in his lectures on analysis at the University of Berlin (1859-60) and provided us with our
modern formulation.

 Definition

We say lim_{x→a} f(x) = L provided that for each ε > 0, there exists δ > 0 such that if 0 < |x − a| < δ then |f(x) − L| < ε.

Before we delve into this, notice that it is very similar to the definition of the continuity of f (x) at x = a . In fact we can readily see
that f is continuous at x = a if and only if lim_{x→a} f(x) = f(a).

There are two differences between this definition and the definition of continuity and they are related. The first is that we replace
the value f (a) with L. This is because the function may not be defined at a . In a sense the limiting value L is the value f would

6.3.1 https://math.libretexts.org/@go/page/7953
have if it were defined and continuous at a . The second is that we have replaced
|x − a| < δ (6.3.2)

with
0 < |x − a| < δ (6.3.3)

Again, since f needn’t be defined at a , we will not even consider what happens when x =a . This is the only purpose for this
change.
As with the definition of the limit of a sequence, this definition does not determine what L is, it only verifies that your guess for the
value of the limit is correct.
Finally, a few comments on the differences and similiarities between this limit and the limit of a sequence are in order, if for no
other reason than because we use the same notation (lim) for both.
When we were working with sequences in Chapter 4 and wrote things like lim_{n→∞} a_n we were thinking of n as an integer that got bigger and bigger. To put that more mathematically, the limit parameter n was taken from the set of positive integers, or n ∈ N. For both continuity and the limit of a function we write things like lim_{x→a} f(x) and think of x as a variable that gets arbitrarily close to the number a. Again, to be more mathematical in our language we would say that the limit parameter x is taken from the ... Well, actually, this is interesting isn’t it? Do we need to take x from Q or from R? The requirement in both cases is simply that we be able to choose x arbitrarily close to a. From Theorem 1.1.2 of Chapter 1 we see that this is possible whether x is rational or not, so it seems either will work. This leads to the paradoxical sounding conclusion that we do not need a continuum (R) to have continuity. This seems strange.
Before we look at the above example, let’s look at some algebraic examples to see the definition in use.

 Example 6.3.1:
Consider the function D(x) = (x² − 1)/(x − 1), x ≠ 1. You probably recognize this as the difference quotient used to compute the derivative of f(x) = x² at x = 1, so we strongly suspect that lim_{x→1} (x² − 1)/(x − 1) = 2. Just as when we were dealing with limits of sequences, we should be able to use the definition to verify this. And as before, we will start with some scrapwork.

Scrapwork:
Let ε > 0. We wish to find a δ > 0 such that if 0 < |x − 1| < δ then |(x² − 1)/(x − 1) − 2| < ε. With this in mind, we perform the following calculations

|(x² − 1)/(x − 1) − 2| = |(x + 1) − 2| = |x − 1|

Now we have a handle on δ that will work in the definition and we’ll give the formal proof that

lim_{x→1} (x² − 1)/(x − 1) = 2

End of Scrapwork.

Proof:
Let ε > 0 and let δ = ε. If 0 < |x − 1| < δ, then

|(x² − 1)/(x − 1) − 2| = |(x + 1) − 2| = |x − 1| < δ = ε

As in our previous work with sequences and continuity, notice that the scrapwork is not part of the formal proof (though it was
necessary to determine an appropriate δ ). Also, notice that 0 < |x − 1| was not really used except to ensure that x ≠ 1 .

 Exercise 6.3.1

Use the definition of a limit to verify that


lim_{x→a} (x² − a²)/(x − a) = 2a.        (6.3.4)

 Exercise 6.3.2

Use the definition of a limit to verify each of the following limits.


a. lim_{x→1} (x³ − 1)/(x − 1) = 3
b. lim_{x→1} (√x − 1)/(x − 1) = 1/2

Hint a

|(x³ − 1)/(x − 1) − 3| = |x² + x + 1 − 3|
≤ |x² − 1| + |x − 1|
= |((x − 1) + 1)² − 1| + |x − 1|
= |(x − 1)² + 2(x − 1)| + |x − 1|
≤ |x − 1|² + 3|x − 1|

Hint b

|(√x − 1)/(x − 1) − 1/2| = |1/(√x + 1) − 1/2|
= |(2 − (√x + 1))/(2(√x + 1))|
= |(1 − x)/(2(1 + √x)²)|
≤ (1/2)|x − 1|

Let’s go back to the original problem: to show that lim_{x→0} (sin x)/x = 1.

While rigorous, our definition of continuity is quite cumbersome. We really need to develop some tools we can use to show
continuity rigorously without having to refer directly to the definition. We have already seen in Theorem 6.2.1 one way to do this.
Here is another. The key is the observation we made after the definition of a limit:
f is continuous at x = a if and only if lim_{x→a} f(x) = f(a)        (6.3.5)

Read another way, we could say that lim_{x→a} f(x) = L provided that if we redefine f(a) = L (or define f(a) = L in the case where f(a) is not defined) then f becomes continuous at a. This allows us to use all of the machinery we proved about continuous
functions and limits of sequences.

For example, the following corollary to Theorem 6.2.1 comes virtually for free once we’ve made the observation above.

 Corollary 6.3.1

lim_{x→a} f(x) = L if and only if f satisfies the following property:

∀ sequences (x_n), x_n ≠ a, if lim_{n→∞} x_n = a then lim_{n→∞} f(x_n) = L        (6.3.6)

Armed with this, we can prove the following familiar limit theorems from calculus.

 Theorem 6.3.1

Suppose lim_{x→a} f(x) = L and lim_{x→a} g(x) = M, then

a. lim_{x→a} (f(x) + g(x)) = L + M
b. lim_{x→a} (f(x) ⋅ g(x)) = L ⋅ M
c. lim_{x→a} (f(x)/g(x)) = L/M, provided M ≠ 0 and g(x) ≠ 0 for x sufficiently close to a (but not equal to a).

We will prove part (a) to give you a feel for this and let you prove parts (b) and (c).

 Proof

Let (x_n) be a sequence such that x_n ≠ a and lim_{n→∞} x_n = a. Since lim_{x→a} f(x) = L and lim_{x→a} g(x) = M we see that lim_{n→∞} f(x_n) = L and lim_{n→∞} g(x_n) = M. By Theorem 4.2.1 of Chapter 4, we have lim_{n→∞} (f(x_n) + g(x_n)) = L + M. Since (x_n) was an arbitrary sequence with x_n ≠ a and lim_{n→∞} x_n = a, we have

lim_{x→a} (f(x) + g(x)) = L + M.

 Exercise 6.3.3

Prove parts (b) and (c) of Theorem 6.3.1.

More in line with our current needs, we have a reformulation of the Squeeze Theorem.

 Theorem 6.3.2: Squeeze Theorem for functions

Suppose f (x) ≤ g(x) ≤ h(x) , for x sufficiently close to a (but not equal to a ). If
lim_{x→a} f(x) = L = lim_{x→a} h(x)        (6.3.7)

then also

lim_{x→a} g(x) = L.        (6.3.8)

 Exercise 6.3.4
Prove Theorem 6.3.2.

Hint

Use the Squeeze Theorem for sequences (Theorem 4.2.4) from Chapter 4.

Returning to lim_{x→0} (sin x)/x we’ll see that the Squeeze Theorem is just what we need. First notice that since D(x) = (sin x)/x is an even function, we only need to focus on x > 0 in our inequalities. Consider the unit circle.

Figure 6.3.2 : The Unit circle.

 Exercise 6.3.5
Use the fact that area(ΔOAC) < area(sector OAC) < area(ΔOAB) to show that if 0 < x < π/2, then cos x < (sin x)/x < 1. Use the fact that all of these functions are even to extend the inequality for −π/2 < x < 0 and use the Squeeze Theorem to show lim_{x→0} (sin x)/x = 1.
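The squeeze can also be watched numerically; in the illustrative sketch below the sample points are arbitrary choices (stopping at x = 10⁻⁶ to keep the strict inequalities visible in floating point).

```python
import math

# Numerical view of the squeeze cos(x) < sin(x)/x < 1 for 0 < x < pi/2,
# which (together with evenness) forces sin(x)/x -> 1 as x -> 0.
for x in [10.0 ** (-k) for k in range(1, 7)]:
    ratio = math.sin(x) / x
    assert math.cos(x) < ratio < 1
assert abs(math.sin(1e-6) / 1e-6 - 1) < 1e-9
```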

This page titled 6.3: The Denition of the Limit of a Function is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

6.4: The Derivative - An Afterthought
 Learning Objectives
Explain the derivative

No, the derivative isn’t really an afterthought. Along with the integral it is, in fact, one of the most powerful and useful
mathematical objects ever devised and we’ve been working very hard to provide a solid, rigorous foundation for it. In that sense it
is a primary focus of our investigations.
On the other hand, now that we have built up all of the machinery we need to define and explore the concept of the derivative it
will appear rather pedestrian alongside ideas like the convergence of power series, Fourier series, and the bizarre properties of Q
and R.
You spent an entire semester learning about the properties of the derivative and how to use them to explore the properties of
functions so we will not repeat that effort here. Instead we will define it formally in terms of the ideas and techniques we’ve
developed thus far.

 The Derivative
Given a function f (x) defined on an interval (a, b) we define
f′(x) = lim_{h→0} (f(x + h) − f(x))/h        (6.4.1)

There are a few fairly obvious facts about this definition which are nevertheless worth noticing explicitly:
1. The derivative is defined at a point. If it is defined at every point in an interval (a, b) then we say that the derivative exists at
every point on the interval.
2. Since it is defined at a point it is at least theoretically possible for a function to be differentiable at a single point in its entire
domain.
3. Since it is defined as a limit and not all limits exist, functions are not necessarily differentiable.
4. Since it is defined as a limit, Corollary 6.3.1 applies. That is, f'(x) exists if and only if for all sequences (h_n) with h_n ≠ 0, if lim_{n→∞} h_n = 0 then

f'(x) = lim_{n→∞} [f(x + h_n) − f(x)]/h_n   (6.4.2)

Since lim_{n→∞} h_n = 0 this could also be written as

f'(x) = lim_{h_n→0} [f(x + h_n) − f(x)]/h_n   (6.4.3)

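To see the sequential characterization at work numerically, here is a brief sketch (our own illustration, not from the text; the function f(x) = x² and the point x = 1.5 are arbitrary choices):

```python
def f(x):
    return x ** 2  # an arbitrary smooth function; its derivative is 2x

x = 1.5
# Difference quotients along the sequence h_n = 1/n, as in Equation 6.4.2
quotients = [(f(x + 1 / n) - f(x)) / (1 / n) for n in range(1, 10001)]
print(quotients[-1])  # approaches f'(1.5) = 3 as n grows
```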
 theorem 6.4.1: Differentiability Implies Continuity


If f is differentiable at a point c then f is continuous at c as well.

 Exercise 6.4.1
Prove Theorem 6.4.1.

As we mentioned, the derivative is an extraordinarily useful mathematical tool but it is not our intention to learn to use it here. Our
purpose here is to define it rigorously (done) and to show that our formal definition does in fact recover the useful properties you
came to know and love in your calculus course.
The first such property is known as Fermat’s Theorem.

 theorem 6.4.2: Fermat’s Theorem

Suppose f is differentiable in some interval (a, b) containing c. If f(c) ≥ f(x) for every x in (a, b), then f'(c) = 0.

Proof:
Since f'(c) exists we know that if (h_n)_{n=1}^∞ converges to zero then the sequence a_n = [f(c + h_n) − f(c)]/h_n converges to f'(c). The proof consists of showing that f'(c) ≤ 0 and that f'(c) ≥ 0, from which we conclude that f'(c) = 0. We will only show the first part. The second is left as an exercise.

Claim: f'(c) ≤ 0.

Let n_0 be sufficiently large that 1/n_0 < b − c and take (h_n) = (1/n)_{n=n_0}^∞. Then f(c + 1/n) − f(c) ≤ 0 and 1/n > 0, so that

[f(c + h_n) − f(c)]/h_n ≤ 0, ∀n = n_0, n_0 + 1, ⋯   (6.4.4)

Therefore

f'(c) = lim_{h_n→0} [f(c + h_n) − f(c)]/h_n ≤ 0   (6.4.5)

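As a numerical sanity check (our own example, not from the text): for f(x) = −(x − 1)², which has its maximum on (0, 2) at c = 1, every forward difference quotient with h_n = 1/n is nonpositive, and they squeeze up to 0, mirroring the claim f'(c) ≤ 0:

```python
def f(x):
    return -(x - 1) ** 2   # illustrative choice: maximum on (0, 2) at c = 1

c = 1.0
# Forward difference quotients along h_n = 1/n, as in the proof above
q = [(f(c + 1 / n) - f(c)) / (1 / n) for n in range(1, 100001)]
print(q[-1])   # nonpositive throughout, tending to f'(c) = 0
```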
 Exercise 6.4.2

Show that f'(c) ≥ 0 and conclude that f'(c) = 0.

 Exercise 6.4.3

Show that if f(c) ≤ f(x) for all x in some interval (a, b), then f'(c) = 0 too.

Many of the most important properties of the derivative follow from what is called the Mean Value Theorem (MVT) which we now
state.

 Theorem 6.4.3: The Mean Value Theorem

Suppose f' exists for every x ∈ (a, b) and f is continuous on [a, b]. Then there is a real number c ∈ (a, b) such that

f'(c) = [f(b) − f(a)]/(b − a)   (6.4.6)

However, it would be difficult to prove the MVT right now. So we will first state and prove Rolle’s Theorem, which can be seen as
a special case of the MVT. The proof of the MVT will then follow easily.
Michel Rolle first stated the following theorem in 1691. Given this date and the nature of the theorem it would be reasonable to
suppose that Rolle was one of the early developers of calculus but this is not so. In fact, Rolle was disdainful of both Newton and
Leibniz’s versions of calculus, once deriding them as a collection of “ingenious fallacies.” It is a bit ironic that his theorem is so
fundamental to the modern development of the calculus he ridiculed.

 theorem 6.4.4: Rolle’s Theorem

Suppose f' exists for every x ∈ (a, b), f is continuous on [a, b], and

f(a) = f(b)   (6.4.7)

Then there is a real number c ∈ (a, b) such that

f'(c) = 0   (6.4.8)

Proof:

Since f is continuous on [a, b] we see, by the Extreme Value Theorem, that f has both a maximum and a minimum on [a, b]. Denote the maximum by M and the minimum by m. There are several cases:

Case 1: f(a) = f(b) = M = m. In this case f(x) is constant (why?). Therefore f'(x) = 0 for every x ∈ (a, b).

Case 2: f(a) = f(b) = M ≠ m. In this case there is a real number c ∈ (a, b) such that f(c) is a local minimum. By Fermat’s Theorem, f'(c) = 0.

Case 3: f(a) = f(b) = m ≠ M. In this case there is a real number c ∈ (a, b) such that f(c) is a local maximum. By Fermat’s Theorem, f'(c) = 0.

Case 4: f(a) = f(b) is neither a maximum nor a minimum. In this case there is a real number c_1 ∈ (a, b) such that f(c_1) is a local maximum, and a real number c_2 ∈ (a, b) such that f(c_2) is a local minimum. By Fermat’s Theorem, f'(c_1) = f'(c_2) = 0.

With Rolle’s Theorem in hand we can prove the MVT, which is really a corollary to Rolle’s Theorem or, more precisely, a generalization of Rolle’s Theorem. To prove it we only need to find the right function to apply Rolle’s Theorem to. Figure 6.4.1 below shows a function, f(x), cut by a secant line, L(x), from (a, f(a)) to (b, f(b)).

Figure 6.4.1: Applying Rolle’s Theorem.

The vertical difference from f(x) to the secant line, indicated by φ(x) in the figure, should do the trick. You take it from there.
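In symbols, one natural candidate (a sketch of the standard construction; the text leaves the details to you) is:

```latex
% Secant line through (a, f(a)) and (b, f(b)):
L(x) = f(a) + \frac{f(b) - f(a)}{b - a}(x - a)
% Vertical difference between f and the secant line:
\varphi(x) = f(x) - L(x)
% Since \varphi(a) = \varphi(b) = 0, Rolle's Theorem applies to \varphi.
```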

 Exercise 6.4.4

Prove the Mean Value Theorem.

The Mean Value Theorem is extraordinarily useful. Almost all of the properties of the derivative that you used in calculus follow
more or less easily from it. For example the following is true.

 corollary 6.4.1

If f'(x) > 0 for every x in the interval (a, b), then for every c, d ∈ (a, b) where d > c we have

f(d) > f(c)   (6.4.9)

That is, f is increasing on (a, b).

Proof:
Suppose c and d are as described in the corollary. Then by the Mean Value Theorem there is some number, say α ∈ (c, d) ⊆ (a, b), such that

f'(α) = [f(d) − f(c)]/(d − c)   (6.4.10)

Since f'(α) > 0 and d − c > 0, we have f(d) − f(c) > 0, or f(d) > f(c).

 Exercise 6.4.5

Show that if f'(x) < 0 for every x in the interval (a, b), then f is decreasing on (a, b).

 Corollary 6.4.2

Suppose f is differentiable on some interval (a, b), f' is continuous on (a, b), and f'(c) > 0 for some c ∈ (a, b). Then there is an interval, I ⊂ (a, b), containing c such that for every x, y in I where x ≥ y, f(x) ≥ f(y).

 Exercise 6.4.6

Prove Corollary 6.4.2.

 Exercise 6.4.7

Show that if f is differentiable on some interval (a, b) and f'(c) < 0 for some c ∈ (a, b), then there is an interval, I ⊂ (a, b), containing c such that for every x, y in I where x ≤ y, f(x) ≤ f(y).


6.E: Continuity - What It Isn’t and What It Is (Exercises)
Q1
Use the definition of continuity to prove that the constant function g(x) = c is continuous at any point a.

Q2
a. Use the definition of continuity to prove that ln x is continuous at 1. [Hint: You may want to use the fact
|ln x| < ε ⇔ −ε < ln x < ε to find a δ .]

b. Use part (a) to prove that ln x is continuous at any positive real number a . [Hint: ln(x) = ln(x/a) + ln(a) . This is a
combination of functions which are continuous at a . Be sure to explain how you know that ln(x/a) is continuous at a .]

Q3
Write a formal definition of the statement “f is not continuous at a,” and use it to prove that the function

f(x) = { x if x ≠ 1
       { 0 if x = 1

is not continuous at a = 1.


CHAPTER OVERVIEW

7: Intermediate and Extreme Values


7.1: Completeness of the Real Number System
7.2: Proof of the Intermediate Value Theorem
7.3: The Bolzano-Weierstrass Theorem
7.4: The Supremum and the Extreme Value Theorem
7.E: Intermediate and Extreme Values (Exercises)

Thumbnail: Bernard Bolzano. (Public Domain).


7.1: Completeness of the Real Number System
 Learning Objectives
Explain the Real number system

Recall that in deriving the Lagrange and Cauchy forms of the remainder for Taylor series, we made use of the Extreme Value
Theorem (EVT) and Intermediate Value Theorem (IVT). In Chapter 6, we produced an analytic definition of continuity that we can
use to prove these theorems. To provide the rest of the necessary tools we need to explore the make-up of the real number system.
To illustrate what we mean, suppose that we only used the rational number system. We could still use our definition of continuity and could still consider continuous functions such as f(x) = x². Notice that 2 is a value that lies between f(1) = 1 and f(2) = 4.

Figure 7.1.1: Graph of the function f(x) = x².

The IVT says that somewhere between 1 and 2, f must take on the value 2. That is, there must exist some number c ∈ [1, 2] such that f(c) = 2. You might say, “Big deal! Everyone knows c = √2 works.”

However, we are only working with rational numbers and √2 is not rational. As we saw in Chapter 1, the rational number system has holes in it, whereas the real number system doesn’t. Again, “Big deal! Let’s just say that the real number system contains (square) roots.”

This sounds reasonable and it actually works for square roots, but consider the function f(x) = x − cos x. We know this is a continuous function. We also know that f(0) = −1 and f(π/2) = π/2. According to the IVT, there should be some number c ∈ [0, π/2] where f(c) = 0. The graph is below.

Figure 7.1.2 : Graph of f (x) = x − cos x .


The situation is not as transparent as before. What would this mysterious c be where the curve crosses the x axis? Somehow we
need to convey the idea that the real number system is a continuum. That is, it has no “holes” in it.

How about this? Why don’t we just say that it has no holes in it? Sometimes the simple answer works best! But not in this case.
How are we going to formulate a rigorous proof based on this statement? Just like with convergence and continuity, what we need
is a rigorous way to convey this idea that the real number system does not have any holes, that it is complete.
We will see that there are several different, but equivalent ways to convey this notion of completeness. We will explore some of
them in this chapter. For now we adopt the following as our Completeness Axiom for the real number system.

Nested Interval Property of the Real Number System (NIP)


Suppose we have two sequences of real numbers (x_n) and (y_n) satisfying the following conditions:
1. x_1 ≤ x_2 ≤ x_3 ≤ ... [(x_n) is non-decreasing]
2. y_1 ≥ y_2 ≥ y_3 ≥ ... [(y_n) is non-increasing]
3. ∀n, x_n ≤ y_n
4. lim_{n→∞}(y_n − x_n) = 0

Then there exists a unique number c such that x_n ≤ c ≤ y_n for all n. Geometrically, we have the following situation.

Figure 7.1.3: Number line representation of x_n and y_n.

Notice that we have two sequences (x_n) and (y_n), one increasing (really non-decreasing) and one decreasing (non-increasing). These sequences do not pass each other. In fact, the following is true:

 Exercise 7.1.1

Let (x_n), (y_n) be sequences as in the NIP. Show that for all n, m ∈ N, x_n ≤ y_m.

They are also coming together in the sense that lim_{n→∞}(y_n − x_n) = 0. The NIP says that in this case there is a unique real number c in the middle of all of this [x_n ≤ c ≤ y_n for all n].

Figure 7.1.4: Number line representation of c.


If there were no such c then there would be a hole where these two sequences come together. The NIP guarantees that there is no such hole. We do not need to prove this since an axiom is, by definition, a self-evident truth. We are taking it on faith that the real number system obeys this law. The next problem shows that the completeness property distinguishes the real number system from the rational number system.

 Exercise 7.1.2
a. Find two sequences of rational numbers (x_n) and (y_n) which satisfy properties 1-4 of the NIP and such that there is no rational number c satisfying the conclusion of the NIP.

Hint
Consider the decimal expansion of an irrational number.

b. Find two sequences of rational numbers (x_n) and (y_n) which satisfy properties 1-4 of the NIP and such that there is a rational number c satisfying the conclusion of the NIP.

You might find the name Nested Interval Property to be somewhat curious. One way to think about this property is to consider that we have a sequence of “nested closed intervals” [x_1, y_1] ⊇ [x_2, y_2] ⊇ [x_3, y_3] ⊇ ⋯ whose lengths y_n − x_n are “shrinking to 0.” The conclusion is that the intersection of these intervals is non-empty and, in fact, consists of a single point. That is, ⋂_{n=1}^∞ [x_n, y_n] = {c}.

 theorem 7.1.1

Suppose that we have two sequences (x_n) and (y_n) satisfying all of the assumptions of the Nested Interval Property. If c is the unique number such that x_n ≤ c ≤ y_n for all n, then lim_{n→∞} x_n = c and lim_{n→∞} y_n = c.

 Exercise 7.1.3

Prove Theorem 7.1.1.

To illustrate the idea that the NIP “plugs the holes” in the real line, we will prove the existence of square roots of nonnegative real
numbers.

 theorem 7.1.2

Suppose a ∈ R, a ≥ 0. There exists a real number c ≥ 0 such that c² = a.

Notice that we can’t just say, “Let c = √a,” since the idea is to show that this square root exists. In fact, throughout this proof, we cannot really use a square root symbol as we haven’t yet proved that they (square roots) exist. We will give the idea behind the proof as it illustrates how the NIP is used.
Sketch of Proof:
Our strategy is to construct two sequences which will “narrow in” on the number c that we seek. With that in mind, we need to find a number x_1 such that x_1² ≤ a and a number y_1 such that y_1² ≥ a. (Remember that we can’t use the square root symbol yet.) There are many possibilities, but how about x_1 = 0 and y_1 = a + 1? You can check that these will satisfy x_1² ≤ a ≤ y_1². Furthermore, x_1 ≤ y_1. This is the starting point.


The technique we will employ is often called a bisection technique, and is a useful way to set ourselves up for applying the NIP. Let m_1 be the midpoint of the interval [x_1, y_1]. Then either we have m_1² ≤ a or m_1² ≥ a. In the case m_1² ≤ a, we really want m_1 to take the place of x_1 since it is larger than x_1, but still represents an underestimate for what would be the square root of a. This thinking prompts the following move. If m_1² ≤ a, we will relabel things by letting x_2 = m_1 and y_2 = y_1. The situation looks like this on the number line.

Figure 7.1.5: Number line representation.

In the other case where a ≤ m_1², we will relabel things by letting x_2 = x_1 and y_2 = m_1. The situation looks like this on the number line.

Figure 7.1.6: Number line representation.

In either case, we’ve reduced the length of the interval where the square root lies to half the size it was before. Stated in more specific terms, in either case we have the same results:

x_1 ≤ x_2 ≤ y_2 ≤ y_1;  x_1² ≤ a ≤ y_1²;  x_2² ≤ a ≤ y_2²   (7.1.1)

and

y_2 − x_2 = (1/2)(y_1 − x_1)   (7.1.2)

Now we play the same game, but instead we start with the interval [x_2, y_2]. Let m_2 be the midpoint of [x_2, y_2]. Then we have m_2² ≤ a or m_2² ≥ a. If m_2² ≤ a, we relabel x_3 = m_2 and y_3 = y_2. If a ≤ m_2², we relabel x_3 = x_2 and y_3 = m_2. In either case, we end up with

x_1 ≤ x_2 ≤ x_3 ≤ y_3 ≤ y_2 ≤ y_1;  x_1² ≤ a ≤ y_1²;  x_2² ≤ a ≤ y_2²;  x_3² ≤ a ≤ y_3²   (7.1.3)

and

y_3 − x_3 = (1/2)(y_2 − x_2) = (1/2²)(y_1 − x_1)   (7.1.4)

Continuing in this manner, we will produce two sequences, (x_n) and (y_n), satisfying the following conditions:
1. x_1 ≤ x_2 ≤ x_3 ≤ ...
2. y_1 ≥ y_2 ≥ y_3 ≥ ...
3. ∀n, x_n ≤ y_n
4. lim_{n→∞}(y_n − x_n) = lim_{n→∞} (1/2^{n−1})(y_1 − x_1) = 0
5. These sequences also satisfy the following property:

∀n, x_n² ≤ a ≤ y_n²   (7.1.5)

Properties 1-4 tell us that (x_n) and (y_n) satisfy all of the conditions of the NIP, so we can conclude that there must exist a real number c such that x_n ≤ c ≤ y_n for all n. At this point, you should be able to use property 5 to show that c² = a as desired.

 Exercise 7.1.4

Turn the above outline into a formal proof of Theorem 7.1.2.

The bisection method we employed in the proof of Theorem 7.1.2 is pretty typical of how we will use the NIP, as taking midpoints ensures that we will create a sequence of “nested intervals.” We will employ this strategy in the proofs of the IVT and EVT. Deciding how to relabel the endpoints of our intervals will be determined by what we want to do with these two sequences of real numbers. This will typically lead to a fifth property, which will be crucial in proving that the c guaranteed by the NIP does what we want it to do. Specifically, in the above example, we always wanted our candidate for √a to be in the interval [x_n, y_n]. This judicious choice led to the extra Property 5: ∀n, x_n² ≤ a ≤ y_n². In applying the NIP to prove the IVT and EVT, we will find that properties 1-4 will stay the same. Property 5 is what will change based on the property we want c to have.
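The bisection scheme is easy to run numerically. The sketch below is our own illustration (the choice a = 2 is arbitrary); the relabeling mirrors the proof of Theorem 7.1.2:

```python
a = 2.0                  # we seek c >= 0 with c**2 = a
x, y = 0.0, a + 1.0      # x_1 = 0 and y_1 = a + 1, so x**2 <= a <= y**2

for _ in range(60):      # each pass halves the interval, as in Equation 7.1.2
    m = (x + y) / 2      # midpoint m_n of [x_n, y_n]
    if m * m <= a:
        x = m            # relabel x_{n+1} = m_n, y_{n+1} = y_n
    else:
        y = m            # relabel x_{n+1} = x_n, y_{n+1} = m_n

print(x)                 # both endpoints close in on the square root of 2
```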
Before we tackle the IVT and EVT, let’s use the NIP to address an interesting question about the Harmonic Series. Recall that the Harmonic Series, 1 + 1/2 + 1/3 + 1/4 + ⋯, grows without bound; that is, ∑_{n=1}^∞ 1/n = ∞. The question is how slowly does this series grow? For example, how many terms would it take before the series surpasses 100? 1000? 10000? Leonhard Euler decided to tackle this problem in the following way. Euler decided to consider lim_{n→∞} ((1 + 1/2 + 1/3 + ⋯ + 1/n) − ln(n + 1)). This limit is called Euler’s constant and is denoted by γ. This says that for n large, we have 1 + 1/2 + 1/3 + ⋯ + 1/n ≈ ln(n + 1) + γ. If we could approximate γ, then we could replace the inequality 1 + 1/2 + 1/3 + ⋯ + 1/n ≥ 100 with the more tractable inequality ln(n + 1) + γ ≥ 100 and solve for n in this. This should tell us roughly how many terms would need to be added in the Harmonic Series to surpass 100. Approximating γ with a computer is not too bad. We could make n as large as we wish in (1 + 1/2 + 1/3 + ⋯ + 1/n) − ln(1 + n) to make closer approximations for γ. The real issue is, HOW DO WE KNOW THAT

lim_{n→∞} ((1 + 1/2 + 1/3 + ⋯ + 1/n) − ln(n + 1))   (7.1.6)

ACTUALLY EXISTS?
You might want to say that obviously it should, but let us point out that as of the printing of this book (2013), it is not even known
if γ is rational or irrational. So, in our opinion the existence of this limit is not so obvious. This is where the NIP will come into
play; we will use it to show that this limit, in fact, exists. The details are in the following problem.
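Granting for the moment that the limit exists (establishing that is the point of the next problem), a brute-force computation (our own illustration) shows the expression in (7.1.6) settling down near the known value γ ≈ 0.5772:

```python
import math

n = 10 ** 6
harmonic = sum(1.0 / k for k in range(1, n + 1))   # 1 + 1/2 + ... + 1/n
approx = harmonic - math.log(n + 1)                # the expression in (7.1.6)
print(approx)   # about 0.5772, close to Euler's constant
```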

 Exercise 7.1.5

The purpose of this problem is to show that lim_{n→∞} ((1 + 1/2 + 1/3 + ⋯ + 1/n) − ln(n + 1)) exists.
a. Let x_n = (1 + 1/2 + 1/3 + ⋯ + 1/n) − ln(n + 1). Use the following diagram to show x_1 ≤ x_2 ≤ x_3 ≤ ⋯

Figure 7.1.7: Diagram for exercise 7.1.5.

b. Let z_n = ln(n + 1) − (1/2 + 1/3 + ⋯ + 1/(n + 1)). Use a similar diagram to show that z_1 ≤ z_2 ≤ z_3 ≤ ⋯.
c. Let y_n = 1 − z_n. Show that (x_n) and (y_n) satisfy the hypotheses of the nested interval property and use the NIP to conclude that there is a real number γ such that x_n ≤ γ ≤ y_n for all n.
d. Conclude that lim_{n→∞} ((1 + 1/2 + 1/3 + ⋯ + 1/n) − ln(n + 1)) = γ.

 Exercise 7.1.6

Use the fact that x_n ≤ γ ≤ y_n for all n to approximate γ to three decimal places.

 Exercise 7.1.7
a. Use the fact that for large n, 1 + 1/2 + 1/3 + ⋯ + 1/n ≈ ln(n + 1) + γ to determine approximately how large n must be to make

1 + 1/2 + 1/3 + ⋯ + 1/n ≥ 100   (7.1.7)

b. Suppose we have a supercomputer which can add 10 trillion terms of the Harmonic Series per second. Approximately how
many earth lifetimes would it take for this computer to sum the Harmonic Series until it surpasses 100?
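As a sanity check on part (a) (our own computation, using the known approximation γ ≈ 0.5772): solving ln(n + 1) + γ = 100 for n gives an astronomically large answer:

```python
import math

gamma = 0.5772156649             # known approximate value of Euler's constant
n = math.exp(100 - gamma) - 1    # solves ln(n + 1) + gamma = 100
print(n)                         # roughly 1.5 * 10**43 terms
```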


7.2: Proof of the Intermediate Value Theorem
 Learning Objectives
Explain the proof of the Intermediate Value Theorem

We now have all of the tools to prove the Intermediate Value Theorem (IVT).

 Theorem 7.2.1: Intermediate Value Theorem

Suppose f (x) is continuous on [a, b] and v is any real number between f (a) and f (b) . Then there exists a real number
c ∈ [a, b] such that f (c) = v .

Sketch of Proof
We have two cases to consider: f(a) ≤ v ≤ f(b) and f(a) ≥ v ≥ f(b).

We will look at the case f(a) ≤ v ≤ f(b). Let x_1 = a and y_1 = b, so we have x_1 ≤ y_1 and f(x_1) ≤ v ≤ f(y_1). Let m_1 be the midpoint of [x_1, y_1] and notice that we have either f(m_1) ≤ v or f(m_1) ≥ v. If f(m_1) ≤ v, then we relabel x_2 = m_1 and y_2 = y_1. If f(m_1) ≥ v, then we relabel x_2 = x_1 and y_2 = m_1. In either case, we end up with x_1 ≤ x_2 ≤ y_2 ≤ y_1, y_2 − x_2 = (1/2)(y_1 − x_1), f(x_1) ≤ v ≤ f(y_1), and f(x_2) ≤ v ≤ f(y_2).

Now play the same game with the interval [x_2, y_2]. If we keep playing this game, we will generate two sequences (x_n) and (y_n), satisfying all of the conditions of the nested interval property. These sequences will also satisfy the following extra property: ∀n, f(x_n) ≤ v ≤ f(y_n). By the NIP, there exists a c such that x_n ≤ c ≤ y_n, ∀n. This should be the c that we seek, though this is not obvious. Specifically, we need to show that f(c) = v. This should be where the continuity of f at c and the extra property on (x_n) and (y_n) come into play.

 Exercise 7.2.1
Turn the ideas of the previous paragraphs into a formal proof of the IVT for the case f (a) ≤ v ≤ f (b) .
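Before writing the formal proof, it may help to watch the strategy run numerically. The sketch below is our own illustration (the function f(x) = x − cos x and the target value v = 0 are arbitrary choices); it locates the “mysterious” c from Section 7.1:

```python
import math

def f(x):
    return x - math.cos(x)

v = 0.0
x, y = 0.0, math.pi / 2    # f(x_1) <= v <= f(y_1)
for _ in range(60):        # the interval halves each time, as in the NIP
    m = (x + y) / 2
    if f(m) <= v:
        x = m              # relabel x_{n+1} = m_n
    else:
        y = m              # relabel y_{n+1} = m_n
print(x)                   # about 0.739085, where x = cos(x)
```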

 Exercise 7.2.2
We can modify the proof of the case f (a) ≤ v ≤ f (b) into a proof of the IVT for the case f (a) ≥ v ≥ f (b) . However, there is
a sneakier way to prove this case by applying the IVT to the function −f . Do this to prove the IVT for the case
f (a) ≥ v ≥ f (b) .

 Exercise 7.2.3
Use the IVT to prove that any polynomial of odd degree must have a real root.


7.3: The Bolzano-Weierstrass Theorem
 Learning Objectives
Explain the Bolzano-Weierstrass Theorem

Once we introduced the Nested Interval Property, the Intermediate Value Theorem followed pretty readily. The proof of the Extreme Value Theorem (which says that any continuous function f defined on a closed interval [a, b] must have a maximum and a minimum) takes a bit more work. First we need to show that such a function is bounded.

 Theorem 7.3.1

A continuous function defined on a closed, bounded interval must be bounded. That is, let f be a continuous function defined
on [a, b]. Then there exists a positive real number B such that |f (x)| ≤ B for all x ∈ [a, b].

Sketch of Alleged Proof


Let’s assume, for contradiction, that there is no such bound B. This says that for any positive integer n, there must exist x_n ∈ [a, b] such that |f(x_n)| > n. (Otherwise n would be a bound for f.) IF the sequence (x_n) converged to something in [a, b], say c, then we would have our contradiction. Indeed, we would have lim_{n→∞} x_n = c. By the continuity of f at c and Theorem 6.2.1 of Chapter 6, we would have lim_{n→∞} f(x_n) = f(c). This would say that the sequence (f(x_n)) converges, so by Lemma 4.2.2 of Chapter 4, it must be bounded. This would provide our contradiction, as we had |f(x_n)| > n for all positive integers n.

This would all work well except for one little problem. The way it was constructed, there is no reason to expect the sequence (x_n) to converge to anything and we can’t make such an assumption. That is why we emphasized the IF above. Fortunately, this idea can be salvaged. While it is true that the sequence (x_n) may not converge, part of it will. We will need the following definition.

 Definition
Let (n_k)_{k=1}^∞ be a strictly increasing sequence of positive integers; that is, n_1 < n_2 < n_3 < ⋯. If (x_n)_{n=1}^∞ is a sequence, then (x_{n_k})_{k=1}^∞ = (x_{n_1}, x_{n_2}, x_{n_3}, ⋯) is called a subsequence of (x_n).

The idea is that a subsequence of a sequence is a part of the sequence, (x_n), which is itself a sequence. However, it is a little more restrictive. We can choose any term in our sequence to be part of the subsequence, but once we choose that term, we can’t go backwards. This is where the condition n_1 < n_2 < n_3 < ⋯ comes in. For example, suppose we started our subsequence with the term x_100. We could not choose our next term to be x_99. The subscript of the next term would have to be greater than 100. In fact, the thing about a subsequence is that it is all in the subscripts; we are really choosing a subsequence (n_k) of the sequence of subscripts (n) in (x_n).

 Example 7.3.1:

Given the sequence (x_n), the following are subsequences.
1. (x_2, x_4, x_6, ...) = (x_{2k})_{k=1}^∞
2. (x_1, x_4, x_9, ...) = (x_{k²})_{k=1}^∞
3. (x_n) itself.

 Example 7.3.2:
1. (x 1, x1 , x1 , . . . )

2. (x 99 , x100 , x99 , . . . )

3. (x 1, x2 , x3 , . . . )

The subscripts in the examples we have seen so far have a discernible pattern, but this need not be the case. For example,

(x_2, x_5, x_{12}, x_{14}, x_{23}, ...)   (7.3.1)

would be a subsequence as long as the subscripts form an increasing sequence themselves.

 Exercise 7.3.1

Suppose lim_{n→∞} x_n = c. Prove that lim_{k→∞} x_{n_k} = c for any subsequence (x_{n_k}) of (x_n).

Hint
n_k ≥ k

A very important theorem about subsequences was introduced by Bernhard Bolzano and, later, independently proven by Karl
Weierstrass. Basically, this theorem says that any bounded sequence of real numbers has a convergent subsequence.

 Theorem 7.3.2: THE Bolzano-Weierstrass THEOREM


Let (x_n) be a sequence of real numbers such that x_n ∈ [a, b], ∀n. Then there exists c ∈ [a, b] and a subsequence (x_{n_k}) such that lim_{k→∞} x_{n_k} = c.

Sketch of Proof

Suppose we have our sequence (x_n) such that x_n ∈ [a, b], ∀n. To find our c for the subsequence to converge to, we will use the NIP. Since we are already using (x_n) as our original sequence, we will need to use different letters in setting ourselves up for the NIP. With this in mind, let a_1 = a and b_1 = b, and notice that x_n ∈ [a_1, b_1] for infinitely many n. (This is, in fact, true for all n, but you’ll see why we said it the way we did.) Let m_1 be the midpoint of [a_1, b_1] and notice that either x_n ∈ [a_1, m_1] for infinitely many n or x_n ∈ [m_1, b_1] for infinitely many n. If x_n ∈ [a_1, m_1] for infinitely many n, then we relabel a_2 = a_1 and b_2 = m_1. If x_n ∈ [m_1, b_1] for infinitely many n, then relabel a_2 = m_1 and b_2 = b_1. In either case, we get a_1 ≤ a_2 ≤ b_2 ≤ b_1, b_2 − a_2 = (1/2)(b_1 − a_1), and x_n ∈ [a_2, b_2] for infinitely many n.

Now we consider the interval [a_2, b_2] and let m_2 be the midpoint of [a_2, b_2]. Since x_n ∈ [a_2, b_2] for infinitely many n, then either x_n ∈ [a_2, m_2] for infinitely many n or x_n ∈ [m_2, b_2] for infinitely many n. If x_n ∈ [a_2, m_2] for infinitely many n, then we relabel a_3 = a_2 and b_3 = m_2. If x_n ∈ [m_2, b_2] for infinitely many n, then we relabel a_3 = m_2 and b_3 = b_2. In either case, we get a_1 ≤ a_2 ≤ a_3 ≤ b_3 ≤ b_2 ≤ b_1, b_3 − a_3 = (1/2)(b_2 − a_2) = (1/2²)(b_1 − a_1), and x_n ∈ [a_3, b_3] for infinitely many n.

If we continue in this manner, we will produce two sequences (a_k) and (b_k) with the following properties:
1. a_1 ≤ a_2 ≤ a_3 ≤ ⋯
2. b_1 ≥ b_2 ≥ b_3 ≥ ⋯
3. ∀k, a_k ≤ b_k
4. lim_{k→∞}(b_k − a_k) = lim_{k→∞} (1/2^{k−1})(b_1 − a_1) = 0
5. For each k, x_n ∈ [a_k, b_k] for infinitely many n

By properties 1-5 and the NIP, there exists a unique c such that c ∈ [a_k, b_k] for all k. In particular, c ∈ [a_1, b_1] = [a, b].

We have our c. Now we need to construct a subsequence converging to it. Since x_n ∈ [a_1, b_1] for infinitely many n, choose an integer n_1 such that x_{n_1} ∈ [a_1, b_1]. Since x_n ∈ [a_2, b_2] for infinitely many n, choose an integer n_2 > n_1 such that x_{n_2} ∈ [a_2, b_2]. (Notice that to make a subsequence, it is crucial that n_2 > n_1, and this is why we needed to insist that x_n ∈ [a_2, b_2] for infinitely many n.) Continuing in this manner, we should be able to build a subsequence (x_{n_k}) that will converge to c.

As an example of this theorem, consider the sequence

((−1)^n) = (−1, 1, −1, 1, ...)   (7.3.2)

This sequence does not converge, but the subsequence

((−1)^{2k}) = (1, 1, 1, ...)   (7.3.3)

converges to 1, while the subsequence ((−1)^{2k+1}) = (−1, −1, −1, ...) converges to −1. Notice that if the sequence is unbounded, then all bets are off; the sequence may have a convergent subsequence or it may not. The sequences (((−1)^n + 1)n) and (n) represent these possibilities, as the first has, for example, (((−1)^{2k+1} + 1)(2k + 1)) = (0, 0, 0, ⋯) and the second one has none.

The Bolzano-Weierstrass Theorem says that no matter how “random” the sequence (x_n) may be, as long as it is bounded then some part of it must converge. This is very useful when one has some process which produces a “random” sequence such as what we had in the idea of the alleged proof in Theorem 7.3.1.

 Exercise 7.3.2

Turn the ideas of the above outline into a formal proof of the Bolzano-Weierstrass Theorem.
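The bisection idea from the outline can also be imitated numerically. In the sketch below (our own illustration), “infinitely many terms” is approximated by counting terms of a long finite sample of the bounded sequence x_n = sin(n); at each step we keep the half-interval holding the most sample terms, and the nested intervals close in on a cluster point:

```python
import math

xs = [math.sin(n) for n in range(1, 10001)]   # a bounded sequence in [-1, 1]
a, b = -1.0, 1.0
for _ in range(20):
    m = (a + b) / 2
    # Keep whichever half contains the most sample terms (a finite stand-in
    # for "infinitely many" in the proof sketch).
    left = sum(1 for x in xs if a <= x <= m)
    right = sum(1 for x in xs if m <= x <= b)
    if left >= right:
        b = m
    else:
        a = m
c = (a + b) / 2
print(c)   # a cluster point: some terms of the sequence lie very close to c
```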

 Exercise 7.3.3

Use the Bolzano-Weierstrass Theorem to complete the proof of Theorem 7.3.1.


7.4: The Supremum and the Extreme Value Theorem
 Learning Objectives
Explain the supremum and the Extreme Value Theorem

Theorem 7.3.1 says that a continuous function on a closed, bounded interval must be bounded. Boundedness, in and of itself, does not ensure the existence of a maximum or minimum. We must also have a closed, bounded interval. To illustrate this, consider the continuous function f(x) = tan⁻¹ x defined on the (unbounded) interval (−∞, ∞).

Figure 7.4.1: Graph of f(x) = tan⁻¹ x.

This function is bounded between −π/2 and π/2, but it does not attain a maximum or minimum, as the lines y = ±π/2 are horizontal asymptotes. Notice that if we restricted the domain to a closed, bounded interval then it would attain its extreme values on that interval (as guaranteed by the EVT).
To find a maximum we need to find the smallest possible upper bound for the range of the function. This prompts the following
definitions.

 Definition: 7.4.1
Let S ⊆ R and let b be a real number. We say that b is an upper bound of S provided b ≥ x for all x ∈ S .

For example, if S = (0, 1) , then any b with b ≥ 1 would be an upper bound of S . Furthermore, the fact that b is not an element of
the set S is immaterial. Indeed, if T = [0, 1], then any b with b ≥ 1 would still be an upper bound of T . Notice that, in general, if a
set has an upper bound, then it has infinitely many since any number larger than that upper bound would also be an upper bound.
However, there is something special about the smallest upper bound.

 Definition: 7.4.2

Let S ⊆ R and let b be a real number. We say that b is the least upper bound of S provided
i. b ≥ x for all x ∈ S . (b is an upper bound of S )
ii. If c ≥ x for all x ∈ S , then c ≥ b . (Any upper bound of S is at least as big as b )
In this case, we also say that b is the supremum of S and we write

b = sup(S) (7.4.1)

Notice that the definition really says that b is the smallest upper bound of S . Also notice that the second condition can be replaced
by its contrapositive so we can say that b = sup S if and only if
i. b ≥ x for all x ∈ S
ii. If c < b then there exists x ∈ S such that c < x
The second condition says that if a number c is less than b , then it can’t be an upper bound, so that b really is the smallest upper
bound.
Also notice that the supremum of the set may or may not be in the set itself. This is illustrated by the examples above as in both cases, 1 = sup(0, 1) and 1 = sup[0, 1]. Obviously, a set which is not bounded above, such as N = {1, 2, 3, . . . }, cannot have a supremum. However, for non-empty sets which are bounded above, we have the following.

 Theorem 7.4.1: The Least Upper Bound Property (LUBP)

Let S be a non-empty subset of R which is bounded above. Then S has a supremum.

Sketch of Proof
Since S ≠ ∅, there exists s ∈ S. Since S is bounded above, it has an upper bound, say b. We will set ourselves up to use the Nested Interval Property. With this in mind, let x_1 = s and y_1 = b and notice that ∃x ∈ S such that x ≥ x_1 (namely, x_1 itself) and ∀x ∈ S, y_1 ≥ x. You probably guessed what’s coming next: let m_1 be the midpoint of [x_1, y_1]. Notice that either m_1 ≥ x, ∀x ∈ S or ∃x ∈ S such that x ≥ m_1. In the former case, we relabel, letting x_2 = x_1 and y_2 = m_1. In the latter case, we let x_2 = m_1 and y_2 = y_1. In either case, we end up with x_1 ≤ x_2 ≤ y_2 ≤ y_1, y_2 − x_2 = (1/2)(y_1 − x_1), and ∃x ∈ S such that x ≥ x_2 and ∀x ∈ S, y_2 ≥ x. If we continue this process, we end up with two sequences, (x_n) and (y_n), satisfying the following conditions:

1. x_1 ≤ x_2 ≤ x_3 ≤ . . .
2. y_1 ≥ y_2 ≥ y_3 ≥ . . .
3. ∀n, x_n ≤ y_n
4. lim_{n→∞} (y_n − x_n) = lim_{n→∞} (1/2)^{n−1} (y_1 − x_1) = 0
5. ∀n, ∃x ∈ S such that x ≥ x_n and ∀x ∈ S, y_n ≥ x

By properties 1-5 and the NIP there exists c such that x_n ≤ c ≤ y_n, ∀n. We will leave it to you to use property 5 to show that c = sup S.
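The bisection in the sketch is easy to simulate numerically. The following Python sketch is an illustration only, not part of the formal proof; the function name and the choice of example set S = {x : x² < 2} are ours. The key step is a test deciding whether some element of S still lies at or above the midpoint.

```python
# Illustration of the bisection in the sketch (not part of the formal proof).
# 'exceeded_by_S(m)' decides whether some element of S is >= m.
def sup_by_bisection(exceeded_by_S, lo, hi, steps=60):
    x, y = lo, hi                    # x_1 = s (a point of S), y_1 = b (an upper bound)
    for _ in range(steps):
        m = (x + y) / 2
        if exceeded_by_S(m):
            x = m                    # latter case: x_{n+1} = m, y_{n+1} = y_n
        else:
            y = m                    # former case: m is an upper bound
    return (x + y) / 2               # the c trapped by the shrinking intervals

# Example: S = {x : x**2 < 2}. Some element of S is >= m exactly when m**2 < 2,
# so the bisection should home in on the square root of 2.
approx = sup_by_bisection(lambda m: m * m < 2, 1.0, 2.0)
```

After 60 halvings the interval has length (1/2)^60, so `approx` agrees with √2 to machine precision. The design mirrors the proof exactly: the left endpoints play the role of (x_n) and the right endpoints the role of (y_n).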

 Exercise 7.4.1

Complete the above ideas to provide a formal proof of Theorem 7.4.1.

Notice that we really used the fact that S was non-empty and bounded above in the proof of Theorem 7.4.1. This makes sense,
since a set which is not bounded above cannot possibly have a least upper bound. In fact, any real number is an upper bound of the
empty set so that the empty set would not have a least upper bound.
The following corollary to Theorem 7.4.1 can be very useful.

 Corollary 7.4.1

Let (x_n) be a bounded, increasing sequence of real numbers. That is, x_1 ≤ x_2 ≤ x_3 ≤ ⋯. Then (x_n) converges to some real number c.

 Exercise 7.4.2
Prove Corollary 7.4.1.

Hint
Let c = sup{x_n | n = 1, 2, 3, . . . }. To show that lim_{n→∞} x_n = c, let ε > 0. Note that c − ε is not an upper bound. You take it from here!

 Exercise 7.4.3

Consider the following curious expression √(2 + √(2 + √(2 + √⋯))). We will use Corollary 7.4.1 to show that this actually converges to some real number. After we know it converges we can actually compute what it is. Of course, to do so, we need to define things a bit more precisely. With this in mind, consider the sequence (x_n) defined as follows:

x_1 = √2 (7.4.2)

x_{n+1} = √(2 + x_n) (7.4.3)

a. Use induction to show that x_n < 2 for n = 1, 2, 3, . . ..
b. Use the result from part (a) to show that x_n < x_{n+1} for n = 1, 2, 3, . . ..
c. From Corollary 7.4.1, we have that (x_n) must converge to some number c. Use the fact that (x_{n+1}) must converge to c as well to compute what c must be.

We now have all the tools we need to tackle the Extreme Value Theorem.

 Theorem 7.4.2: Extreme Value Theorem (EVT)

Suppose f is continuous on [a, b]. Then there exists c , d ∈ [a, b] such that f (d) ≤ f (x) ≤ f (c) , for all x ∈ [a, b].

Sketch of Proof

We will first show that f attains its maximum. To this end, recall that Theorem 7.3.1 tells us that f[a, b] = {f(x) | x ∈ [a, b]} is a bounded set. By the LUBP, f[a, b] must have a least upper bound which we will label s, so that s = sup f[a, b]. This says that s ≥ f(x), for all x ∈ [a, b]. All we need to do now is find a c ∈ [a, b] with f(c) = s. With this in mind, notice that since s = sup f[a, b], then for any positive integer n, s − 1/n is not an upper bound of f[a, b]. Thus there exists x_n ∈ [a, b] with s − 1/n < f(x_n) ≤ s. Now, by the Bolzano-Weierstrass Theorem, (x_n) has a convergent subsequence (x_{n_k}) converging to some c ∈ [a, b]. Using the continuity of f at c, you should be able to show that f(c) = s.

To find the minimum of f, find the maximum of −f.

 Exercise 7.4.4

Formalize the above ideas into a proof of Theorem 7.4.2.

Notice that we used the NIP to prove both the Bolzano-Weierstrass Theorem and the LUBP. This is really unavoidable, as it turns
out that all of those statements are equivalent in the sense that any one of them can be taken as the completeness axiom for the real
number system and the others proved as theorems. This is not uncommon in mathematics, as people tend to gravitate toward ideas
that suit the particular problem they are working on. In this case, people realized at some point that they needed some sort of
completeness property for the real number system to prove various theorems. Each individual’s formulation of completeness fit in
with his understanding of the problem at hand. Only in hindsight do we see that they were really talking about the same concept:
the completeness of the real number system. In point of fact, most modern textbooks use the LUBP as the axiom of completeness
and prove all other formulations as theorems. We will finish this section by showing that either the Bolzano-Weierstrass Theorem
or the LUBP can be used to prove the NIP. This says that they are all equivalent and that any one of them could be taken as the
completeness axiom.

 Exercise 7.4.5

Use the Bolzano-Weierstrass Theorem to prove the NIP. That is, assume that the Bolzano-Weierstrass Theorem holds and suppose we have two sequences of real numbers, (x_n) and (y_n), satisfying:

a. x_1 ≤ x_2 ≤ x_3 ≤ . . .
b. y_1 ≥ y_2 ≥ y_3 ≥ . . .
c. ∀n, x_n ≤ y_n
d. lim_{n→∞} (y_n − x_n) = 0

Prove that there is a real number c such that x_n ≤ c ≤ y_n, for all n.

Since the Bolzano-Weierstrass Theorem and the Nested Interval Property are equivalent, it follows that the Bolzano-Weierstrass
Theorem will not work for the rational number system.

 Exercise 7.4.6
Find a bounded sequence of rational numbers such that no subsequence of it converges to a rational number.

 Exercise 7.4.7

Use the Least Upper Bound Property to prove the Nested Interval Property. That is, assume that every non-empty subset of the real numbers which is bounded above has a least upper bound; and suppose that we have two sequences of real numbers (x_n) and (y_n), satisfying:

a. x_1 ≤ x_2 ≤ x_3 ≤ . . .
b. y_1 ≥ y_2 ≥ y_3 ≥ . . .
c. ∀n, x_n ≤ y_n
d. lim_{n→∞} (y_n − x_n) = 0

Prove that there exists a real number c such that x_n ≤ c ≤ y_n, for all n. (Again, the c will, of necessity, be unique, but don’t worry about that.)

Hint
Corollary 7.4.1 might work well here.

 Exercise 7.4.8

Since the LUBP is equivalent to the NIP it does not hold for the rational number system. Demonstrate this by finding a non-empty set of rational numbers which is bounded above, but whose supremum is an irrational number.

We have the machinery in place to clean up a matter that was introduced in Chapter 1. If you recall (or look back) we introduced
the Archimedean Property of the real number system. This property says that given any two positive real numbers a, b, there exists
a positive integer n with na > b . As we mentioned in Chapter 1, this was taken to be intuitively obvious. The analogy we used
there was to emptying an ocean b with a teaspoon a provided we are willing to use it enough times n . The completeness of the real
number system allows us to prove it as a formal theorem.

 Theorem 7.4.3: Archimedean Property of R

Given any positive real numbers a and b , there exists a positive integer n , such that na > b .

 Exercise 7.4.9

Prove Theorem 7.4.3.

Hint
Assume that there are positive real numbers a and b, such that na ≤ b, ∀n ∈ N . Then N would be bounded above by b/a.
Let s = sup(N ) and consider s − 1 .
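The proof via the supremum is indirect, but the property itself is concrete: a witness n can be computed directly from a and b. The sketch below is our own illustration (the helper name is made up), matching the teaspoon-and-ocean analogy:

```python
import math

# Concrete illustration of the Archimedean Property: given positive reals a and b,
# n = floor(b/a) + 1 scoops with a teaspoon of size a more than empty an ocean of size b.
def archimedean_witness(a, b):
    n = math.floor(b / a) + 1
    assert n * a > b        # the defining inequality na > b
    return n
```

For example, archimedean_witness(0.25, 3.0) returns 13, and 13 · 0.25 = 3.25 > 3. Note that this computation presumes the Archimedean Property rather than proving it; that is exactly why the formal proof above must lean on completeness instead.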

Given what we’ve been doing, one might ask if the Archimedean Property is equivalent to the LUBP (and thus could be taken as an
axiom). The answer lies in the following problem.

 Exercise 7.4.10

Does Q satisfy the Archimedean Property and what does this have to do with the question of taking the Archimedean Property
as an axiom of completeness?

This page titled 7.4: The Supremum and the Extreme Value Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.E: Intermediate and Extreme Values (Exercises)
Q1
Mimic the definitions of an upper bound of a set and the least upper bound (supremum) of a set to give definitions for a lower
bound of a set and the greatest lower bound (infimum) of a set.
Note: The infimum of a set S is denoted by inf(S) .

Q2
Find the least upper bound (supremum) and greatest lower bound (infimum) of the following sets of real numbers, if they exist. (If
one does not exist then say so.)
a. S = {1/n | n = 1, 2, 3, . . . }
b. T = {r | r is rational and r^2 < 2}
c. (−∞, 0) ∪ (1, ∞)
d. R = {(−1)^n/n | n = 1, 2, 3, . . . }
e. (2, 3π] ∩ Q
f. The empty set ∅

Q3
Let S ⊆ R and let T = {−x|x ∈ S} .
a. Prove that b is an upper bound of S if and only if −b is a lower bound of T .
b. Prove that b = sup S if and only if −b = inf T .

This page titled 7.E: Intermediate and Extreme Values (Exercises) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

CHAPTER OVERVIEW

8: Back to Power Series


8.1: Uniform Convergence
8.2: Uniform Convergence- Integrals and Derivatives
8.3: Radius of Convergence of a Power Series
8.4: Boundary Issues and Abel’s Theorem

Thumbnail: Niels Henrik Abel. (public domain).

This page titled 8: Back to Power Series is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene
Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

8.1: Uniform Convergence
 Learning Objectives
Explain uniform convergence

We have developed precise analytic definitions of the convergence of a sequence and continuity of a function and we have used
these to prove the EVT and IVT for a continuous function. We will now draw our attention back to the question that originally
motivated these definitions, “Why are Taylor series well behaved, but Fourier series are not necessarily?” More precisely, we
mentioned that whenever a power series converges then whatever it converged to was continuous. Moreover, if we differentiate or
integrate these series term by term then the resulting series will converge to the derivative or integral of the original series. This
was not always the case for Fourier series. For example consider the function
f(x) = (4/π) ∑_{k=0}^∞ ((−1)^k/(2k + 1)) cos((2k + 1)πx/2)
     = (4/π) (cos(πx/2) − (1/3) cos(3πx/2) + (1/5) cos(5πx/2) − ⋯)
We have seen that the graph of f is given by

Figure 8.1.1 : Graph of f .


If we consider the following sequence of functions

f_1(x) = (4/π) cos(πx/2)
f_2(x) = (4/π) (cos(πx/2) − (1/3) cos(3πx/2))
f_3(x) = (4/π) (cos(πx/2) − (1/3) cos(3πx/2) + (1/5) cos(5πx/2))

we see that the sequence of continuous functions (f_n) converges to the non-continuous function f for each real number x. This didn’t happen with Taylor series. The partial sums for a Taylor series were polynomials and hence continuous, but what they converged to was continuous as well.
The difficulty is quite delicate and it took mathematicians a while to determine the problem. There are two very subtly different
ways that a sequence of functions can converge: pointwise or uniformly. This distinction was touched upon by Niels Henrik Abel
(1802-1829) in 1826 while studying the domain of convergence of a power series. However, the necessary formal definitions were
not made explicit until Weierstrass did it in his 1841 paper Zur Theorie der Potenzreihen (On the Theory of Power Series). This
was published in his collected works in 1894.
It will be instructive to take a look at an argument that doesn’t quite work before looking at the formal definitions we will need. In 1821 Augustin Cauchy “proved” that the infinite sum of continuous functions is continuous. Of course, it is obvious (to us) that this is not true because we’ve seen several counterexamples. But Cauchy, who was a first-rate mathematician, was so sure of the correctness of his argument that he included it in his textbook on analysis, Cours d’analyse (1821).

 Exercise 8.1.1

Find the flaw in the following “proof” that f is also continuous at a.

Suppose f_1, f_2, f_3, f_4, . . . are all continuous at a and that ∑_{n=1}^∞ f_n = f. Let ε > 0. Since f_n is continuous at a, we can choose δ_n > 0 such that if |x − a| < δ_n, then |f_n(x) − f_n(a)| < ε/2^n. Let δ = inf(δ_1, δ_2, δ_3, . . . ). If |x − a| < δ then

|f(x) − f(a)| = |∑_{n=1}^∞ f_n(x) − ∑_{n=1}^∞ f_n(a)|
             = |∑_{n=1}^∞ (f_n(x) − f_n(a))|
             ≤ ∑_{n=1}^∞ |f_n(x) − f_n(a)|
             ≤ ∑_{n=1}^∞ ε/2^n
             = ε ∑_{n=1}^∞ 1/2^n = ε

Thus f is continuous at a.

 Definition

Let S be a subset of the real number system and let (f_n) = (f_1, f_2, f_3, . . . ) be a sequence of functions defined on S. Let f be a function defined on S as well. We say that (f_n) converges to f pointwise on S provided that for all x ∈ S, the sequence of real numbers (f_n(x)) converges to the number f(x). In this case we write f_n → f pointwise on S.

Symbolically, we have: f_n → f pointwise on S ⇔ ∀x ∈ S, ∀ε > 0, ∃N such that (n > N ⇒ |f_n(x) − f(x)| < ε).
This is the type of convergence we have been observing to this point. By contrast we have the following new definition.

 Definition

Let S be a subset of the real number system and let (f_n) = (f_1, f_2, f_3, . . . ) be a sequence of functions defined on S. Let f be a function defined on S as well. We say that (f_n) converges to f uniformly on S provided ∀ε > 0, ∃N such that n > N ⇒ |f_n(x) − f(x)| < ε, ∀x ∈ S.

In this case we write f_n → f uniformly on S.

The difference between these two definitions is subtle. In pointwise convergence, we are given a fixed x ∈ S and an ε > 0 . Then
the task is to find an N that works for that particular x and ε . In uniform convergence, one is given ε > 0 and must find a single N
that works for that particular ε but also simultaneously (uniformly) for all x ∈ S . Clearly uniform convergence implies pointwise
convergence as an N which works uniformly for all x, works for each individual x also. However the reverse is not true. This will
become evident, but first consider the following example.

 Exercise 8.1.2

Let 0 < b < 1 and consider the sequence of functions (f_n) defined on [0, b] by f_n(x) = x^n. Use the definition to show that f_n → 0 uniformly on [0, b].

Hint
|x^n − 0| = x^n ≤ b^n

Uniform convergence is not only dependent on the sequence of functions but also on the set S. For example, the sequence (f_n(x)) = (x^n) of Exercise 8.1.2 does not converge uniformly on [0, 1]. We could use the negation of the definition to prove this, but instead, it will be a consequence of the following theorem.

 Theorem 8.1.1

Consider a sequence of functions (f_n) which are all continuous on an interval I. Suppose f_n → f uniformly on I. Then f must be continuous on I.

Sketch of Proof
Let a ∈ I and let ε > 0. The idea is to use uniform convergence to replace f with one of the known continuous functions f_n. Specifically, by uncancelling, we can write

|f(x) − f(a)| = |f(x) − f_n(x) + f_n(x) − f_n(a) + f_n(a) − f(a)|
             ≤ |f(x) − f_n(x)| + |f_n(x) − f_n(a)| + |f_n(a) − f(a)|

If we choose n large enough, then we can make the first and last terms as small as we wish, noting that the uniform convergence makes the first term uniformly small for all x. Once we have a specific n, then we can use the continuity of f_n to find a δ > 0 such that the middle term is small whenever x is within δ of a.

 Exercise 8.1.3

Provide a formal proof of Theorem 8.1.1 based on the above ideas.

 Exercise 8.1.4

Consider the sequence of functions (f_n) defined on [0, 1] by f_n(x) = x^n. Show that the sequence converges to the function

f(x) = 0 if x ∈ [0, 1), and f(x) = 1 if x = 1,

pointwise on [0, 1], but not uniformly on [0, 1].
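The contrast between the two settings can be seen numerically. In this Python sketch (an informal illustration, not a proof; the variable names are ours), the worst-case gap sup |f_n(x) − 0| on [0, b] is b^n, which dies out, while on [0, 1) every n admits a point where the gap is still 1/2:

```python
# Informal illustration: on [0, b] with b < 1 the worst-case gap b**n -> 0,
# so one N serves every x at once (uniform convergence). On [0, 1) no single
# N works: for each n, the point x = (1/2)**(1/n) lies in [0, 1) and x**n = 1/2.
b = 0.9
ns = (5, 20, 80)
worst_gap_on_0b = [b ** n for n in ns]           # 0.9**n shrinks toward 0

bad_points = [0.5 ** (1 / n) for n in ns]        # each lies strictly inside [0, 1)
gaps = [x ** n for x, n in zip(bad_points, ns)]  # each equals 1/2 (up to rounding)
```

So on [0, 0.9] one N eventually beats any ε for all x simultaneously, while on [0, 1) the gap 1/2 can never be beaten uniformly, exactly the failure the exercise asks you to prove.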

Notice that for the Fourier series at the beginning of this chapter,

f(x) = (4/π) (cos(πx/2) − (1/3) cos(3πx/2) + (1/5) cos(5πx/2) − (1/7) cos(7πx/2) + ⋯) (8.1.1)

the convergence cannot be uniform on (−∞, ∞), as the function f is not continuous. This never happens with a power series, since they converge to continuous functions whenever they converge. We will also see that uniform convergence is what allows us to integrate and differentiate a power series term by term.

This page titled 8.1: Uniform Convergence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene
Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

8.2: Uniform Convergence- Integrals and Derivatives
 Learning Objectives
Explain the convergence of integrals and derivatives
Cauchy sequences

We saw in the previous section that if (f_n) is a sequence of continuous functions which converges uniformly to f on an interval, then f must be continuous on the interval as well. This was not necessarily true if the convergence was only pointwise, as we saw a sequence of continuous functions defined on (−∞, ∞) converging pointwise to a Fourier series that was not continuous on the real line. Uniform convergence guarantees some other nice properties as well.

 Theorem 8.2.1

Suppose f_n and f are integrable and f_n → f uniformly on [a, b]. Then

lim_{n→∞} ∫_{x=a}^b f_n(x) dx = ∫_{x=a}^b f(x) dx (8.2.1)

 Exercise 8.2.1
Prove Theorem 8.2.1.

Hint
For ε > 0, we need to make |f_n(x) − f(x)| < ε/(b − a), for all x ∈ [a, b].

Notice that this theorem is not true if the convergence is only pointwise, as illustrated by the following.

 Exercise 8.2.2

Consider the sequence of functions (f_n) given by

f_n(x) = n if x ∈ (0, 1/n), and f_n(x) = 0 otherwise. (8.2.2)

a. Show that f_n → 0 pointwise on [0, 1], but lim_{n→∞} ∫_{x=0}^1 f_n(x) dx ≠ ∫_{x=0}^1 0 dx.
b. Can the convergence be uniform? Explain.
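A quick numerical check makes the failure vivid. The Python sketch below is illustrative only (it approximates each integral by a midpoint Riemann sum); it shows the integrals stuck at 1 while the pointwise limit is the zero function:

```python
# Each f_n is n on (0, 1/n) and 0 elsewhere, so its integral over [0, 1] is
# n * (1/n) = 1, even though f_n(x) -> 0 for every fixed x in [0, 1].
def f(n, x):
    return n if 0 < x < 1 / n else 0

def integral(n, m=100_000):
    # midpoint Riemann sum on [0, 1] with m subintervals (an approximation)
    h = 1 / m
    return sum(f(n, (k + 0.5) * h) for k in range(m)) * h
```

Here integral(10), integral(100), and integral(1000) all come out (essentially) as 1, yet for any fixed x > 0 we have f(n, x) = 0 as soon as n > 1/x. The area survives in the limit of the integrals but not in the integral of the limit.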

Applying this result to power series we have the following.

 Corollary 8.2.1

If ∑_{n=0}^∞ a_n x^n converges uniformly¹ to f on an interval containing 0 and x, then

∫_{t=0}^x f(t) dt = ∑_{n=0}^∞ (a_n/(n + 1)) x^{n+1}

 Exercise 8.2.3

Prove Corollary 8.2.1.

Hint
Remember that
∑_{n=0}^∞ f_n(x) = lim_{N→∞} ∑_{n=0}^N f_n(x) (8.2.3)
Surprisingly, the issue of term-by-term differentiation depends not on the uniform convergence of (f_n), but on the uniform convergence of (f_n′). More precisely, we have the following result.

 Theorem 8.2.2

Suppose for every n ∈ N, f_n is differentiable, f_n′ is continuous, f_n → f pointwise, and f_n′ → g uniformly on an interval I. Then f is differentiable and f′ = g on I.

 Exercise 8.2.4

Prove Theorem 8.2.2.

Hint
Let a be an arbitrary fixed point in I and let x ∈ I. By the Fundamental Theorem of Calculus, we have

∫_{t=a}^x f_n′(t) dt = f_n(x) − f_n(a)

Take the limit of both sides and differentiate with respect to x.

As before, applying this to power series gives the following result.

 Corollary 8.2.2

If ∑_{n=0}^∞ a_n x^n converges pointwise to f on an interval containing 0 and x, and ∑_{n=1}^∞ a_n n x^{n−1} converges uniformly on an interval containing 0 and x, then f′(x) = ∑_{n=1}^∞ a_n n x^{n−1}.

 Exercise 8.2.5

Prove Corollary 8.2.2.

The above results say that a power series can be differentiated and integrated term-by-term as long as the convergence is uniform. Fortunately it is, in general, true that when a power series converges, the convergence of it and its integrated and differentiated series is also uniform (almost). However, we do not yet have all of the tools necessary to see this. To build these tools requires that we return briefly to our study, begun in Chapter 4, of the convergence of sequences.

Cauchy Sequences
Knowing that a sequence or a series converges and knowing what it converges to are typically two different matters. For example, we know that ∑_{n=0}^∞ 1/n! and ∑_{n=0}^∞ 1/(n! n!) both converge. The first converges to e, which has meaning in other contexts. We don’t know what the second one converges to, other than to say it converges to ∑_{n=0}^∞ 1/(n! n!). In fact, that question might not have much meaning without some other context in which ∑_{n=0}^∞ 1/(n! n!) arises naturally. Be that as it may, we need to look at the convergence of a series (or a sequence for that matter) without necessarily knowing what it might converge to. We make the following definition.

 Definition 8.2.1: Cauchy Sequence

Let (s_n) be a sequence of real numbers. We say that (s_n) is a Cauchy sequence if for any ε > 0, there exists a real number N such that if m, n > N, then |s_m − s_n| < ε.

Notice that this definition says that the terms in a Cauchy sequence get arbitrarily close to each other and that there is no reference
to getting close to any particular fixed real number. Furthermore, you have already seen lots of examples of Cauchy sequences as
illustrated by the following result.

 Theorem 8.2.3

Suppose (s_n) is a sequence of real numbers which converges to s. Then (s_n) is a Cauchy sequence.

Intuitively, this result makes sense. If the terms in a sequence are getting arbitrarily close to s , then they should be getting
arbitrarily close to each other.2 This is the basis of the proof.

 Exercise 8.2.6

Prove Theorem 8.2.3.

Hint
|s_m − s_n| = |s_m − s + s − s_n| ≤ |s_m − s| + |s − s_n|

So any convergent sequence is automatically Cauchy. For the real number system, the converse is also true and, in fact, is
equivalent to any of our completeness axioms: the NIP, the Bolzano-Weierstrass Theorem, or the LUB Property. Thus, this could
have been taken as our completeness axiom and we could have used it to prove the others. One of the most convenient ways to
prove this converse is to use the Bolzano-Weierstrass Theorem. To do that, we must first show that a Cauchy sequence must be
bounded. This result is reminiscent of the fact that a convergent sequence is bounded (Lemma 4.2.2 of Chapter 4) and the proof is
very similar.

 Lemma 8.2.1: A Cauchy sequence is bounded

Suppose (s_n) is a Cauchy sequence. Then there exists B > 0 such that |s_n| ≤ B for all n.

 Exercise 8.2.7

Prove Lemma 8.2.1

Hint
This is similar to Exercise 4.2.4 of Chapter 4. There exists N such that if m, n > N then |s_n − s_m| < 1. Choose a fixed m > N and let B = max(|s_1|, |s_2|, . . . , |s_{⌈N⌉}|, |s_m| + 1).

 Theorem 8.2.4: Cauchy sequences converge

Suppose (s_n) is a Cauchy sequence of real numbers. Then there exists a real number s such that lim_{n→∞} s_n = s.

Sketch of Proof
We know that (s_n) is bounded, so by the Bolzano-Weierstrass Theorem, it has a convergent subsequence (s_{n_k}) converging to some real number s. We have

|s_n − s| = |s_n − s_{n_k} + s_{n_k} − s| ≤ |s_n − s_{n_k}| + |s_{n_k} − s|.

If we choose n and n_k large enough, we should be able to make each term arbitrarily small.

 Exercise 8.2.8

Provide a formal proof of Theorem 8.2.4.

From Theorem 8.2.4 we see that every Cauchy sequence converges in R. Moreover, the proof of this fact depends on the Bolzano-Weierstrass Theorem which, as we have seen, is equivalent to our completeness axiom, the Nested Interval Property. What this means is that if there is a Cauchy sequence which does not converge then the NIP is not true. A natural question to ask is: if every Cauchy sequence converges, does the NIP follow? That is, is the convergence of Cauchy sequences also equivalent to our completeness axiom? The following theorem shows that the answer is yes.

8.2.3 https://math.libretexts.org/@go/page/7973
 Theorem 8.2.5
Suppose every Cauchy sequence converges. Then the Nested Interval Property is true.

 Exercise 8.2.9

Prove Theorem 8.2.5.

Hint
If we start with two sequences (x_n) and (y_n), satisfying all of the conditions of the NIP, you should be able to show that these are both Cauchy sequences.

Exercises 8.2.8 and 8.2.9 tell us that the following are equivalent: the Nested Interval Property, the Bolzano-Weierstrass Theorem, the Least Upper Bound Property, and the convergence of Cauchy sequences. Thus any one of these could have been taken as the completeness axiom of the real number system and then used to prove each of the others as a theorem, according to the following dependency graph:

Figure 8.2.1 : Dependency graph.


Since we can get from any node on the graph to any other, simply by following the implications (indicated with arrows), any one of
these statements is logically equivalent to each of the others.

 Exercise 8.2.10

Since the convergence of Cauchy sequences can be taken as the completeness axiom for the real number system, it does not
hold for the rational number system. Give an example of a Cauchy sequence of rational numbers which does not converge to a
rational number.

If we apply the above ideas to series we obtain the following important result, which will provide the basis for our investigation of
power series.

 Theorem 8.2.6: Cauchy Criterion

The series ∑_{k=0}^∞ a_k converges if and only if ∀ε > 0, ∃N such that if m > n > N then |∑_{k=n+1}^m a_k| < ε.
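As an informal illustration of the criterion (a numerical check, not a proof; the helper name is ours), here are the blocks a_{n+1} + ⋯ + a_m for the convergent series ∑ 1/k!; they become uniformly small once n is large, exactly as the criterion demands:

```python
import math

# For the convergent series sum 1/k!, the Cauchy blocks a_{n+1} + ... + a_m
# can be made uniformly small by taking n large, no matter how big m is.
def block(n, m):
    return sum(1 / math.factorial(k) for k in range(n + 1, m + 1))
```

For instance, block(10, m) stays below 1/10! for every m we try, while block(0, m) tends to e − 1; the criterion says that the uniform smallness of these tail blocks is exactly what convergence amounts to.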

 Exercise 8.2.11

Prove the Cauchy criterion.

At this point several of the tests for convergence that you probably learned in calculus are easily proved. For example:

 Exercise 8.2.12: The nth Term Test

Show that if ∑_{n=1}^∞ a_n converges then lim_{n→∞} a_n = 0.

 Exercise 8.2.13: The Strong Cauchy Criterion

Show that ∑_{k=1}^∞ a_k converges if and only if lim_{n→∞} ∑_{k=n+1}^∞ a_k = 0.

Hint
The hardest part of this problem is recognizing that it is really about the limit of a sequence as in Chapter 4.

You may also recall the Comparison Test from studying series in calculus: suppose 0 ≤ a_n ≤ b_n; if ∑ b_n converges then ∑ a_n converges. This result follows from the fact that the partial sums of ∑ a_n form an increasing sequence which is bounded above by ∑ b_n. (See Corollary 7.4.1 of Chapter 7.) The Cauchy Criterion allows us to extend this to the case where the terms a_n could be negative as well. This can be seen in the following theorem.

 Theorem 8.2.7: Comparison Test

Suppose |a_n| ≤ b_n for all n. If ∑ b_n converges then ∑ a_n also converges.

 Exercise 8.2.14

Prove Theorem 8.2.7.


Hint
Use the Cauchy Criterion with the fact that |∑_{k=n+1}^m a_k| ≤ ∑_{k=n+1}^m |a_k|.

The following definition is of marked importance in the study of series.

 Definition 8.2.2: Absolute Convergence

Given a series ∑ a_n, the series ∑ |a_n| is called the absolute series of ∑ a_n, and if ∑ |a_n| converges then we say that ∑ a_n converges absolutely.

The significance of this definition comes from the following result.

 Corollary 8.2.3

If ∑ a_n converges absolutely, then ∑ a_n converges.

 Exercise 8.2.15

Show that Corollary 8.2.3 is a direct consequence of Theorem 8.2.7.

 Exercise 8.2.16

If ∑_{n=0}^∞ |a_n| = s, then does it follow that s = |∑_{n=0}^∞ a_n|? Justify your answer. What can be said?

The converse of Corollary 8.2.3 is not true, as evidenced by the series ∑_{n=0}^∞ (−1)^n/(n + 1). As we noted in Chapter 3, this series converges to ln 2. However, its absolute series is the Harmonic Series, which diverges. Any such series which converges, but not absolutely, is said to converge conditionally. Recall also that in Chapter 3, we showed that we could rearrange the terms of the series ∑_{n=0}^∞ (−1)^n/(n + 1) to make it converge to any number we wished. We noted further that all rearrangements of the series ∑_{n=0}^∞ (−1)^n/(n + 1)^2 converged to the same value. The difference between the two series is that the latter converges absolutely whereas the former does not. Specifically, we have the following result.

 Theorem 8.2.8

Suppose ∑ a_n converges absolutely and let s = ∑_{n=0}^∞ a_n. Then any rearrangement of ∑ a_n must converge to s.

Sketch of Proof
We will first show that this result is true in the case where a_n ≥ 0. If ∑ b_n represents a rearrangement of ∑ a_n, then notice that the sequence of partial sums (∑_{k=0}^n b_k)_{n=0}^∞ is an increasing sequence which is bounded by s. By Corollary 7.4.1 of Chapter 7, this sequence must converge to some number t and t ≤ s. Furthermore, ∑ a_n is also a rearrangement of ∑ b_n. Thus the result holds for this special case. (Why?) For the general case, notice that

a_n = (|a_n| + a_n)/2 − (|a_n| − a_n)/2

and that ∑ (|a_n| + a_n)/2 and ∑ (|a_n| − a_n)/2 are both convergent series with nonnegative terms. By the special case,

∑ (|b_n| + b_n)/2 = ∑ (|a_n| + a_n)/2 and ∑ (|b_n| − b_n)/2 = ∑ (|a_n| − a_n)/2

 Exercise 8.2.17

Fill in the details and provide a formal proof of Theorem 8.2.8.
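Theorem 8.2.8 fails without absolute convergence, and a small computation makes this concrete. The Python sketch below (a numerical illustration under floating-point arithmetic, not a proof; the function names are ours) sums the alternating harmonic series in its usual order and in the rearranged order "two positive terms, then one negative"; the partial sums settle near different values, ln 2 versus (3/2) ln 2:

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2, but only
# conditionally. Taking two positive terms for every negative one uses exactly the
# same terms, once each, yet the partial sums drift toward (3/2) ln 2 instead.
def usual_order(terms):
    return sum((-1) ** k / (k + 1) for k in range(terms))

def two_pos_one_neg(blocks):
    total = 0.0
    for j in range(blocks):
        # block j: positives 1/(4j+1) and 1/(4j+3), then the negative 1/(2j+2)
        total += 1 / (4 * j + 1) + 1 / (4 * j + 3) - 1 / (2 * j + 2)
    return total
```

With, say, usual_order(200_000) and two_pos_one_neg(100_000), the two sums differ by about 0.35, which is why the theorem must insist on absolute convergence before promising that every rearrangement gives the same s.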

References
1
Notice that we must explicitly assume uniform convergence. This is because we have not yet proved that power series actually do
converge uniformly.
2 But the converse isn’t nearly as clear. In fact, it isn’t true in the rational numbers.

This page titled 8.2: Uniform Convergence- Integrals and Derivatives is shared under a CC BY-NC-SA 4.0 license and was authored, remixed,
and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

8.3: Radius of Convergence of a Power Series
 Learning Objectives
Explain the radius of convergence of a power series

We’ve developed enough machinery to look at the convergence of power series. The fundamental result is the following theorem
due to Abel.

 Theorem 8.3.1

Suppose ∑_{n=0}^∞ a_n c^n converges for some nonzero real number c. Then ∑_{n=0}^∞ a_n x^n converges absolutely for all x such that |x| < |c|.

Sketch of Proof
To prove Theorem 8.3.1, first note that since ∑ a_n c^n converges, its terms must go to zero: lim_{n→∞} a_n c^n = 0. Thus (a_n c^n) is a bounded sequence. Let B be a bound: |a_n c^n| ≤ B. Then

|a_n x^n| = |a_n c^n| ⋅ |x/c|^n ≤ B |x/c|^n   (8.3.1)

We can now use the comparison test.
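The estimate above can be checked numerically. The following is an added sketch with the assumed choice a_n = n: taking c = 0.9, the sequence (a_n c^n) is bounded by some B, and |a_n x^n| is then dominated by the geometric terms B(|x|/c)^n.

```python
# Numerical check of the key estimate in Theorem 8.3.1 for a_n = n:
# sum(n * c^n) converges at c = 0.9, so (n * c^n) is bounded by some B,
# and then |n * x^n| <= B * (|x|/c)^n for |x| < c.

c, x = 0.9, 0.5
terms_at_c = [n * c ** n for n in range(1, 200)]
B = max(terms_at_c)  # a bound for the sequence (a_n c^n)

ok = all(abs(n * x ** n) <= B * (abs(x) / c) ** n for n in range(1, 200))
print(B, ok)
```

The geometric bound on the right is exactly what lets the comparison test finish the proof.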

 Exercise 8.3.1
Prove Theorem 8.3.1.

 Corollary 8.3.1
Suppose ∑_{n=0}^∞ a_n c^n diverges for some real number c. Then ∑_{n=0}^∞ a_n x^n diverges for all x such that |x| > |c|.

 Exercise 8.3.2

Prove Corollary 8.3.1.

As a result of Theorem 8.3.1 and Corollary 8.3.1, we have the following: either ∑_{n=0}^∞ a_n x^n converges absolutely for all x, or there exists some nonnegative real number r such that ∑_{n=0}^∞ a_n x^n converges absolutely when |x| < r and diverges when |x| > r. In the latter case, we call r the radius of convergence of the power series ∑_{n=0}^∞ a_n x^n. In the former case, we say that the radius of convergence of ∑_{n=0}^∞ a_n x^n is ∞. Though we can say that ∑_{n=0}^∞ a_n x^n converges absolutely when |x| < r, we cannot say that the convergence is uniform. However, we can come close. We can show that the convergence is uniform for |x| ≤ b < r. To see this we will use the following result.

 Theorem 8.3.2: The Weierstrass-M Test


Let (f_n)_{n=1}^∞ be a sequence of functions defined on S ⊆ R and suppose that (M_n)_{n=1}^∞ is a sequence of nonnegative real numbers such that

|f_n(x)| ≤ M_n, ∀x ∈ S, n = 1, 2, 3, ...   (8.3.2)

If ∑_{n=1}^∞ M_n converges, then ∑_{n=1}^∞ f_n(x) converges uniformly on S to some function (which we will denote by f(x)).
n=1 n=1

Sketch of Proof
Since the crucial feature of the theorem is the function f(x) that our series converges to, our plan of attack is to first define f(x) and then show that our series, ∑_{n=1}^∞ f_n(x), converges to it uniformly.

First observe that for any x ∈ S, ∑_{n=1}^∞ f_n(x) converges by the Comparison Test (in fact it converges absolutely) to some number we will denote by f(x). This actually defines the function f(x) for all x ∈ S. It follows that ∑_{n=1}^∞ f_n(x) converges pointwise to f(x).

Next, let ε > 0 be given. Notice that since ∑_{n=1}^∞ M_n converges, say to M, then there is a real number, N, such that if n > N, then

∑_{k=n+1}^∞ M_k = |∑_{k=n+1}^∞ M_k| = |M − ∑_{k=1}^n M_k| < ε   (8.3.3)

You should be able to use this to show that if n > N, then

|f(x) − ∑_{k=1}^n f_k(x)| < ε, ∀x ∈ S   (8.3.4)

 Exercise 8.3.3
Use the ideas above to provide a formal proof of Theorem 8.3.2.
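A small numerical illustration of Theorem 8.3.2 may help (an added sketch, with f_n(x) = x^n/n² on S = [−1, 1] and M_n = 1/n² as assumed choices): the sup over x of the tail of the series is bounded by the tail of ∑ M_n, which is exactly the uniform estimate used in the proof.

```python
# Weierstrass M-test illustration: f_n(x) = x^n / n^2 on S = [-1, 1],
# with M_n = 1/n^2.  The sup over x of the tail of sum(f_n) is bounded
# by the tail of sum(M_n), uniformly in x.

N = 50  # truncation point
xs = [k / 100 for k in range(-100, 101)]  # sample points in [-1, 1]

def tail(x, N, terms=2000):
    # approximate the tail sum_{n > N} x^n / n^2 with many terms
    return sum(x ** n / n ** 2 for n in range(N + 1, N + 1 + terms))

sup_tail = max(abs(tail(x, N)) for x in xs)
M_tail = sum(1 / n ** 2 for n in range(N + 1, 100001))

print(sup_tail <= M_tail)  # True: the uniform bound holds
```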

 Exercise 8.3.4
a. Show that the Fourier series
∑_{k=0}^∞ ((−1)^k/(2k+1)²) sin((2k+1)πx)   (8.3.5)

converges uniformly on R.
b. Does its differentiated series converge uniformly on R? Explain.

 Exercise 8.3.5

Observe that for all x ∈ [−1, 1], |x| ≤ 1. Identify which of the following series converge pointwise and which converge uniformly on the interval [−1, 1]. In every case identify the limit function.

a. ∑_{n=1}^∞ (x^n − x^{n−1})

b. ∑_{n=1}^∞ (x^n − x^{n−1})/n

c. ∑_{n=1}^∞ (x^n − x^{n−1})/n²

Using the Weierstrass-M test, we can prove the following result.

 Theorem 8.3.3

Suppose ∑_{n=0}^∞ a_n x^n has radius of convergence r (where r could be ∞ as well). Let b be any nonnegative real number with b < r. Then ∑_{n=0}^∞ a_n x^n converges uniformly on [−b, b].

 Exercise 8.3.6

Prove Theorem 8.3.3.

Hint

We know that ∑_{n=0}^∞ |a_n b^n| converges. This should be all set for the Weierstrass-M test.

To finish the story on differentiating and integrating power series, all we need to do is show that the power series, its integrated series, and its differentiated series all have the same radius of convergence. You might not realize it, but we already know that the integrated series has a radius of convergence at least as big as the radius of convergence of the original series. Specifically, suppose f(x) = ∑_{n=0}^∞ a_n x^n has a radius of convergence r and let |x| < r. We know that ∑_{n=0}^∞ a_n x^n converges uniformly on an interval containing 0 and x, and so by Corollary 8.2.2,

∫_{t=0}^x f(t) dt = ∑_{n=0}^∞ (a_n/(n+1)) x^{n+1}

In other words, the integrated series converges for any x with |x| < r. This says that the radius of convergence of the integrated series must be at least r.

To show that the radii of convergence are the same, all we need to show is that the radius of convergence of the differentiated series is at least as big as r as well. Indeed, since the differentiated series of the integrated series is the original, then this would say that the original series and the integrated series have the same radii of convergence. Putting the differentiated series into the role of the original series, the original series is now the integrated series and so these would have the same radii of convergence as well. With this in mind, we want to show that if |x| < r, then ∑ a_n n x^{n−1} converges. The strategy is to mimic what we did in Theorem 8.3.1, where we essentially compared our series with a converging geometric series. Only this time we need to start with the differentiated geometric series.

 Exercise 8.3.7

Show that ∑_{n=1}^∞ n x^{n−1} converges for |x| < 1.

Hint

We know that ∑_{k=0}^n x^k = (x^{n+1} − 1)/(x − 1). Differentiate both sides and take the limit as n approaches infinity.
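Following the hint, differentiating the finite geometric sum and letting n → ∞ suggests ∑_{n=1}^∞ n x^{n−1} = 1/(1−x)² for |x| < 1. This closed form can be sanity-checked numerically (an added sketch; the sample points are arbitrary):

```python
# Numerical check that sum_{n=1}^inf n * x^(n-1) = 1/(1-x)^2 for |x| < 1.
for x in (-0.8, -0.3, 0.0, 0.5, 0.9):
    partial = sum(n * x ** (n - 1) for n in range(1, 2000))
    closed = 1 / (1 - x) ** 2
    assert abs(partial - closed) < 1e-6, (x, partial, closed)
print("ok")
```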

 Theorem 8.3.4
Suppose ∑_{n=0}^∞ a_n x^n has a radius of convergence r and let |x| < r. Then ∑_{n=1}^∞ a_n n x^{n−1} converges.

 Exercise 8.3.8

Prove Theorem 8.3.4.

Hint

Let b be a number with |x| < b < r and consider |a_n n x^{n−1}| = |a_n b^n ⋅ (1/b) ⋅ n (x/b)^{n−1}|. You should be able to use the Comparison Test and Exercise 8.3.7.

8.4: Boundary Issues and Abel’s Theorem
 Learning Objectives
Explain Abel's theorem

Summarizing our results, we see that any power series ∑ a_n x^n has a radius of convergence r such that ∑ a_n x^n converges absolutely when |x| < r and diverges when |x| > r. Furthermore, the convergence is uniform on any closed interval [−b, b] ⊂ (−r, r), which tells us that whatever the power series converges to must be a continuous function on (−r, r). Lastly, if

f(x) = ∑_{n=0}^∞ a_n x^n   (8.4.1)

for x ∈ (−r, r), then

f′(x) = ∑_{n=1}^∞ a_n n x^{n−1}   (8.4.2)

for x ∈ (−r, r) and

∫_{t=0}^x f(t) dt = ∑_{n=0}^∞ a_n x^{n+1}/(n+1)   (8.4.3)

for x ∈ (−r, r).
Thus power series are very well behaved within their interval of convergence, and our cavalier approach from Chapter 2 is justified, EXCEPT for one issue. If you go back to Exercise Q1 of Chapter 2, you see that we used the geometric series to obtain the series

arctan x = ∑_{n=0}^∞ ((−1)^n/(2n+1)) x^{2n+1}   (8.4.4)

We substituted x = 1 into this to obtain π/4 = ∑_{n=0}^∞ (−1)^n/(2n+1). Unfortunately, our integration was only guaranteed on a closed subinterval of the interval (−1, 1) where the convergence was uniform, and we substituted in x = 1. We “danced on the boundary” in other places as well, including when we said that

π/4 = ∫_{x=0}^1 √(1 − x²) dx = 1 + ∑_{n=1}^∞ (∏_{j=0}^{n−1} (1/2 − j) / n!) ((−1)^n/(2n+1))   (8.4.5)
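The boundary value π/4 = ∑ (−1)^n/(2n+1) can at least be checked numerically (an added sketch, using the alternating-series error bound, which applies since the terms 1/(2n+1) decrease to zero):

```python
import math

# Partial sums of the Leibniz series sum((-1)^n / (2n+1)) approach pi/4.
# For an alternating series with decreasing terms, the error after N terms
# is at most the first omitted term, 1/(2N+1).
N = 100000
partial = sum((-1) ** n / (2 * n + 1) for n in range(N))
error = abs(partial - math.pi / 4)
print(error < 1 / (2 * N + 1))  # True
```

The experiment supports the value, but it is Abel's Theorem below that justifies the substitution x = 1 rigorously.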

The fact is that for a power series ∑ a_n x^n with radius of convergence r, we know what happens for x with |x| < r and for x with |x| > r. We never talked about what happens for x with |x| = r. That is because there is no systematic approach to this boundary problem. For example, consider the three series

∑_{n=0}^∞ x^n,   ∑_{n=0}^∞ x^{n+1}/(n+1),   ∑_{n=0}^∞ x^{n+2}/((n+1)(n+2))   (8.4.6)

They are all related in that we started with the geometric series and integrated twice, thus they all have radius of convergence equal to 1. Their behavior on the boundary, i.e., when x = ±1, is another story. The first series diverges when x = ±1; the third series converges when x = ±1. The second series converges when x = −1 and diverges when x = 1.
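The claimed boundary behavior can be probed directly (an added numerical sketch; the telescoping identity 1/((n+1)(n+2)) = 1/(n+1) − 1/(n+2) explains the third series):

```python
# Boundary behavior of the twice-integrated geometric series at x = 1.

# Third series: sum 1/((n+1)(n+2)) telescopes, since
# 1/((n+1)(n+2)) = 1/(n+1) - 1/(n+2), so the partial sums approach 1.
third_partial = sum(1 / ((n + 1) * (n + 2)) for n in range(100000))
print(abs(third_partial - (1 - 1 / 100001)) < 1e-9)  # True

# Second series at x = 1 is the harmonic series sum 1/(n+1): its partial
# sums keep growing (roughly like log n), so it diverges.
second_partial = sum(1 / (n + 1) for n in range(100000))
print(second_partial > 10)  # True, and the partial sums grow without bound
```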
Even with the unpredictability of a power series at the endpoints of its interval of convergence, the Weierstrass-M test does give us
some hope of uniform convergence.

 Exercise 8.4.1: Weierstrass-M test

Suppose the power series ∑ a_n x^n has radius of convergence r and the series ∑ a_n r^n converges absolutely. Then ∑ a_n x^n converges uniformly on [−r, r].

Hint
For |x| ≤ r, |a_n x^n| ≤ |a_n r^n|.

Unfortunately, this result doesn’t apply to the integrals we mentioned as the convergence at the endpoints is not absolute.
Nonetheless, the integrations we performed in Chapter 2 are still legitimate. This is due to the following theorem by Abel which
extends uniform convergence to the endpoints of the interval of convergence even if the convergence at an endpoint is only
conditional. Abel did not use the term uniform convergence, as it hadn’t been defined yet, but the ideas involved are his.

 Theorem 8.4.1: Abel’s Theorem

Suppose the power series ∑ a_n x^n has radius of convergence r and the series ∑ a_n r^n converges. Then ∑ a_n x^n converges uniformly on [0, r].

The proof of this is not intuitive, but involves a clever technique known as Abel’s Partial Summation Formula.

 Lemma 8.4.1: Abel’s Partial Summation Formula

Let a_1, a_2, ..., a_n, b_1, b_2, ..., b_n be real numbers and let A_m = ∑_{k=1}^m a_k. Then

a_1 b_1 + a_2 b_2 + ⋯ + a_n b_n = ∑_{j=1}^{n−1} A_j (b_j − b_{j+1}) + A_n b_n   (8.4.7)

 Exercise 8.4.2

Prove Lemma 8.4.1.

Hint
For j > 1, a_j = A_j − A_{j−1}.
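Abel's Partial Summation Formula is a discrete analogue of integration by parts, and it can be sanity-checked on arbitrary data (an added sketch; the sample values below are arbitrary):

```python
# Check Abel's partial summation formula on sample data:
# sum(a_j b_j, j=1..n) = sum(A_j (b_j - b_{j+1}), j=1..n-1) + A_n b_n,
# where A_m = a_1 + ... + a_m.
a = [3.0, -1.0, 4.0, -1.5, 2.0]
b = [5.0, 4.0, 4.0, 2.5, 1.0]
n = len(a)

A = []
s = 0.0
for x in a:
    s += x
    A.append(s)  # A[m-1] = a_1 + ... + a_m

lhs = sum(a[j] * b[j] for j in range(n))
rhs = sum(A[j] * (b[j] - b[j + 1]) for j in range(n - 1)) + A[-1] * b[-1]
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds
```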

 Lemma 8.4.2: Abel’s Lemma

Let a_1, a_2, ..., a_n, b_1, b_2, ..., b_n be real numbers with b_1 ≥ b_2 ≥ ⋯ ≥ b_n ≥ 0 and let A_m = ∑_{k=1}^m a_k. Suppose |A_m| ≤ B for all m. Then |∑_{j=1}^n a_j b_j| ≤ B ⋅ b_1.

 Exercise 8.4.3

Prove Lemma 8.4.2.

 Exercise 8.4.4
Prove Theorem 8.4.1.

Hint
Let ε > 0. Since ∑_{n=0}^∞ a_n r^n converges, then by the Cauchy Criterion there exists N such that if m > n > N, then |∑_{k=n+1}^m a_k r^k| < ε/2. Let 0 ≤ x ≤ r. By Lemma 8.4.2,

|∑_{k=n+1}^m a_k x^k| = |∑_{k=n+1}^m a_k r^k (x/r)^k| ≤ (ε/2)(x/r)^{n+1} ≤ ε/2

Thus for 0 ≤ x ≤ r, n > N,

|∑_{k=n+1}^∞ a_k x^k| = lim_{m→∞} |∑_{k=n+1}^m a_k x^k| ≤ ε/2 < ε

 Corollary 8.4.1

Suppose the power series ∑ a_n x^n has radius of convergence r and the series ∑ a_n (−r)^n converges. Then ∑ a_n x^n converges uniformly on [−r, 0].

 Exercise 8.4.5

Prove Corollary 8.4.1.

Hint
Consider ∑ a_n (−x)^n.

CHAPTER OVERVIEW

9: Back to the Real Numbers


9.1: Trigonometric Series
9.2: Infinite Sets
9.3: Cantor’s Theorem and Its Consequences

Thumbnail: Georg Cantor, German mathematician and philosopher of mixed Jewish-Danish-Russian heritage, the creator of set
theory. (public domain).

9.1: Trigonometric Series
 Learning Objectives
Explain trigonometric series

As we have seen, power series are very well behaved when they converge, while Fourier (trigonometric) series need not be.
The fact that trigonometric series were so interesting made them a lightning rod for mathematical study in the late nineteenth
century.
For example, consider the question of uniqueness. We saw in Chapter 5 that if a function could be represented by a power series,
then that series must be the Taylor series. More precisely, if
f(x) = ∑_{n=0}^∞ a_n (x − a)^n, then a_n = f^{(n)}(a)/n!   (9.1.1)

But what can be said about the uniqueness of a trigonometric series? If we can represent a function f as a general trigonometric
series

f(x) = ∑_{n=0}^∞ (a_n cos nπx + b_n sin nπx)   (9.1.2)

then must this be the Fourier series with the coefficients as determined by Fourier?
For example, if ∑_{n=0}^∞ (a_n cos nπx + b_n sin nπx) converges to f uniformly on the interval (0, 1), then because of the uniform convergence, Fourier’s term-by-term integration which we saw earlier is perfectly legitimate and the coefficients are, of necessity, the coefficients he computed. However, we have seen that the convergence of a Fourier series need not be uniform. This does not mean that we cannot integrate term-by-term, but it does say that we can’t be sure that term-by-term integration of a Fourier series will yield the integral of the associated function.
This led to a generalization of the integral by Henri Lebesgue in 1905. Lebesgue’s profound work settled the issue of whether or
not a bounded pointwise converging trigonometric series is the Fourier series of a function, but we will not go in this direction. We
will instead focus on work that Georg Cantor did in the years just prior. Cantor’s work was also profound and had far reaching
implications in modern mathematics. It also leads to some very weird conclusions.1
To begin, let’s suppress the underlying function and suppose we have

∑_{n=0}^∞ (a_n cos nπx + b_n sin nπx) = ∑_{n=0}^∞ (a′_n cos nπx + b′_n sin nπx)   (9.1.3)

We ask: If these two series are equal, must it be true that a_n = a′_n and b_n = b′_n? We can reformulate this uniqueness question as follows: Suppose

∑_{n=0}^∞ ((a_n − a′_n) cos nπx + (b_n − b′_n) sin nπx) = 0   (9.1.4)

If we let c_n = a_n − a′_n and d_n = b_n − b′_n, then the question becomes: If ∑_{n=0}^∞ (c_n cos nπx + d_n sin nπx) = 0, then will c_n = d_n = 0? It certainly seems reasonable to suppose so, but at this point we have enough experience with infinite sums to know that we need to be very careful about relying on the intuition we have honed on finite sums.
The answer to this seemingly basic question leads to some very profound results. In particular, answering this question led the
mathematician Georg Cantor (1845-1918) to study the makeup of the real number system. This in turn opened the door to the
modern view of mathematics of the twentieth century. In particular, Cantor proved the following result in 1871.

 Theorem 9.1.1: Cantor


If the trigonometric series


∑_{n=0}^∞ (c_n cos nπx + d_n sin nπx) = 0   (9.1.5)

“with the exception of certain values of x,” then all of its coefficients vanish.

In his attempts to nail down precisely which “certain values” could be exceptional, Cantor was led to examine the nature of subsets
of real numbers and ultimately to give a precise definition of the concept of infinite sets and to define an arithmetic of “infinite
numbers.” (Actually, he called them transfinite numbers because, by definition, numbers are finite.)
As a first step toward identifying those “certain values,” Cantor proved the following theorem, which we will state but not prove.

 Theorem 9.1.2: Cantor, 1870

If the trigonometric series


∑_{n=0}^∞ (c_n cos nπx + d_n sin nπx) = 0   (9.1.6)

for all x ∈ R, then all of its coefficients vanish.

He then extended this to the following:

 Theorem 9.1.3: Cantor, 1871

If the trigonometric series


∑_{n=0}^∞ (c_n cos nπx + d_n sin nπx) = 0   (9.1.7)

for all but finitely many x ∈ R, then all of its coefficients vanish.

Observe that this is not a trivial generalization. Although the exceptional points are constrained to be finite in number, this number could still be extraordinarily large. That is, even if the series given above differed from zero on 10^{10^{100000}} distinct points in the interval (0, 10^{−10^{100000}}), the coefficients still vanish. This remains true even if at each of these 10^{10^{100000}} points the series converges to 10^{10^{100000}}. This is truly remarkable when you think of it this way.

Figure 9.1.1 : Georg Cantor


At this point Cantor became more interested in these exceptional points than in the Fourier series problem that he’d started with.
The next task he set for himself was to see just how general the set of exceptional points could be. Following Cantor’s lead we
make the following definitions.

 Definition: 9.1.1
Let S ⊆ R and let a be a real number. We say that a is a limit point (or an accumulation point) of S if there is a sequence (a_n) with a_n ∈ S − {a} which converges to a.

 Exercise 9.1.1

Let S ⊆ R and let a be a real number. Prove that a is a limit point of S if and only if for every ε > 0 the intersection

(a − ε, a + ε) ∩ (S − {a}) ≠ ∅   (9.1.8)

The following definition gets to the heart of the matter.

 Definition: 9.1.2

Let S ⊆ R. The set of all limit points of S is called the derived set of S. The derived set is denoted S′.

Don’t confuse the derived set of a set with the derivative of a function. They are completely different objects despite the similarity
of both the language and notation. The only thing that they have in common is that they were somehow “derived” from something
else.

 Exercise 9.1.2

Determine the derived set, S′, of each of the following sets.

a. S = {1, 1/2, 1/3, 1/4, ⋯}
b. S = {0, 1, 1/2, 1/3, 1/4, ⋯}
c. S = (0, 1]
d. S = [0, 1/2) ∪ (1/2, 1]
e. S = Q
f. S = R − Q
g. S = Z
h. Any finite set S.

 Exercise 9.1.3

Let S ⊆ R.

a. Prove that (S′)′ ⊆ S′.
b. Give an example where these two sets are equal.
c. Give an example where these two sets are not equal.

The notion of the derived set forms the foundation of Cantor’s exceptional set of values. Specifically, let S again be a set of real
numbers and consider the following sequence of sets:
S′ ⊇ (S′)′ ⊇ ((S′)′)′ ⊇ ⋯   (9.1.9)

Cantor showed that if, at some point, one of these derived sets is empty, then the uniqueness property still holds. Specifically, we
have:

 Theorem 9.1.4: Cantor, 1871

Let S be a subset of the real numbers with the property that one of its derived sets is empty. Then if the trigonometric series ∑_{n=0}^∞ (c_n cos nπx + d_n sin nπx) is zero for all x ∈ R − S, then all of the coefficients of the series vanish.

References
¹ ‘Weird’ does not mean false. It simply means that some of Cantor’s results can be hard to accept, even after you have seen the proof and verified its validity.

9.2: Infinite Sets
 Learning Objectives
Explain infinite sets

The following theorem follows directly from our previous work with the NIP and will be very handy later. It basically says that a
sequence of nested closed intervals will still have a non-empty intersection even if their lengths do not converge to zero as in the
NIP.

 Theorem 9.2.1

Let ([a_n, b_n])_{n=1}^∞ be a sequence of nested intervals such that lim_{n→∞} |b_n − a_n| > 0. Then there is at least one c ∈ R such that c ∈ [a_n, b_n] for all n ∈ N.

Proof

By Corollary 7.4.1 of Chapter 7, we know that a bounded increasing sequence such as (a_n) converges, say to c. Since a_n ≤ a_m ≤ b_n for m > n and lim_{m→∞} a_m = c, then for any fixed n, a_n ≤ c ≤ b_n. This says c ∈ [a_n, b_n] for all n ∈ N.
n n n n

 Exercise 9.2.1
Suppose lim_{n→∞} |b_n − a_n| > 0. Show that there are at least two points, c and d, such that c ∈ [a_n, b_n] and d ∈ [a_n, b_n] for all n ∈ N.

Our next theorem says that in a certain, very technical sense there are more real numbers than there are counting numbers. This
probably does not seem terribly significant. After all, there are real numbers which are not counting numbers. What will make this
so startling is that the same cannot be said about all sets which strictly contain the counting numbers. We will get into the details of
this after the theorem is proved.

 Theorem 9.2.2: Cantor, 1874


Let S = (s_n)_{n=1}^∞ be a sequence of real numbers. There is a real number c which is not in S.¹

Proof
For the sake of obtaining a contradiction, assume that the sequence S contains every real number; that is, S = R. As usual, we will build a sequence of nested intervals ([x_i, y_i])_{i=1}^∞.

Let x_1 be the smaller of the first two distinct elements of S, let y_1 be the larger, and take [x_1, y_1] to be the first interval. Next we assume that [x_{n−1}, y_{n−1}] has been constructed and build [x_n, y_n] as follows. Observe that there are infinitely many elements of S in (x_{n−1}, y_{n−1}) since S = R. Let s_m and s_k be the first two distinct elements of S such that

s_m, s_k ∈ (x_{n−1}, y_{n−1})   (9.2.1)

Take x_n to be the smaller and y_n to be the larger of s_m and s_k. Then [x_n, y_n] is the nth interval.
From the way we constructed them it is clear that

[x_1, y_1] ⊇ [x_2, y_2] ⊇ [x_3, y_3] ⊇ ⋯   (9.2.2)

Therefore by Theorem 9.2.1 there is a real number, say c, such that

c ∈ [x_n, y_n] for all n ∈ N   (9.2.3)

In fact, since x_1 < x_2 < x_3 < ⋯ < y_3 < y_2 < y_1, it is clear that

x_n < c < y_n, ∀n   (9.2.4)

We will show that c is the number we seek. That the inequalities in the above formula 9.2.4 are strict will play a crucial
role.
To see that c ∉ S, we suppose that c ∈ S and derive a contradiction.

So, suppose that c = s_p for some p ∈ N. Then only {s_1, s_2, ..., s_{p−1}} appear before s_p in the sequence S. Since each x_n is taken from S, it follows that only finitely many elements of the sequence (x_n) appear before s_p = c in the sequence as well.

Let x_l be the last element of (x_n) which appears before c = s_p in the sequence and consider x_{l+1}. The way it was constructed, x_{l+1} was one of the first two distinct terms in the sequence S strictly between x_l and y_l, the other being y_{l+1}. Since x_{l+1} does not appear before c = s_p in the sequence and x_l < c < y_l, it follows that either c = x_{l+1} or c = y_{l+1}. However, this gives us a contradiction, as we know from equation (9.2.4) that x_{l+1} < c < y_{l+1}.

Thus c is not an element of S.

So how does this theorem show that there are “more” real numbers than counting numbers? Before we address that question we
need to be very careful about the meaning of the word ’more’ when we’re talking about infinite sets.
First let’s consider two finite sets, say A = {α, β, γ, δ} and B = {a, b, c, d, e}. How do we know that B is the bigger set? (It obviously is.) Clearly we can just count the number of elements in both A and B. Since |A| = 4 and |B| = 5 and 4 < 5, B is clearly bigger.
But we’re looking for a way to determine the relative size of two sets without counting them because we have no way of counting
the number of elements of an infinite set. Indeed, it isn’t even clear what the phrase “the number of elements” might mean when
applied to the elements of an infinite set.
When we count the number of elements in a finite set what we’re really doing is matching up the elements of the set with a set of
consecutive positive integers, starting at 1. Thus since

1 ↔ α (9.2.5)

2 ↔ β

3 ↔ γ

4 ↔ δ

we see that |A| = 4 . Moreover, the order of the match-up is unimportant. Thus since

2 ↔ e (9.2.6)

3 ↔ a

5 ↔ b

4 ↔ d

1 ↔ c

it is clear that the elements of B and the set {1, 2, 3, 4, 5} can be matched up as well. And it doesn’t matter what order either set is in. They both have 5 elements.
Such a match-up is called a one-to-one correspondence. In general, if two sets can be put in one-to-one correspondence then they
are the same “size.” Of course the word “size” has lots of connotations that will begin to get in the way when we talk about infinite
sets, so instead we will say that the two sets have the same cardinality. Speaking loosely, this just means that they are the same size.
More precisely, if a given set S can be put in one-to-one correspondence with a finite set of consecutive integers, say
{1, 2, 3, . . . , N }, then we say that the cardinality of the set is N . But this just means that both sets have the same cardinality. It is

this notion of one-to-one correspondence, along with the next two definitions, which will allow us to compare the sizes
(cardinalities) of infinite sets.

 Definition 9.2.1

Any set which can be put into one-to-one correspondence with N = {1, 2, 3, . . . } is called a countably infinite set. Any set
which is either finite or countably infinite is said to be countable.

Since N is an infinite set, we have no symbol to designate its cardinality so we have to invent one. The symbol used by Cantor and adopted by mathematicians ever since is ℵ₀.² Thus the cardinality of any countably infinite set is ℵ₀.

We have already given the following definition informally. We include it formally here for later reference.

 Definition: 9.2.2

If two sets can be put into one-to-one correspondence then they are said to have the same cardinality.

With these two definitions in place we can see that Theorem 9.2.2 is nothing less than the statement that the real numbers are not
countably infinite. Since it is certainly not finite, then we say that the set of real numbers is uncountable and therefore “bigger”
than the natural numbers!

To see this let us suppose first that each real number appears in the sequence (s_n)_{n=1}^∞ exactly once. In that case the indexing of our sequence is really just a one-to-one correspondence between the elements of the sequence and N:

sequence is really just a one-to-one correspondence between the elements of the sequence and N:
1 ↔ s1 (9.2.7)

2 ↔ s2

3 ↔ s3

4 ↔ s4

If some real numbers are repeated in our sequence then all of the real numbers are a subset of our sequence and will therefore also
be countable.
In either case, every sequence is countable. But our theorem says that no sequence in R includes all of R . Therefore R is
uncountable.
Most of the sets you have encountered so far in your life have been countable.

 Exercise 9.2.2

Show that each of the following sets is countable.


a. {2, 3, 4, 5, ...} = {n}_{n=2}^∞
b. {0, 1, 2, 3, ...} = {n}_{n=0}^∞
c. {1, 4, 9, 16, ..., n², ...} = {n²}_{n=1}^∞
d. The set of prime numbers.
e. Z

In fact, if we start with a countable set it is rather difficult to use it to build anything but another countable set.

 Exercise 9.2.3

Let {A_i} be a collection of countable sets. Show that each of the following sets is also countable:

a. Any subset of A_1.
b. A_1 ∪ A_2
c. A_1 ∪ A_2 ∪ A_3
d. ⋃_{i=1}^n A_i
e. ⋃_{i=1}^∞ A_i
e. ⋃ A
i=1 i

It seems that no matter what we do the only example of an uncountably infinite set is R. But wait! Remember the rational
numbers? They were similar to the real numbers in many ways. Perhaps they are uncountably infinite too?
Alas, no. The rational numbers turn out to be countable too.

 Theorem 9.2.3

Q is countable.

Sketch of Proof

First explain how you know that all of the non-negative rational numbers are in this list:

0/1, 0/2, 1/1, 0/3, 1/2, 2/1, 0/4, 1/3, 2/2, 3/1, ⋯   (9.2.8)

However there is clearly some duplication. To handle this, apply part (a) of Exercise 9.2.3. Does this complete the proof or
is there more to do?
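The list above runs along the diagonals on which numerator + denominator is constant. One hedged way to generate it (the function name below is our own, not from the text):

```python
from fractions import Fraction

# Enumerate the non-negative rationals along diagonals where
# numerator + denominator is constant, matching the list in the sketch:
# 0/1, 0/2, 1/1, 0/3, 1/2, 2/1, 0/4, 1/3, 2/2, 3/1, ...
def diagonal_list(num_diagonals):
    pairs = []
    for s in range(1, num_diagonals + 1):
        for num in range(s):
            pairs.append((num, s - num))  # denominator s - num >= 1
    return pairs

pairs = diagonal_list(4)
print(pairs)

# Every non-negative rational p/q appears (on diagonal p + q), with
# duplicates such as 0/1 = 0/2 and 2/2 = 1/1 still to be removed.
seen = {Fraction(p, q) for p, q in pairs}
print(Fraction(1, 2) in seen, Fraction(2, 1) in seen)  # True True
```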

 Exercise 9.2.4
Prove Theorem 9.2.3.

The following corollary says that the cardinality of the real numbers is much larger than the cardinality of the rational numbers,
despite the fact that both are infinite.
That is, as a subset of the reals, the rationals can be contained in a sequence of intervals, the sum of whose lengths can be arbitrarily
small. In a sense this says that a countably infinite set is so small (on the transfinite scale) that it is “almost” finite.
Usually we express this idea with the statement, “Q is a set of measure zero in R.” The term “measure” has a precise meaning
which we will not pursue. The following corollary contains the essence of the idea.

 Corollary 9.2.1
Let ε > 0 be given. There is a collection of intervals in R, I_n = [a_n, b_n], such that

Q ⊂ ⋃_{n=1}^∞ I_n   (9.2.9)

and

∑_{n=1}^∞ (b_n − a_n) < ε   (9.2.10)

 Exercise 9.2.5

Prove Corollary 9.2.1.

Hint
If we had only finitely many rationals to deal with, this would be easy. Let {r_1, r_2, ..., r_k} be these rational numbers and take a_n = r_n − ε/(2k) and b_n = r_n + ε/(2k). Then for all n = 1, ..., k, r_n ∈ [a_n, b_n] and

∑_{n=1}^k (b_n − a_n) = ∑_{n=1}^k ε/k = ε   (9.2.11)

The difficulty is, how do we move from the finite to the infinite case?

Notice how this idea hearkens back to the discussion of Leibniz’s approach to the Product Rule. He simply tossed aside the
expression dxdy because it was ‘infinitely small’ compared to either xdy or ydx. Although this isn’t quite the same thing we are
discussing here it is similar and it is clear that Leibniz’s insight and intuition were extremely acute. They were moving him in the
right direction, at least.
All of our efforts to build an uncountable set from a countable one have come to nothing. In fact many sets that at first “feel” like
they should be uncountable are in fact countable. This makes the uncountability of R all the more remarkable.
However if we start with an uncountable set it is relatively easy to build others from it.

 Exercise 9.2.6

a. Let (a, b) and (c, d) be two open intervals of real numbers. Show that these two sets have the same cardinality by constructing a one-to-one onto function between them.
Hint

A linear function should do the trick.

b. Show that any open interval of real numbers has the same cardinality as R .
Hint

Consider the interval (−π/2, π/2) .

c. Show that (0, 1] and (0, 1) have the same cardinality.

Hint

Note that {1, 1/2, 1/3, . . . } and {1/2, 1/3, . . . } have the same cardinality.

d. Show that [0, 1] and (0, 1) have the same cardinality.
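The hints for parts (a) and (b) can be realized concretely (an added sketch; the particular intervals and function names are our own choices): a linear map carries one open interval onto another, and the tangent function carries (−π/2, π/2) onto R.

```python
import math

# (a) A linear bijection from (a, b) onto (c, d).
def linear_map(a, b, c, d):
    return lambda x: c + (d - c) * (x - a) / (b - a)

f = linear_map(0.0, 1.0, -3.0, 5.0)
print(f(0.25))  # 0.25 of the way across: -3 + 8*0.25 = -1.0

# (b) tan maps (-pi/2, pi/2) one-to-one onto R; composing with a linear
# map gives a bijection from any open interval onto R.
g = lambda x: math.tan(x)
print(abs(g(math.atan(7.0)) - 7.0) < 1e-12)  # True: atan is its inverse
```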

References
¹ To streamline things, we are abusing notation here as we are letting S denote both the sequence (which is ordered) and the underlying (unordered) set of entries in the sequence.

² ℵ is the first letter of the Hebrew alphabet and is pronounced “aleph.” ℵ₀ is pronounced “aleph null.”

9.3: Cantor’s Theorem and Its Consequences
 Learning Objectives
Explain Cantor's theorem

Once Cantor showed that there were two types of infinity (countable and uncountable), the following question was natural, “Do all
uncountable sets have the same cardinality?”
Just like not all “non-dogs” are cats, there is, off-hand, no reason to believe that all uncountable sets should be the same size.
However constructing uncountable sets of different sizes is not as easy as it sounds.
For example, what about the line segment represented by the interval [0, 1] and the square represented by the set [0, 1] × [0, 1] = {(x, y) | 0 ≤ x, y ≤ 1}? Certainly the two-dimensional square must be a larger infinite set than the one-dimensional line segment. Remarkably, Cantor showed that these two sets have the same cardinality. In his 1877 correspondence of this result to his friend and fellow mathematician, Richard Dedekind, even Cantor remarked, “I see it, but I don’t believe it!”

Figure 9.3.1 : Richard Dedekind


The following gives the original idea of Cantor’s proof. Cantor devised the following function f : [0, 1] × [0, 1] → [0, 1]. First, we represent the coordinates of any point (x, y) ∈ [0, 1] × [0, 1] by their decimal representations x = 0.a_1a_2a_3... and y = 0.b_1b_2b_3.... Even terminating decimals can be written this way, as we could write 0.5 = 0.5000.... We can then define f(x, y) by

f((0.a_1a_2a_3..., 0.b_1b_2b_3...)) = 0.a_1b_1a_2b_2a_3b_3....   (9.3.1)
This relatively simple idea has some technical difficulties in it related to the following result.

 Exercise 9.3.1

Consider the sequence (0.9, 0.99, 0.999, . . . ). Show that this sequence converges and, in fact, that it converges to 1. This
suggests that 0.999... = 1.
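As a quick numerical illustration (a Python sketch I am adding, not part of the exercise), each term of the sequence is 1 − 10⁻ⁿ, so the gap between the terms and 1 shrinks toward 0:

```python
# Each term of (0.9, 0.99, 0.999, ...) is 1 - 10**(-n),
# so the distance from 1 decreases to 0 as n grows.
terms = [1 - 10 ** (-n) for n in range(1, 8)]
gaps = [1 - t for t in terms]

# The gaps are strictly decreasing, consistent with convergence to 1.
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))
assert abs(terms[-1] - 1) < 1e-6
```

Of course, a finite check like this only suggests the limit; the exercise asks for a proof.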

Similarly, we have 0.04999... = 0.05000... , etc. To make the decimal representation of a real number in [0, 1] unique, we must
make a consistent choice of writing a terminating decimal as one that ends in an infinite string of zeros or an infinite string of nines
[with the one exception 0 = 0.000... ]. No matter which choice we make, we could never make this function onto. For example,
109/1100 = 0.09909090... would have as its pre-image (0.0999..., 0.9000...), which would be a mix of the two conventions.
Cantor was able to overcome this technicality to demonstrate a one-to-one correspondence, but instead we will note that in either
convention, the function is one-to-one, so this says that the set [0, 1] × [0, 1] has the same cardinality as some (uncountable) subset
of R. The fact that this has the same cardinality as R is something we will come back to. But first we’ll try to construct an
uncountable set which does not have the same cardinality as R. To address this issue, Cantor proved the following in 1891.
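The interleaving can be sketched on finite strings of digits (a Python illustration I am adding; Cantor's map acts on infinite decimal expansions, so finite prefixes only suggest the idea, and the function names are mine):

```python
def interleave(x_digits: str, y_digits: str) -> str:
    """Cantor-style interleaving: the digit strings of x and y (after the
    decimal point) are merged as a1 b1 a2 b2 ... per Equation 9.3.1."""
    return "".join(a + b for a, b in zip(x_digits, y_digits))

def split(z_digits: str) -> tuple[str, str]:
    """Un-interleave: even positions come from x, odd positions from y."""
    return z_digits[0::2], z_digits[1::2]

# (x, y) = (0.123..., 0.456...) maps to 0.142536...
assert interleave("123", "456") == "142536"

# The map is one-to-one: we can always recover the pair we started from.
assert split("142536") == ("123", "456")
```

The `split` direction is what makes the map one-to-one; the subtlety discussed above is that, with a fixed convention for repeating 9s, not every decimal in [0, 1] arises as an interleaving, so the map fails to be onto.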

9.3.1 https://math.libretexts.org/@go/page/7966
 Theorem 9.3.1: Cantor’s Theorem

Let S be any set. Then there is no one-to-one correspondence between S and P (S), the set of all subsets of S .

Since S can be put into one-to-one correspondence with a subset of P (S) (via a → {a}), this says that P (S) is at least as large as
S . In the finite case |P (S)| is strictly greater than |S|, as the following problem shows. It also demonstrates why P (S) is called the
power set of S .

 Exercise 9.3.2

Prove: If |S| = n , then |P (S)| = 2ⁿ

Hint
Let S = {a1 , a2 , . . . , an } . Consider the following correspondence between the elements of P (S) and the set T of all n -
tuples of yes (Y ) or no (N ):

{} ↔ (N , N , N , . . . , N ) (9.3.2)

{a1 } ↔ (Y , N , N , . . . , N )

{a2 } ↔ (N , Y , N , . . . , N )

⋮

S ↔ (Y , Y , Y , . . . , Y )

How many elements are in T ?
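The hint's correspondence can be coded directly: build each subset from an n-tuple of Y/N choices (a Python sketch I am adding; the helper name `power_set` is mine):

```python
from itertools import product

def power_set(s):
    """Build P(S) by pairing each element with a yes/no choice,
    exactly as in the hint's correspondence with n-tuples."""
    s = list(s)
    return [
        {x for x, keep in zip(s, choices) if keep == "Y"}
        for choices in product("YN", repeat=len(s))
    ]

S = {"Salviati", "Sagredo", "Simplicio"}
subsets = power_set(S)

# Two choices per element gives 2^n tuples, hence 2^n subsets.
assert len(subsets) == 2 ** len(S)
```

Each of the n coordinates doubles the count of tuples, which is where 2ⁿ comes from.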

 Exercise 9.3.3

Prove Cantor’s Theorem.

Hint
Assume, for contradiction, that there is a one-to-one correspondence f : S → P (S) . Consider A = {x ∈ S | x ∉ f (x)} .
Since f is onto, there is a ∈ S such that A = f (a) . Is a ∈ A or is a ∉ A ?

Actually it turns out that R and P (N) have the same cardinality. This can be seen in a roundabout way using some of the above
ideas from Exercise 9.3.2. Specifically, let T be the set of all sequences of zeros or ones (you can use Y s or N s, if you prefer).
Then it is straightforward to see that T and P (N) have the same cardinality.
If we consider (0, 1], which has the same cardinality as R, then we can see that this has the same cardinality as T as well.
Specifically, if we think of the numbers in binary, then every real number in [0, 1] can be written as ∑ aj /2^j (summing over
j = 1 to ∞), which we identify with the sequence (a1 , a2 , ⋯), where aj ∈ {0, 1} . We have to account for the fact that binary
representations such as 0.0111... and 0.1000... represent the same real number. If we agree that no representations will end in an
infinite string of zeros, then we can see that [0, 1] has the same cardinality as
T − U , where U is the set of all sequences ending in an infinite string of zeros. It turns out that U itself is a countable set.
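A small sketch (mine, not from the text) of the identification between 0–1 sequences and binary expansions; it evaluates finite partial sums of the series ∑ aj /2^j:

```python
def binary_value(bits):
    """Partial sum of sum_{j>=1} a_j / 2^j for a finite prefix of bits."""
    return sum(a / 2 ** j for j, a in enumerate(bits, start=1))

# 0.1000... (binary) is exactly 1/2.
assert binary_value([1, 0, 0, 0]) == 0.5

# 0.0111... approaches the same number from below -- the two-representations
# issue the text must work around.
approx = binary_value([0] + [1] * 30)
assert abs(approx - 0.5) < 1e-8
```

The near-coincidence of the two values is exactly why distinct sequences such as (0, 1, 1, 1, …) and (1, 0, 0, 0, …) name the same real number.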

 Exercise 9.3.4

Let Un = {(a1 , a2 , a3 , . . . ) | aj ∈ {0, 1} and an+1 = an+2 = ⋅ ⋅ ⋅ = 0} . Show that for each n , Un is finite and use this to
conclude that U is countably infinite.
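Since the tail of a sequence in Un is forced to be zero, a member of Un is determined by its first n entries. A brief Python sketch I am adding (the function name `U_n` is mine) counts those prefixes:

```python
from itertools import product

def U_n(n):
    """Sequences that are zero after position n, identified with their
    first n bits -- the forced zero tail is omitted, so U_n is finite."""
    return list(product([0, 1], repeat=n))

# Each U_n has exactly 2^n elements.
for n in range(6):
    assert len(U_n(n)) == 2 ** n

# U is the union of the U_n: a countable union of finite sets, so listing
# U_0, then U_1, then U_2, ... enumerates all of U.
```

This is the shape of the argument the exercise asks for: finitely many sequences at each level, countably many levels.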

The following two problems show that deleting a countable set from an uncountable set does not change its cardinality.

 Exercise 9.3.5

Let S be an infinite set. Prove that S contains a countably infinite subset.

 Exercise 9.3.6

Suppose X is an uncountable set and Y ⊂X is countably infinite. Prove that X and X − Y have the same cardinality.

Hint
Let Y = Y0 . If X − Y0 is an infinite set, then by the previous problem it contains a countably infinite set Y1 . Likewise, if
X − (Y0 ∪ Y1 ) is infinite it also contains an infinite set Y2 . Again, if X − (Y0 ∪ Y1 ∪ Y2 ) is an infinite set then it contains
an infinite set Y3 , etc. For n = 1, 2, 3, . . . , let fn : Yn−1 → Yn be a one-to-one correspondence and define
f : X → X − Y by

         fn (x), if x ∈ Yn−1 , n = 1, 2, 3, ⋯
f (x) = {                                                  (9.3.3)
         x,      if x ∈ X − (⋃∞n=0 Yn )

Show that f is one-to-one and onto.

The above problems say that R , T − U , T , and P (N ) all have the same cardinality.
As was indicated before, Cantor’s work on infinite sets had a profound impact on mathematics in the beginning of the twentieth
century. For example, in examining the proof of Cantor’s Theorem, the eminent logician Bertrand Russell devised his famous
paradox in 1901. Before this time, a set was naively thought of as just a collection of objects. Through the work of Cantor and
others, sets were becoming a central object of study in mathematics as many mathematical concepts were being reformulated in
terms of sets. The idea was that set theory was to be a unifying theme of mathematics. This paradox set the mathematical world on
its ear.

Russell’s Paradox
Consider the set of all sets which are not elements of themselves. We call this set D and ask, “Is D ∈ D ?” Symbolically, this set is

D = {S | S ∉ S} (9.3.4)

If D ∈ D , then by definition, D ∉ D . If D ∉ D , then by definition, D ∈ D .
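At bottom, the paradox asks for a truth value b satisfying b = not b, which nothing provides; a one-line Python check (my illustration, not from the text):

```python
# Russell's condition "D is in D exactly when D is not in D" demands a
# boolean b with b == (not b) -- and no such boolean exists.
assert not any(b == (not b) for b in (True, False))
```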

If you look back at the proof of Cantor’s Theorem, this was basically the idea that gave us the contradiction. To have such a
contradiction occurring at the most basic level of mathematics was scandalous. It forced a number of mathematicians and logicians
to carefully devise the axioms by which sets could be constructed. To be honest, most mathematicians still approach set theory
from a naive point of view as the sets we typically deal with fall under the category of what we would call “normal sets.” In fact,
such an approach is officially called Naive Set Theory (as opposed to Axiomatic Set Theory). However, attempts to put set theory
and logic on solid footing led to the modern study of symbolic logic and ultimately the design of computer (machine) logic.
Another place where Cantor’s work had a profound influence in modern logic comes from something we alluded to before. We
showed before that the unit square [0, 1] × [0, 1] had the same cardinality as an uncountable subset of R. In fact, Cantor showed
that the unit square had the same cardinality as R itself and was moved to advance the following in 1878.

 Conjecture (The Continuum Hypothesis)

Every uncountable subset of R has the same cardinality as R.

Cantor was unable to prove or disprove this conjecture (along with every other mathematician). In fact, proving or disproving this
conjecture, which was dubbed the Continuum Hypothesis, was one of Hilbert’s famous 23 problems presented as a challenge to
mathematicians at the International Congress of Mathematicians in 1900.
Since R has the same cardinality as P (N ), then the Continuum Hypothesis was generalized to the:

 Conjecture (The Generalized Continuum Hypothesis)

Given an infinite set S , there is no infinite set which has a cardinality strictly between that of S

and its power set P (S).

Efforts to prove or disprove this were in vain and with good reason. In 1940, the logician Kurt Gödel showed that the Continuum
Hypothesis could not be disproved from the Zermelo-Fraenkel Axioms of set theory 1. In 1963, Paul Cohen showed that the
Continuum Hypothesis could not be proved using the Zermelo-Fraenkel Axioms. In other words, the Zermelo-Fraenkel Axioms do
not contain enough information to decide the truth of the hypothesis.
We are willing to bet that at this point your head might be swimming a bit with uncertainty. If so, then know that these are the same
feelings that the mathematical community experienced in the mid twentieth century. In the past, mathematics was seen as a model
of logical certainty. It is disconcerting to find that there are statements that are “undecidable.” In fact, Gödel proved in 1931 that a
consistent finite axiom system that contained the axioms of arithmetic would always contain undecidable statements which could
neither be proved true nor false with those axioms. Mathematical knowledge would always be incomplete.

Figure 9.3.2 : Kurt Gödel


So by trying to put the foundations of calculus on solid ground, we have come to a point where we can never obtain mathematical
certainty. Does this mean that we should throw up our hands and concede defeat? Should we be paralyzed with fear of trying
anything? Certainly not! As we mentioned before, most mathematicians do well by taking a pragmatic approach: using their
mathematics to solve problems that they encounter. In fact, it is typically the problems that motivate the mathematics. It is true that
mathematicians take chances that don’t always pan out, but they still take these chances, often with success. Even when the
successes lead to more questions, as they typically do, tackling those questions usually leads to a deeper understanding. At the very
least, our incomplete understanding means we will always have more questions to answer, more problems to solve.
What else could a mathematician ask for?

References
1
One of the formal axiomatic approaches to set theory established by Ernst Zermelo in 1908 and revised by Abraham Fraenkel in
1921.

This page titled 9.3: Cantor’s Theorem and Its Consequences is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or
curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts
platform; a detailed edit history is available upon request.

CHAPTER OVERVIEW

10: Epilogue to Real Analysis


10.1: On the Nature of Numbers
10.2: Building the Real Numbers

Thumbnail: Real number line with some constants such as π . (Public Domain; User:Phrood).

This page titled 10: Epilogue to Real Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene
Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit
history is available upon request.

10.1: On the Nature of Numbers
 Learning Objectives

On the Nature of Numbers: A Dialogue (with Apologies to Galileo)

Interlocuters: Salviati, Sagredo, and Simplicio; Three Friends of Galileo Galilei


Setting: Three friends meet in a garden for lunch in Renaissance Italy. Prior to their meal they discuss the book How We Got From
There to Here: A Story of Real Analysis. How they obtained a copy is not clear.
Salviati: My good sirs. I have read this very strange volume as I hope you have?
Sagredo: I have and I also found it very strange.
Simplicio: Very strange indeed; at once silly and mystifying.
Salviati: Silly? How so?
Simplicio: These authors begin their tome with the question, “What is a number?” This is an unusually silly question, don’t you
think? Numbers are numbers. Everyone knows what they are.
Sagredo: I thought so as well until I reached the last chapter. But now I am not so certain. What about this quantity ℵ0 ? If this
counts the positive integers, isn’t it a number? If not, then how can it count anything? If so, then what number is it? These
questions plague me ‘til I scarcely believe I know anything anymore.
Simplicio: Of course ℵ0 is not a number! It is simply a new name for the infinite, and infinity is not a number.
Sagredo: But isn’t ℵ0 the cardinality of the set of natural numbers, N, in just the same way that the cardinality of the set
S = {Salviati, Sagredo, Simplicio} is 3? If 3 is a number, then why isn’t ℵ0 ?
Simplicio: Ah, my friend, like our authors you are simply playing with words. You count the elements in the set
S = {Salviati, Sagredo, Simplicio}; you see plainly that the number of elements it contains is 3 and then you change your

language. Rather than saying that the number of elements in S is 3 you say that the cardinality is 3. But clearly “cardinality” and
“number of elements” mean the same thing.
Similarly you use the symbol N to denote the set of positive integers. With your new word and symbol you make the statement “the
cardinality (number of elements) of N is ℵ0 .” This statement has the same grammatical form as the statement “the number of
elements (cardinality) of S is three.” Since three is a number you conclude that ℵ0 is also a number.

But this is simply nonsense dressed up to sound sensible. If we unwind our notation and language, your statement is simply, “The
number of positive integers is infinite.” This is obviously nonsense because infinity is not a number.
Even if we take infinity as an undefined term and try to define it by your statement this is still nonsense since you are using the
word “number” to define a new “number” called infinity. This definition is circular. Thus it is no definition at all. It is nonsense.
Salviati: Your reasoning on this certainly seems sound.
Simplicio: Thank you.
Salviati: However, there are a couple of small points I would like to examine more closely if you will indulge me?
Simplicio: Of course. What troubles you?
Salviati: You’ve said that we cannot use the word “number” to define numbers because this would be circular reasoning. I entirely
agree, but I am not sure this is what our authors are doing.
Consider the set {1, 2, 3}. Do you agree that it contains three elements?
Simplicio: Obviously.
Sagredo: Ah! I see your point! That there are three elements does not depend on what those elements are. Any set with three
elements has three elements regardless of the nature of the elements. Thus saying that the set {1, 2, 3} contains three elements does
not define the word “number” in a circular manner because it is irrelevant that the number 3 is one of the elements of the set. Thus

10.1.1 https://math.libretexts.org/@go/page/7980
to say that three is the cardinality of the set {1, 2, 3} has the same meaning as saying that there are three elements in the set
{Salviati, Sagredo, Simplicio}.

In both cases the number “3” is the name that we give to the totality of the elements of each set.
Salviati: Precisely. In exactly the same way ℵ0 is the symbol we use to denote the totality of the set of positive integers.
Thus ℵ0 is a number in the same sense that ’3’ is a number, is it not?
Simplicio: I see that we can say in a meaningful way that three is the cardinality of any set with . . . well, . . . with three elements (it
becomes very difficult to talk about these things) but this is simply a tautology! It is a way of saying that a set which has three
elements has three elements!
This means only that we have counted them and we had to stop at three. In order to do this we must have numbers first. Which, of
course, we do. As I said, everyone knows what numbers are.
Sagredo: I must confess, my friend, that I become more confused as we speak. I am no longer certain that I really know what a
number is. Since you seem to have retained your certainty can you clear this up for me? Can you tell me what a number is?
Simplicio: Certainly. A number is what we have just been discussing. It is what you have when you stop counting. For example,
three is the totality (to use your phrase) of the elements of the sets {Salviati, Sagredo, Simplicio} or {1, 2, 3} because when I
count the elements in either set I have to stop at three. Nothing less, nothing more. Thus three is a number.
Salviati: But this definition only confuses me! Surely you will allow that fractions are numbers? What is counted when we end
with, say 4/5 or 1/5?
Simplicio: This is simplicity itself. 4/5 is the number we get when we have divided something into 5 equal pieces and we have
counted four of these fifths. This is four-fifths. You see? Even the language we use naturally bends itself to our purpose.
Salviati: But what of one-fifth? In order to count one fifth we must first divide something into fifths. To do this we must know
what one-fifth is, musn’t we? We seem to be using the word “number” to define itself again. Have we not come full circle and
gotten nowhere?
Simplicio: I confess this had not occurred to me before. But your objection is easily answered. To count one-fifth we simply divide
our “something” into tenths. Then we count two of them. Since two-tenths is the same as one-fifth the problem is solved. Do you
see?
Sagredo: I see your point but it will not suffice at all! It merely replaces the question, “What is one-fifth?” with, “What is one-
tenth?” Nor will it do to say that one-tenth is merely two-twentieths. This simply shifts the question back another level.
Archimedes said, “Give me a place to stand and a lever long enough and I will move the earth.” But of course he never moved the
earth because he had nowhere to stand. We seem to find ourselves in Archimedes’ predicament: We have no place to stand.
Simplicio: I confess I don’t see a way to answer this right now. However I’m sure an answer can be found if we only think hard
enough. In the meantime I cannot accept that ℵ0 is a number. It is, as I said before, infinity and infinity is not a number! We may as
well believe in fairies and leprechauns if we call infinity a number.


Sagredo: But again we’ve come full circle. We cannot say definitively that ℵ0 is or is not a number until we can state with
confidence what a number is. And even if we could find solid ground on which to solve the problem of fractions, what of √2? Or
π? Certainly these are numbers but I see no way to count to either of them.

Simplicio: Alas! I am beset by demons! I am bewitched! I no longer believe what I know to be true!
Salviati: Perhaps things are not quite as bad as that. Let us consider further. You said earlier that we all know what numbers are,
and I agree. But perhaps your statement needs to be more precisely formulated. Suppose we say instead that we all know what
numbers need to be? Or that we know what we want numbers to be? Even if we cannot say with certainty what numbers are, surely
we can say what we want and need for them to be. Do you agree?
Sagredo: I do.
Simplicio: And so do I.
Salviati: Then let us invent numbers anew, as if we’ve never seen them before, always keeping in mind those properties we need
for numbers to have. If we take this as a starting point then the question we need to address is, “What do we need numbers to be?”

Sagredo: This is obvious! We need to be able to add them and we need to be able to multiply them together, and the result should
also be a number.
Simplicio: And subtract and divide too, of course.
Sagredo: I am not so sure we actually need these. Could we not define “subtract two from three” to be “add negative two to three”
and thus dispense with subtraction and division?
Simplicio: I suppose we can but I see no advantage in doing so. Why not simply have subtraction and division as we’ve always
known them?
Sagredo: The advantage is parsimony. Two arithmetic operations are easier to keep track of than four. I suggest we go forward with
only addition and multiplication for now. If we find we need subtraction or division we can consider them later.
Simplicio: Agreed. And I now see another advantage. Obviously addition and multiplication must not depend on order. That is, if x
and y are numbers then x + y must be equal to y + x and xy must be equal to yx. This is not true for subtraction, for 3 − 2 does
not equal 2 − 3 . But if we define subtraction as you suggest then this symmetry is preserved:
x + (−y) = (−y) + x (10.1.1)

Sagredo: Excellent! Another property we will require of numbers occurs to me now. When adding or multiplying more than two
numbers it should not matter where we begin. That is, if x, y and z are numbers it should be true that
(x + y) + z = x + (y + z) (10.1.2)

and
(x ⋅ y) ⋅ z = x ⋅ (y ⋅ z) (10.1.3)

Simplicio: Yes! We have it! Any objects which combine in these precise ways can be called numbers.
Salviati: Certainly these properties are necessary, but I don’t think they are yet sufficient to our purpose. For example, the number
1 is unique in that it is the only number which, when it multiplies another number, leaves that number unchanged. For example:
1 ⋅ 3 = 3 . Or, in general, if x is a number then 1 ⋅ x = x .


Sagredo: Yes. Indeed. It occurs to me that the number zero plays a similar role for addition: 0 + x = x .
Salviati: It does not seem to me that addition and multiplication, as we have defined them, force 1 or 0 into existence so I believe
we will have to postulate their existence independently.
Sagredo: Is this everything then? Is this all we require of numbers?
Simplicio: I don’t think we are quite done yet. How shall we get division?
Sagredo: In the same way that we defined subtraction to be the addition of a negative number, can we not define division to be
multiplication by a reciprocal? For example, 3 divided by 2 can be considered 3 multiplied by 1/2, can it not?
Salviati: I think it can. But observe that every number will need to have a corresponding negative so that we can subtract any
amount. And again nothing we’ve discussed so far forces these negative numbers into existence so we will have to postulate their
existence separately.
Simplicio: And in the same way every number will need a reciprocal so that we can divide by any amount.
Sagredo: Every number that is, except zero.
Simplicio: Yes, this is true. Strange is it not, that of them all only this one number needs no reciprocal? Shall we also postulate that
zero has no reciprocal?
Salviati: I don’t see why we should. Possibly ℵ0 is the reciprocal of zero. Or possibly not. But I see no need to concern ourselves
with things we do not need.


Simplicio: Is this everything then? Have we discovered all that we need for numbers to be?
Salviati: I believe there is only one property missing. We have postulated addition and we have postulated multiplication and we
have described the numbers zero and one which play similar roles for addition and multiplication respectively. But we have not
described how addition and multiplication work together. That is, we need a rule of distribution: If x, y and z are all numbers then
x ⋅ (y + z) = x ⋅ y + x ⋅ z . With this in place I believe we have everything we need.

Simplicio: Indeed. We can also see from this that ℵ0 cannot be a number since, in the first place, it cannot be added to another
number and in the second, even if it could be added to a number the result is surely not also a number.
Salviati: My dear Simplicio, I fear you have missed the point entirely! Our axioms do not declare what a number is, only how it
behaves with respect to addition and multiplication with other numbers. Thus it is a mistake to presume that “numbers” are only
those objects that we have always believed them to be. In fact, it now occurs to me that “addition” and “multiplication” also
needn’t be seen as the operations we have always believed them to be.
For example suppose we have three objects, {a, b, c} and suppose that we define “addition” and “multiplication” by the following
tables:

 +  a  b  c          ⋅  a  b  c
 a  a  b  c          a  a  a  a
 b  b  c  a          b  a  b  c     (10.1.4)
 c  c  a  b          c  a  c  b

I submit that our set along with these definitions satisfies all of our axioms and thus a , b and c qualify to be called “numbers.”
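Salviati's claim can be checked mechanically. The sketch below (a Python illustration I am adding, with the tables transcribed as dictionaries) verifies commutativity, associativity, distributivity, the identities, and the inverses for the tables above:

```python
from itertools import product

# Salviati's "+" and "." tables for {a, b, c}, transcribed as dictionaries.
add = {('a','a'):'a', ('a','b'):'b', ('a','c'):'c',
       ('b','a'):'b', ('b','b'):'c', ('b','c'):'a',
       ('c','a'):'c', ('c','b'):'a', ('c','c'):'b'}
mul = {('a','a'):'a', ('a','b'):'a', ('a','c'):'a',
       ('b','a'):'a', ('b','b'):'b', ('b','c'):'c',
       ('c','a'):'a', ('c','b'):'c', ('c','c'):'b'}
elems = 'abc'

for x, y in product(elems, repeat=2):
    assert add[x, y] == add[y, x] and mul[x, y] == mul[y, x]   # commutativity
for x, y, z in product(elems, repeat=3):
    assert add[add[x, y], z] == add[x, add[y, z]]              # associativity
    assert mul[mul[x, y], z] == mul[x, mul[y, z]]
    assert mul[x, add[y, z]] == add[mul[x, y], mul[x, z]]      # distributivity
assert all(add['a', x] == x for x in elems)                    # a acts as 0
assert all(mul['b', x] == x for x in elems)                    # b acts as 1
for x in elems:
    assert any(add[x, y] == 'a' for y in elems)                # additive inverses
    if x != 'a':
        assert any(mul[x, y] == 'b' for y in elems)            # reciprocals
```

Every assertion passes, so the three objects do satisfy the axioms about to be listed.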
Simplicio: This cannot be! There is no zero, no one!
Sagredo: But there is. Do you not see that a plays the role of zero – if you add it to any number you get that number back.
Similarly b plays the role of one.
This is astonishing! If a , b and c can be numbers then I am less sure than ever that I know what numbers are! Why, if we
replace a , b and c with Simplicio, Sagredo, and Salviati, then we become numbers ourselves!
Salviati: Perhaps we will have to be content with knowing how numbers behave rather than knowing what they are.
However I confess that I have a certain affection for the numbers I grew up with. Let us call those the “real” numbers. Any
other set of numbers, such as our {a, b, c} above we will call a field of numbers, since they seem to provide us with new
ground to explore. Or perhaps just a number field?
As we have been discussing this I have been writing down our axioms. They are stated below.
AXIOMS OF NUMBERS
Numbers are any objects which satisfy all of the following properties:
Definition of Operations: They can be combined by two operations, denoted “+” and “\cdot \).”
Closure: If x and y are numbers then x + y is also a number and x ⋅ y is also a number.
Commutativity: x + y = y + x
x⋅y =y⋅x

Associativity: (x + y) + z = x + (y + z)
(x ⋅ y) ⋅ z = x ⋅ (y ⋅ z)

Additive Identity: There is a number, denoted 0, such that for any number, x, x + 0 = x .
Multiplicative Identity: There is a number, denoted 1, such that for any number, x, 1 ⋅ x = x .
Additive Inverse: Given any number, x, there is a number, denoted −x, with the property that x + (−x) = 0 .
Multiplicative Inverse: Given any number, x ≠ 0 , there is a number, denoted x⁻¹ , with the property that x ⋅ x⁻¹ = 1 .
The Distributive Property: If x, y and z are numbers then x ⋅ (y + z) = x ⋅ y + x ⋅ z .
Sagredo: My friend, this is a thing of surpassing beauty! All seems clear to me now. Numbers are any group of objects which
satisfy our axioms. That is, a number is anything that acts like a number.
Salviati: Yes this seems to be true.
Simplicio: But wait! We have not settled the question: Is ℵ0 a number or not?

Salviati: If everything we have just done is valid then ℵ0 could be a number. And so could ℵ1 , ℵ2 , ⋯ if we can find a way to
define addition and multiplication on the set {ℵ0 , ℵ1 , ℵ2 , ⋯} in a manner that agrees with our axioms.

Sagredo: An arithmetic of infinities! This is a very strange idea. Can such a thing be made sensible?
Simplicio: Not, I think, before lunch. Shall we retire to our meal?

 Exercise 10.1.1

Show that 0 ≠ 1 .

Hint
Show that if x ≠ 0 , then 0 ⋅ x ≠ x .

 Exercise 10.1.2

Consider the set of ordered pairs of integers: {(x, y) | x, y ∈ Z}, and define addition and multiplication as follows:
Addition: (a, b) + (c, d) = (ad + bc, bd)
Multiplication: (a, b) ⋅ (c, d) = (ac, bd) .
a. If we add the convention that

(ab, ad) = (b, d) (10.1.5)

show that this set with these operations forms a number field.
b. Which number field is this?
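The pairs in this exercise behave like fractions, with (a, b) playing the role of a/b. Here is a hedged Python sketch I am adding (the names `q_add`, `q_mul`, and the gcd-based reduction are my own way of implementing the convention (ab, ad) = (b, d); it assumes the second coordinate is nonzero, as it must be for reciprocals to exist):

```python
from math import gcd
from fractions import Fraction

def norm(p):
    """Apply the convention (ab, ad) = (b, d): cancel common factors,
    keeping the second coordinate positive. Assumes b != 0."""
    a, b = p
    g = gcd(a, b) or 1
    if b < 0:
        g = -g
    return (a // g, b // g)

def q_add(p, q):
    # (a, b) + (c, d) = (ad + bc, bd), then reduce.
    (a, b), (c, d) = p, q
    return norm((a * d + b * c, b * d))

def q_mul(p, q):
    # (a, b) . (c, d) = (ac, bd), then reduce.
    (a, b), (c, d) = p, q
    return norm((a * c, b * d))

# (1, 2) acts like 1/2: the arithmetic matches Python's Fraction type.
assert q_add((1, 2), (1, 3)) == (5, 6)
assert q_mul((1, 2), (2, 3)) == (1, 3)
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)
```

This comparison is one way to approach part (b): ask which familiar number field has exactly this arithmetic.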

 Exercise 10.1.3
Consider the set of ordered pairs of real numbers, {(x, y)|x, y ∈ R}, and define addition and multiplication as follows:
Addition: (a, b) + (c, d) = (a + c, b + d)
Multiplication: (a, b) ⋅ (c, d) = (ac − bd, ad + bc)
a. Show that this set with these operations forms a number field.
b. Which number field is this?
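The multiplication rule in part (a) can be compared with Python's built-in complex arithmetic (a sketch I am adding; the function names are mine):

```python
def c_add(p, q):
    # (a, b) + (c, d) = (a + c, b + d)
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def c_mul(p, q):
    # (a, b) . (c, d) = (ac - bd, ad + bc): the pair (x, y) behaves as x + yi.
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

assert c_add((2, 3), (4, -1)) == (6, 2)
assert c_mul((0, 1), (0, 1)) == (-1, 0)       # the pair (0, 1) squares to -1

# Compare against Python's complex type for a sample product.
z, w = complex(2, 3), complex(4, -1)
assert complex(*c_mul((2, 3), (4, -1))) == z * w
```

Noticing which pair squares to (−1, 0) answers part (b).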

This page titled 10.1: On the Nature of Numbers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

10.2: Building the Real Numbers
 Learning Objectives

Show why building real numbers is logically necessary

Contrary to the title of this section we will not be rigorously building the real numbers here. Instead our goal is to show why such a
build is logically necessary, and to give a sense of some of the ways this has been accomplished in the past. This may seem odd
given our uniform emphasis on mathematical rigor, especially in the third part of the text, but there are very good reasons for this.
One is simple practicality. The fact is that rigorously building the real numbers and then showing that they have the required
properties is extraordinarily detailed work, even for mathematics. If we want to keep this text to a manageable size (we do), we
simply don’t have the room.
The second reason is that there is, as far as we know, very little for you to gain by it. When we are done we will have the real
numbers. The same real numbers you have been using all of your life. They have the same properties, and quirks, they’ve always
had. To be sure, they will not have lost any of their charm; they will be the same delightful mix of the mundane and the bizarre,
and they are still well worth exploring and getting to know better. But nothing we do in the course of building them up logically
from simpler ideas will help with that exploration.
A reasonable question then, is, “Why bother?” If the process is overwhelmingly, tediously detailed (it is) and gives us nothing new
for our efforts, why do it at all?
Doing mathematics has been compared1 to entering a dark room. At first you are lost. The layout of the room and furniture are
unknown so you fumble about for a bit and slowly get a sense of your immediate environs, perhaps a vague notion of the
organization of the room as a whole. Eventually, after much, often tedious exploration, you become quite comfortable in your
room. But always there will be dark corners; hidden areas you have not yet explored. Such a dark area may hide anything; the
latches to unopened doors you didn’t know were there; a clamp whose presence explains why you couldn’t move that little desk in
the corner; even the light switch that would allow you to illuminate an area more clearly than you would have imagined possible.
But, and this is the point, there is no way to know what you will find there until you walk into that dark corner and begin exploring.
Perhaps nothing. But perhaps something wonderful.

This is what happened in the late nineteenth century. The real numbers had been used since the Pythagoreans learned that √2 was
irrational. But really, most calculations were (and still are) done with just the rational numbers. Moreover, since Q forms a “set of
measure zero,” it is clear that most of the real numbers had gone completely unused. The set of real numbers was thus one of those
“dark corners” of mathematics. It had to be explored.
“But even if that is true,” you might ask, “I have no interest in the logical foundations of the real numbers, especially if such
knowledge won’t tell me anything I don’t already know. Why do I need to know all of the details of constructing R from Q?"
The answer to this is very simple: You don’t.
That’s the other reason we’re not covering all of the details of this material. We will explain enough to light up, dimly perhaps, this
little corner of mathematics. Later, should you need (or want) to come back to this and explore further you will have a foundation
to start with. Nothing more.
Until the nineteenth century the geometry of Euclid, as given in his book The Elements, was universally regarded as the touchstone
of mathematical perfection. This belief was so deeply embedded in Western culture that as recently as 1923, Edna St. Vincent
Millay opened one of the poems in her book The Harp Weaver and Other Poems with the line “Euclid alone has looked on beauty
bare.”
Euclid begins his book by stating 5 simple axioms and proceeds, step by logical step, to build up his geometry. Although far from
actual perfection, his methods are clean, precise and efficient – he arrives at the Pythagorean Theorem in only 47 steps (theorems)
– and even today Euclid’s Elements still sets a very high standard of mathematical exposition and parsimony.
The goal of starting with what is clear and simple and proceeding logically, rigorously, to what is complex is still a guiding
principle of all mathematics for a variety of reasons. In the late nineteenth century, this principle was brought to bear on the real
numbers. That is, some properties of the real numbers that at first seem simple and intuitively clear turn out on closer examination,

10.2.1 https://math.libretexts.org/@go/page/7979
as we have seen, to be rather counter-intuitive. This alone is not really a problem. We can have counter-intuitive properties in our
mathematics – indeed, this is a big part of what makes mathematics interesting – as long as we arrive at them logically, starting
from simple assumptions the same way Euclid did.
Having arrived at a view of the real numbers which is comparable to that of our nineteenth century colleagues, it should now be
clear that the real numbers and their properties must be built up from simpler concepts as suggested by our Italian friends in the
previous section.
In addition to those properties we have discovered so far, both Q and R share another property which will be useful. We have used
it throughout this text but have not heretofore made it explicit. They are both linearly ordered. We will now make this property
explicit.

 Definition 10.2.1

A number field is said to be linearly ordered if there is a relation, denoted “<,” on the elements of the field which satisfies all
of the following for all x, y , and z in the field.
1. For all numbers x and y in the field, exactly one of the following holds:
a. x < y
b. x = y
c. y < x
2. If x < y, then x + z < y + z for all z in the field.
3. If x < y and 0 < z, then x ⋅ z < y ⋅ z.
4. If x < y and y < z, then x < z.

Any number field with such a relation is called a linearly ordered number field and as the following problem shows, not every
number field is linearly ordered.

 Exercise 10.2.1
a. Prove that the following must hold in any linearly ordered number field.
1. 0 < x if and only if −x < 0.
2. If x < y and z < 0, then y ⋅ z < x ⋅ z.
3. For all x ≠ 0, 0 < x².
4. 0 < 1.
b. Show that the set of complex numbers (C) is not a linearly ordered field.

In a thorough, rigorous presentation we would now assume the existence of the natural numbers (N), and their properties and use
these to define the integers, (Z). We would then use the integers to define the rational numbers, (Q). We could then show that the
rationals satisfy the field axioms worked out in the previous section, and that they are linearly ordered.
Then – at last – we would use Q to define the real numbers (R), show that these also satisfy the field axioms and also have the
other properties we expect: Continuity, the Nested Interval Property, the Least Upper Bound Property, the Bolzano-Weierstrass
Theorem, the convergence of all Cauchy sequences, and linear ordering.
We would start with the natural numbers because they seem to be simple enough that we can simply assume their properties. As
Leopold Kronecker (1823-1891) said: “God made the natural numbers, all else is the work of man.”
Unfortunately this is rather a lot to fit into this epilogue so we will have to abbreviate the process rather severely.
We will assume the existence and properties of the rational numbers. Building Q from the integers is not especially hard and it is
easy to show that they satisfy the axioms worked out by Salviati, Sagredo and Simplicio in the previous section. But the level of
detail required for rigor quickly becomes onerous.
Even starting at this fairly advanced position in the chain of logic there is still a considerable level of detail needed to complete the
process. Therefore our exposition will necessarily be incomplete.

Rather than display, in full rigor, how the real numbers can be built up from the rationals we will show, in fairly broad terms, three
ways this has been done in the past. We will give references later in case you’d like to follow up and learn more.

The Decimal Expansion


This is by far the most straightforward method we will examine. Since we begin with Q, we already have some numbers whose
decimal expansion is infinite. For example, 1/3 = 0.333.... We also know that if x ∈ Q then expressing x as a decimal gives either a finite or a repeating infinite decimal.


More simply, we can say that Q consists of the set of all decimal expressions which eventually repeat. (If it eventually repeats zeros
then it is what we’ve called a finite decimal.)
We then define the real numbers to be the set of all infinite decimals, repeating or not.
It may feel as if all we have to do is define addition and multiplication in the obvious fashion and we are finished. This set with
these definitions obviously satisfies all of the field axioms worked out by our Italian friends in the previous section. Moreover it
seems clear that all of our equivalent completeness axioms are satisfied.
However, things are not quite as clear cut as they seem.
The primary difficulty in this approach is that the decimal representation of the real numbers is so familiar that everything we need
to show seems obvious. But stop and think for a moment. Is it really obvious how to define addition and multiplication of infinite
decimals? Consider the addition algorithm we were all taught in grade school. That algorithm requires that we line up two numbers
at their decimal points:
  d_1 d_2 . d_3 d_4
+ δ_1 δ_2 . δ_3 δ_4    (10.2.1)

We then begin adding in the rightmost column and proceed to the left. But if our decimals are infinite we can’t get started because
there is no rightmost column!
A similar problem occurs with multiplication.
So our first problem is to define addition and multiplication in R in a manner that re-captures addition and multiplication in Q.
This is not a trivial task.
One way to proceed is to recognize that the decimal notation we’ve used all of our lives is really shorthand for the sum of an
infinite series. That is, if x = 0.d_1d_2d_3... where 0 ≤ d_i ≤ 9 for all i ∈ N, then

x = ∑_{i=1}^∞ d_i/10^i    (10.2.2)

Addition is now apparently easy to define: If x = ∑_{i=1}^∞ d_i/10^i and y = ∑_{i=1}^∞ δ_i/10^i then

x + y = ∑_{i=1}^∞ e_i/10^i, where e_i = d_i + δ_i    (10.2.3)

But there is a problem. Suppose for some j ∈ N we have e_j = d_j + δ_j ≥ 10. In that case our sum does not satisfy the condition 0 ≤ e_i ≤ 9, so it is not even clear that the expression ∑_{i=1}^∞ e_i/10^i represents a real number. That is, we may not have the closure property of a number field. We will have to define some sort of “carrying” operation to handle this.

 Exercise 10.2.2

Define addition on infinite decimals in a manner that is closed.

Hint
Find an appropriate “carry” operation for our definition.
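To see what is at stake, here is a small Python sketch (ours, not part of the original text) that adds two finite truncations of fractional digit strings, propagating carries from right to left. For genuinely infinite decimals there is no rightmost digit to start from, which is exactly the difficulty the exercise asks you to resolve.

```python
def add_digits(d, e):
    """Add two equal-length lists of fractional digits (0-9) with carrying.

    Returns (carry_into_units, digits).  This only works on finite
    truncations; an infinite decimal has no rightmost digit, which is
    the crux of Exercise 10.2.2.
    """
    carry, out = 0, []
    for a, b in zip(reversed(d), reversed(e)):
        total = a + b + carry
        out.append(total % 10)
        carry = total // 10
    out.reverse()
    return carry, out

# 0.999 + 0.001 = 1.000: the carry propagates all the way out.
print(add_digits([9, 9, 9], [0, 0, 1]))
```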

A similar difficulty arises when we try to define multiplication. Once we have a notion of carrying in place, we could define
multiplication as just the multiplication of series. Specifically, we could define

(0.a_1a_2a_3 ⋯) ⋅ (0.b_1b_2b_3 ⋯) = (a_1/10 + a_2/10^2 + ⋯) ⋅ (b_1/10 + b_2/10^2 + ⋯)
                                  = a_1b_1/10^2 + (a_1b_2 + a_2b_1)/10^3 + (a_1b_3 + a_2b_2 + a_3b_1)/10^4 + ⋯
We could then convert this to a “proper” decimal using our carrying operation.
Again the devil is in the details to show that such algebraic operations satisfy everything we want them to. Even then, we need to
worry about linearly ordering these numbers and our completeness axiom.
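As an illustration (a sketch of ours, not from the original text), the "raw" coefficients of the product series can be computed before any carrying is applied; turning them back into honest digits between 0 and 9 is where the work lies. Here a and b hold the digits a_1, a_2, ... and b_1, b_2, ..., and the nth entry of the result is the coefficient of 10^−(n+2).

```python
def raw_product_coeffs(a, b, terms):
    """Coefficients of 10^-(n+2), n = 0, 1, ..., in the formal product of
    0.a1a2a3... and 0.b1b2b3..., before any carrying is applied.
    Entries may exceed 9, so a carry operation is still needed."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)
                if i < len(a) and n - i < len(b))
            for n in range(terms)]

# (0.333...) * (0.333...): raw coefficients 9, 18, 27, ... which, after
# carrying, give 0.111... = 1/9.
print(raw_product_coeffs([3, 3, 3, 3], [3, 3, 3, 3], 4))
```

Note that even a single-digit product such as 9 × 9 = 81 already overflows the digit range, so carrying is unavoidable.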
Another way of looking at this is to think of an infinite decimal representation as a (Cauchy) sequence of finite decimal
approximations. Since we know how to add and multiply finite decimal representations, we can just add and multiply the individual
terms in the sequences. Of course, there is no reason to restrict ourselves to only these specific types of Cauchy sequences, as we
see in our next approach.

Cauchy Sequences
As we’ve seen, Georg Cantor began his career studying Fourier series and quickly moved on to more foundational matters in the
theory of infinite sets.
But he did not lose his fascination with real analysis when he moved on. Like many mathematicians of his time, he realized the
need to build R from Q. He and his friend and mentor Richard Dedekind (whose approach we will see in the next section) both
found different ways to build R from Q.
Cantor started with Cauchy sequences in Q.
That is, we consider the set of all Cauchy sequences of rational numbers. We would like to define each such sequence to be a real
number. The goal should be clear. If (s_n)_{n=1}^∞ is a sequence in Q which converges to √2 then we will call (s_n)_{n=1}^∞ the real number √2.

This probably seems a bit startling at first. There are a lot of numbers in (s_n) (countably infinitely many, to be precise) and we are proposing putting all of them into a big bag, tying it up in a ribbon, and calling the whole thing √2. It seems a very odd thing to propose, but recall from the discussion in the previous section that we left the concept of “number” undefined. Thus if we can take
any set of objects and define addition and multiplication in such a way that the field axioms are satisfied, then those objects are
legitimately numbers. To show that they are, in fact, the real numbers we will also need the completeness property.
A bag full of rational numbers works as well as anything if we can define addition and multiplication appropriately.
Our immediate problem though is not addition or multiplication but uniqueness. If we take one sequence (s_n) which converges to √2 and define it to be √2, what will we do with all of the other sequences that converge to √2?

Also, we have to be careful not to refer to any real numbers, like the square root of two for example, as we define the real numbers.
This would be a circular – and thus useless – definition. Obviously though, we can refer to rational numbers, since these are the
tools we’ll be using.
The solution is clear. We take all sequences of rational numbers that converge to √2, throw them into our bag and call that √2. Our
bag is getting pretty full now.

But we need to do this without using √2 because it is a real number. The following two definitions satisfy all of our needs.

 Definition 10.2.2
Let x = (s_n)_{n=1}^∞ and y = (σ_n)_{n=1}^∞ be Cauchy sequences in Q. x and y are said to be equivalent if they satisfy the following property: For every ε > 0, ε ∈ Q, there is a rational number N such that for all n > N, n ∈ N,

|s_n − σ_n| < ε    (10.2.4)

We will denote equivalence by writing x ≡ y.

 Exercise 10.2.3

Show that:
a. x ≡ x
b. x ≡ y ⇒ y ≡ x
c. x ≡ y and y ≡ z ⇒ x ≡ z
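The definition can be brought to life with a small computation (our sketch, not part of the original text). Two quite different rational Cauchy sequences converging to √2, the Newton iterates and the truncated decimal expansions, eventually stay within any rational ε of each other, so they are equivalent and land in the same "bag."

```python
from fractions import Fraction
from math import isqrt

def newton_seq(n):
    """First n rational Newton iterates for sqrt(2): x -> (x + 2/x)/2."""
    x, seq = Fraction(1), []
    for _ in range(n):
        seq.append(x)
        x = (x + 2 / x) / 2
    return seq

def truncation_seq(n):
    """Truncated decimal expansions of sqrt(2): 1, 14/10, 141/100, ..."""
    return [Fraction(isqrt(2 * 10 ** (2 * k)), 10 ** k) for k in range(n)]

s, sigma = newton_seq(10), truncation_seq(10)
eps = Fraction(1, 10 ** 6)
# Past some index N the terms differ by less than eps = 10^-6.
print(all(abs(s[n] - sigma[n]) < eps for n in range(8, 10)))
```

Every quantity here is an exact rational, so no real number is smuggled in through floating point.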

 Definition 10.2.3

Every set of all equivalent Cauchy sequences defines a real number.

A very nice feature of Cantor’s method is that it is very clear how addition and multiplication should be defined.

 Definition 10.2.4

If

x = {(s_n)_{n=1}^∞ ∣ (s_n)_{n=1}^∞ is Cauchy in Q}    (10.2.5)

and

y = {(σ_n)_{n=1}^∞ ∣ (σ_n)_{n=1}^∞ is Cauchy in Q}    (10.2.6)

then we define the following:

Addition:

x + y = {(t_n)_{n=1}^∞ ∣ t_n = s_n + σ_n, ∀(s_n) ∈ x and (σ_n) ∈ y}    (10.2.7)

Multiplication:

x ⋅ y = {(t_n)_{n=1}^∞ ∣ t_n = s_n σ_n, ∀(s_n) ∈ x and (σ_n) ∈ y}    (10.2.8)

The notation used in Definition 10.2.4 can be difficult to read at first, but basically it says that addition and multiplication are done component-wise. However, since x and y consist of all equivalent sequences we have to take every possible choice of (s_n) ∈ x and (σ_n) ∈ y, form the sum (product) (s_n + σ_n)_{n=1}^∞ ((s_n σ_n)_{n=1}^∞), and then show that all such sums (products) are equivalent. Otherwise addition (multiplication) is not well-defined: It would depend on which sequences we choose to represent x and y.
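For instance (a sketch of ours under the definitions above, not from the original text), multiplying a representative of √2 by itself component-wise produces a Cauchy sequence of rationals equivalent to (2, 2, 2, ...), that is, a representative of the real number 2:

```python
from fractions import Fraction

def newton_seq(n):
    """Rational Newton iterates for sqrt(2): x -> (x + 2/x)/2."""
    x, seq = Fraction(1), []
    for _ in range(n):
        seq.append(x)
        x = (x + 2 / x) / 2
    return seq

s = newton_seq(8)
prod = [a * b for a, b in zip(s, s)]   # component-wise multiplication
# The tail of the product sequence is as close to 2 as we like.
print(abs(prod[-1] - 2) < Fraction(1, 10 ** 20))
```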

 Exercise 10.2.4

Let x and y be real numbers (that is, let them be sets of equivalent Cauchy sequences in Q). If (s_n) and (t_n) are in x, and (σ_n) and (τ_n) are in y, then

(s_n + σ_n)_{n=1}^∞ ≡ (t_n + τ_n)_{n=1}^∞    (10.2.9)

 Theorem 10.2.1

Let 0∗ be the set of Cauchy sequences in Q which are all equivalent to the sequence (0, 0, 0, ...). Then

0∗ + x = x    (10.2.10)

Proof
From Problem 10.2.4 it is clear that in forming 0∗ + x we can choose any sequence in 0∗ to represent 0∗ and any sequence in x to represent x. (This is because any other choice will yield a sequence equivalent to 0∗ + x.)
Thus we choose (0, 0, 0, ...) to represent 0∗ and any element of x, say (x_1, x_2, x_3, ...), to represent x. Then

(0, 0, 0, ...) + (x_1, x_2, x_3, ...) = (x_1, x_2, x_3, ...) = x

Since any other sequences, taken from 0∗ and x respectively, will yield a sum equivalent to x (see Problem 10.2.3), we conclude that

0∗ + x = x    (10.2.11)

 Exercise 10.2.5

Identify the set of equivalent Cauchy sequences, 1∗, such that 1∗ ⋅ x = x.

 Exercise 10.2.6

Let x, y , and z be real numbers (equivalent sets of Cauchy sequences). Show that with addition and multiplication defined as
above we have:
a. x + y = y + x
b. (x + y) + z = x + (y + z)
c. x ⋅ y = y ⋅ x
d. (x ⋅ y) ⋅ z = x ⋅ (y ⋅ z)
e. x ⋅ (y + z) = x ⋅ y + x ⋅ z

Once the existence of additive and multiplicative inverses is established,² the collection of all sets of equivalent Cauchy sequences, with addition and multiplication defined as above, satisfies all of the field axioms. It is clear that they form a number field and thus deserve to be called numbers.
However this does not necessarily show that they form R. We also need to show that they are complete in the sense of Chapter 7. It
is perhaps not too surprising that when we build the real numbers using equivalent Cauchy sequences the most natural
completeness property we can show is that if a sequence of real numbers is Cauchy then it converges.
However we are not in a position to show that Cauchy sequences in R converge. To do this we would first need to show that these
sets of equivalence classes of Cauchy sequences (real numbers) are linearly ordered.
Unfortunately showing the linear ordering, while not especially hard, is time consuming. So we will again invoke the prerogatives
of the teacher and brush all of the difficulties aside with the assertion that it is straightforward to show that the real numbers as we
have constructed them in this section are linearly ordered and are complete. If you would like to see this construction in full rigor
we recommend the book The Number System by H. A. Thurston [16].³

Dedekind Cuts
An advantage of building the reals via Cauchy sequences in the previous section is that once we’ve identified equivalent sequences
with real numbers it is very clear how addition and multiplication should be defined.
On the other hand, before we can even start to understand that construction, we need a fairly strong sense of what it means for a
sequence to converge and enough experience with sequences to be comfortable with the notion of a Cauchy sequence. Thus a good
deal of high level mathematics must be mastered before we can even begin.
The method of “Dedekind cuts,” first developed by Richard Dedekind (though he just called them “cuts”) in his 1872 book Continuity and the Irrational Numbers, shares the advantage of the Cauchy sequence method in that, once the candidates for the real numbers have been identified, it is very clear⁴ how addition and multiplication should be defined. It is also straightforward to show that most of the field axioms are satisfied.
that most of the field axioms are satisfied.
In addition, Dedekind’s method also has the advantage that very little mathematical knowledge is required to get started. This is
intentional. In the preface to the first edition of his book, Dedekind states:
This memoir can be understood by anyone possessing what is usually called common sense; no technical philosophic, or
mathematical, knowledge is in the least degree required. (quoted in [5])
While he may have overstated his case a bit, it is clear that his intention was to argue from very simple first principles just as Euclid
did.

His starting point was the observation we made in Chapter 1: The rational number line is full of holes. More precisely we can “cut”
the rational line in two distinct ways:
1. We can pick a rational number, r. This choice divides all other rational numbers into two classes: Those greater than r and those
less than r.
2. We can pick one of the holes in the rational number line. In this case all of the rationals fall into two classes: Those greater than the hole and those less.
But to speak of rational numbers as less than or greater than something that is not there is utter nonsense. We’ll need a better (that
is, a rigorous) definition.
As before we will develop an overall sense of this construction rather than a fully detailed presentation, as the latter would be far
too long to include.
Our presentation will closely follow that of Edmund Landau in his classic 1951 text Foundations of Analysis [7]. We do this so that if you choose to pursue this construction in more detail you will be able to follow Landau’s presentation more easily.

 Definition 10.2.5: Dedekind Cut

A set of positive⁵ rational numbers is called a cut if

Property I: It contains a positive rational number but does not contain all positive rational numbers.
Property II: Every positive rational number in the set is less than every positive rational number not in the set.
Property III: There is no element of the set which is greater than every other element of the set.

Given their intended audiences, Dedekind and Landau shied away from using too much notation. However, we will include the
following for those who are more comfortable with the symbolism as it may help provide more perspective. Specifically the
properties defining a Dedekind cut α can be written as follows.
Property I: α ≠ ∅ and Q⁺ − α ≠ ∅.
Property II: If x ∈ α and y ∈ Q⁺ − α, then x < y. (Alternatively, if x ∈ α and y < x, then y ∈ α.)
Property III: If x ∈ α, then ∃z ∈ α such that x < z.
Properties I–III really say that Dedekind cuts are bounded open intervals of rational numbers starting at 0. For example, (0, 3) ∩ Q⁺ is a Dedekind cut (which will eventually be the real number 3). Likewise, {x | x² < 2} ∩ Q⁺ is a Dedekind cut (which will eventually be the real number √2). Notice that care must be taken not to actually refer to irrational numbers in the properties, as the purpose is to construct them from rational numbers, but it might help to ground you to anticipate what will happen.
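Although the text constructs cuts as sets, a quick computational sketch (ours, not the authors') can treat a cut as a membership test on positive rationals and spot-check the two examples just given. The proper containment of the √2 cut in the 3 cut is exactly the ordering defined below.

```python
from fractions import Fraction

def three(x):
    """The cut (0, 3) ∩ Q+ (eventually the real number 3)."""
    return 0 < x < 3

def sqrt2(x):
    """The cut {x : x^2 < 2} ∩ Q+ (eventually the real number sqrt(2))."""
    return 0 < x and x * x < 2

samples = [Fraction(p, q) for q in range(1, 25) for p in range(1, 80)]

# Every sampled member of sqrt2 is also a member of three, but not
# conversely (2 is in three and not in sqrt2): sqrt2 sits inside three.
contained = all(three(x) for x in samples if sqrt2(x))
print(contained, three(Fraction(2)), sqrt2(Fraction(2)))
```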
Take particular notice of the following three facts:
1. Very little mathematical knowledge is required to understand this definition. We need to know what a set is, we need to know
what a rational number is, and we need to know that given two positive rational numbers either they are equal or one is greater.
2. The language Landau uses is very precise. This is necessary in order to avoid such nonsense as trying to compare something with nothing, as we did a couple of paragraphs up.
3. We are only using the positive rational numbers for our construction. The reason for this will become clear shortly. As a
practical matter for now, this means that the cuts we have just defined will (eventually) correspond to the positive real numbers.

 Definition 10.2.6
Let α and β be cuts. Then we say that α is less than β, and write
α < β    (10.2.12)

if there is a rational number in β which is not in α .

Note that, in light of what we said prior to Definition 10.2.1 (which is taken directly from Landau), we have the following.

 Theorem 10.2.2
Let α and β be cuts. Then α < β if and only if α ⊂ β .

 Exercise 10.2.7
Prove Theorem 10.2.2 and use this to conclude that if α and β are cuts then exactly one of the following is true:
a. α = β
b. α < β
c. β < α

We will need first to define addition and multiplication for our cuts, and eventually these will need to be extended to all of R (once the non-positive reals have also been constructed). It will be necessary to show that the extended definitions satisfy the field axioms. As you can see there is a lot to do.
As we did with Cauchy sequences and with infinite decimals, we will stop well short of the full construction. If you are interested
in exploring the details of Dedekind’s construction, Landau’s book [7] is very thorough and was written with the explicit intention
that it would be accessible to students. In his “Preface for the Teacher” he says
I hope that I have written this book, after a preparation stretching over decades, in such a way that a normal student can read
it in two days.
This may be stretching things. Give yourself at least a week and make sure you have nothing else to do that week.
Addition and multiplication are defined in the obvious way.

 Definition 10.2.7: Addition on cuts

Let α and β be cuts. We will denote the set {x + y|x ∈ α, y ∈ β} by α + β .

 Definition 10.2.8: Multiplication on cuts

Let α and β be cuts. We will denote the set {xy|x ∈ α, y ∈ β} by αβ or α ⋅ β .

If we are to have a hope that these objects will serve as our real numbers we must have closure with respect to addition and
multiplication. We will show closure with respect to addition.

 Theorem 10.2.3: Closure with Respect to Addition

If α and β are cuts then α + β is a cut.

Proof
We need to show that the set α + β satisfies all three of the properties of a cut.
Proof of Property I

Let x be any rational number in α and let x_1 be a rational number not in α. Then by Property II, x < x_1.
Let y be any rational number in β and let y_1 be a rational number not in β. Then by Property II, y < y_1.
Thus, since x + y represents a generic element of α + β and x + y < x_1 + y_1, it follows that x_1 + y_1 ∉ α + β.
Proof of Property II

We will show that the contrapositive of Property II is true: If x ∈ α + β and y < x then y ∈ α + β.
First, let x ∈ α + β. Then there are x_α ∈ α and x_β ∈ β such that y < x = x_α + x_β. Therefore y/(x_α + x_β) < 1, so that

x_α (y/(x_α + x_β)) < x_α    (10.2.13)

and

x_β (y/(x_α + x_β)) < x_β    (10.2.14)

Therefore x_α (y/(x_α + x_β)) ∈ α and x_β (y/(x_α + x_β)) ∈ β.
Therefore

y = x_α (y/(x_α + x_β)) + x_β (y/(x_α + x_β)) ∈ α + β    (10.2.15)

Proof of Property III

Let z ∈ α + β. We need to find w > z, w ∈ α + β. Observe that for some x ∈ α and y ∈ β,

z = x + y    (10.2.16)

Since α is a cut, there is a rational number x_1 ∈ α such that x_1 > x. Take w = x_1 + y ∈ α + β. Then

w = x_1 + y > x + y = z    (10.2.17)

This completes the proof of this theorem.
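A finite sanity check (our sketch, not from the original text) of Definition 10.2.7: for the interval cuts (0, 3) ∩ Q⁺ and (0, 2) ∩ Q⁺ the sum should be (0, 5) ∩ Q⁺. Membership of x in the sum is witnessed by splitting x proportionally between the two cuts; for these simple interval cuts that single witness settles the question.

```python
from fractions import Fraction

def cut(r):
    """The cut (0, r) ∩ Q+ as a membership predicate (r a positive rational)."""
    return lambda x: 0 < x < r

alpha, beta = cut(3), cut(2)

def in_sum(x):
    """Is x in alpha + beta = {a + b : a in alpha, b in beta}?
    Try the proportional witness a = (3/5) x."""
    a = x * Fraction(3, 5)
    return alpha(a) and beta(x - a)

samples = [Fraction(k, 7) for k in range(1, 42)]
print(all(in_sum(x) == cut(5)(x) for x in samples))
```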

 Exercise 10.2.8

Show that if α and β are cuts then α ⋅ β is also a cut.

At this point we have built our cuts and we have defined addition and multiplication for cuts. However, as observed earlier the cuts
we have will (very soon) correspond only to the positive real numbers. This may appear to be a problem but it really isn’t because
the non-positive real numbers can be defined in terms of the positives, that is, in terms of our cuts. We quote from Landau [7]:
These cuts will henceforth be called the “positive numbers.”
We create a new number 0 (to be read “zero”), distinct from the positive numbers.
We also create numbers which are distinct from the positive numbers as well as distinct from zero, and which we will call negative numbers, in such a way that to each ξ (i.e., to each positive number) we assign a negative number denoted by −ξ (− to be read “minus”). In this, −ξ and −ν will be considered as the same number (as equal) if and only if ξ and ν are the same number.
The totality consisting of all positive numbers, of 0, and of all negative numbers, will be called the real numbers.
Of course it is not nearly enough to simply postulate the existence of the non-positive real numbers. All we have so far is a set of objects we’re calling the real numbers.
For some of them (the positive reals6) we have defined addition and multiplication. These definitions will eventually turn out to
correspond to the addition and multiplication we are familiar with.
However we do not have either operation for our entire set of proposed real numbers. Before we do this we need first to define the
absolute value of a real number. This is a concept you are very familiar with and you have probably seen the following definition:
Let α ∈ R. Then

|α| = α if α ≥ 0, and |α| = −α if α < 0    (10.2.18)

Unfortunately we cannot use this definition because we do not yet have a linear ordering on R, so the statement α ≥ 0 is meaningless. Indeed, it will be our definition of absolute value that orders the real numbers. We must be careful.
Notice that by definition a negative real number is denoted with the dash (“−”) in front. That is, χ is positive while −χ is negative. Thus if A is any real number then one of the following is true:
1. A = χ for some positive χ (A is positive)
2. A = −χ for some positive χ (A is negative)
3. A = 0
We define absolute value as follows:

 Definition 10.2.9

Let A ∈ R as above. Then

|A| = χ if A = χ,  |A| = 0 if A = 0,  |A| = χ if A = −χ    (10.2.19)

With this definition in place it is possible to show that R is linearly ordered. We will not do this explicitly. Instead we will simply assume that the symbols “<,” “>,” and “=” have been defined and have all of the properties we have learned to expect from them.
We now extend our definitions of addition and multiplication from the positive real numbers (cuts) to all of them. Curiously,
multiplication is the simpler of the two.

 Definition 10.2.10: Multiplication


Let α, β ∈ R. Then

α ⋅ β = −(|α| ⋅ |β|)   if α > 0, β < 0, or α < 0, β > 0
α ⋅ β = |α| ⋅ |β|      if α < 0, β < 0
α ⋅ β = 0              if α = 0 or β = 0    (10.2.20)

Notice that the case where α and β are both positive was already handled by Definition 10.2.8 because in that case they are both
cuts.
Next we define addition.

 Definition 10.2.11: Addition

Let α, β ∈ R. Then

α + β = −(|α| + |β|)   if α < 0, β < 0
α + β = |α| − |β|      if α > 0, β < 0, |α| > |β|
α + β = 0              if α > 0, β < 0, |α| = |β|
α + β = −(|β| − |α|)   if α > 0, β < 0, |α| < |β|
α + β = β + α          if α < 0, β > 0
α + β = β              if α = 0
α + β = α              if β = 0    (10.2.21)
But wait! In the second and fourth cases of our definition we’ve actually defined addition in terms of subtraction.⁷ But we haven’t defined subtraction yet! Oops!
This is handled with the definition below, but it illuminates very clearly the care that must be taken in these constructions. The real
numbers are so familiar to us that it is extraordinarily easy to make unjustified assumptions.
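To convince yourself that the seven cases are at least consistent, here is a sketch (ours, not the authors') with rational numbers standing in for the cuts. Note that abs() and subtraction are only ever applied where the result stays positive, mirroring the constraint the definition itself must respect.

```python
from fractions import Fraction
from itertools import product

def add_cases(a, b):
    """The seven cases of Definition 10.2.11, with rationals standing in
    for cuts; |.| and '-' are only used where the result stays positive."""
    if a > 0 and b > 0:
        return a + b                 # both positive: cut addition (Definition 10.2.7)
    if a < 0 and b < 0:
        return -(abs(a) + abs(b))
    if a > 0 and b < 0:
        if abs(a) > abs(b):
            return abs(a) - abs(b)
        if abs(a) == abs(b):
            return 0
        return -(abs(b) - abs(a))
    if a < 0 and b > 0:
        return add_cases(b, a)       # the fifth case defers to an earlier one
    if a == 0:
        return b
    return a                         # b == 0

vals = [Fraction(k, 2) for k in range(-6, 7)]
print(all(add_cases(a, b) == a + b for a, b in product(vals, vals)))
```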
Since the subtractions in the second and fourth cases above are done with positive numbers we only need to give meaning to the
subtraction of cuts.

 Definition 10.2.12

If α, β and δ are cuts then the expression

α − β = δ    (10.2.22)

is defined to mean

α = δ + β    (10.2.23)

Of course, there is the detail of showing that there is such a cut δ . (We warned you of the tediousness of all this.) Landau goes
through the details of showing that such a cut exists. We will present an alternative by defining the cut α − β directly (assuming
β < α ). To motivate this definition, consider something we are familiar with: 3 − 2 = 1 . In terms of cuts, we want to say that the

open interval from 0 to 3 “minus” the open interval from 0 to 2 should give us the open interval from 0 to 1. Taking elements from
(0, 3) and subtracting elements from (0, 2) won’t do it as we would have differences such as 2.9 − 0.9 = 2 which is not in the cut

(0, 1). A moment’s thought tells us that what we need to do is take all the elements from (0, 3) and subtract all the elements from

(2, ∞), restricting ourselves only to those which are positive rational numbers. This prompts the following definition.

 Definition 10.2.13

Let α and β be cuts with β < α . Define α − β as follows:


α − β = {x − y | x ∈ α and y ∉ β} ∩ Q⁺    (10.2.24)
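A quick finite check (our sketch, not from the original text) that this definition gives what we expect for (0, 3) "minus" (0, 2): a positive rational q lies in the difference exactly when q < 1. Membership is witnessed by exhibiting some y ∉ (0, 2) with y + q ∈ (0, 3); for these interval cuts the single candidate below decides it.

```python
from fractions import Fraction

def cut(r):
    """The cut (0, r) ∩ Q+ as a membership predicate."""
    return lambda x: 0 < x < r

alpha, beta_bound = cut(3), Fraction(2)   # beta = (0, 2) ∩ Q+; y not in beta means y >= 2

def in_diff(q):
    """Is q in alpha - beta = {x - y : x in alpha, y not in beta} ∩ Q+?
    Try the witness y = 2 + (1 - q)/2; for these interval cuts it
    succeeds exactly when some witness exists."""
    if q <= 0:
        return False
    y = beta_bound + (1 - q) / 2
    return y >= beta_bound and alpha(y + q)

samples = [Fraction(k, 9) for k in range(1, 20)]
print(all(in_diff(q) == cut(1)(q) for q in samples))
```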

To show that, in fact, β + (α − β) = α , the following technical lemma will be helpful.

 Lemma 10.2.1

Let β be a cut, let y and z be positive rational numbers not in β with y < z, and let ε > 0 be any rational number. Then there exist positive rational numbers r and s with r ∈ β and s ∉ β, such that s < z and s − r < ε.

 Exercise 10.2.9

Prove Lemma 10.2.1.

Hint

Since β is a cut there exists r_1 ∈ β. Let s_1 = y ∉ β. We know that r_1 < s_1 < z. Consider the midpoint (s_1 + r_1)/2. If this is in β then relabel it as r_2 and relabel s_1 as s_2. If it is not in β then relabel it as s_2 and relabel r_1 as r_2, etc.
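The hint is just repeated bisection, which we can sketch (our code, not the authors') for the cut β = {x : x² < 2} ∩ Q⁺: start with r = 1 ∈ β and s = 3/2 ∉ β, and halve the gap until it is below ε, keeping r ∈ β and s ∉ β throughout.

```python
from fractions import Fraction

def squeeze(in_beta, r, s, eps):
    """Bisect [r, s], keeping r in beta and s outside it, until s - r < eps.
    The invariant guarantees the returned pair still straddles the cut."""
    while s - r >= eps:
        mid = (r + s) / 2
        if in_beta(mid):
            r = mid
        else:
            s = mid
    return r, s

in_beta = lambda x: 0 < x and x * x < 2   # beta = {x : x^2 < 2} ∩ Q+
eps = Fraction(1, 1000)
r, s = squeeze(in_beta, Fraction(1), Fraction(3, 2), eps)
print(in_beta(r), not in_beta(s), s - r < eps)
```

Since s never increases, the requirement s < z of the lemma is preserved automatically.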

 Exercise 10.2.10
Let α and β be cuts with β < α . Prove that β + (α − β) = α .

Hint

It is pretty straightforward to show that β + (α − β) ⊆ α. To show that α ⊆ β + (α − β), we let x ∈ α. Since β < α, we have y ∈ α with y ∉ β. We can assume without loss of generality that x < y. (Why?) Choose z ∈ α with y < z. By Lemma 10.2.1, there exist positive rational numbers r and s with r ∈ β, s ∉ β, s < z, and s − r < z − x. Show that x < r + (z − s).

We will end by saying that no matter how you construct the real number system, there is really only one. More precisely, we have the following theorem, which we state without proof.⁸

 Theorem 10.2.4
Any complete, linearly ordered field is isomorphic⁹ to R.

Remember that we warned you that these constructions were fraught with technical details that are not necessarily illuminating.
Nonetheless, at this point, you have everything you need to show that the set of all real numbers as defined above is linearly
ordered and satisfies the Least Upper Bound property.
But we will stop here in order, to paraphrase Descartes, to leave for you the joy of further discovery.

References
1 By Andrew Wiles, the man who proved Fermat’s Last Theorem.
2 We will not address this issue here, but you should give some thought to how this might be accomplished.
3 Thurston first builds R as we’ve indicated in this section. Then as a final remark he shows that the real numbers must be exactly the infinite decimals we saw in the previous section.
4 “Clear” does not mean “easy to do” as we will see.
5 Take special notice that we are not using the negative rational numbers or zero to build our cuts. The reason for this will become clear shortly.
6 That is, the cuts.
7 Notice also that the fifth case refers to the addition as defined in the second case.
8 In fact, not proving this result seems to be standard in real analysis references. Most often it is simply stated as we’ve done here. However a proof can be found at http://math.ucr.edu/~res/math205A/uniqreals.pdf.
9 Two linearly ordered number fields are said to be isomorphic if there is a one-to-one, onto mapping between them (such a mapping is called a bijection) which preserves addition, multiplication, and order. More precisely, if F_1 and F_2 are both linearly ordered fields, x, y ∈ F_1, and φ : F_1 → F_2 is the mapping, then
1. φ(x + y) = φ(x) + φ(y)
2. φ(x ⋅ y) = φ(x) ⋅ φ(y)
3. x < y ⇒ φ(x) < φ(y)

This page titled 10.2: Building the Real Numbers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by
Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a
detailed edit history is available upon request.

Index
A
Abel's theorem
  8.4: Boundary Issues and Abel’s Theorem
archimedean property
  1.1: Real and Rational Numbers

B
binomial series
  5.2: Lagrange’s Form of the Remainder
Brachistochrone
  2.1: Newton and Leibniz Get Started

C
Cantor’s theorem
  9.2: Infinite Sets
  9.3: Cantor’s Theorem and Its Consequences
Cauchy sequences
  8.2: Uniform Convergence- Integrals and Derivatives
Cauchy’s Form of the Remainder
  5.3: Cauchy’s Form of the Remainder
continuity
  6.1: An Analytic Definition of Continuity

D
Dedekind Cuts
  10.2: Building the Real Numbers
divergence of a series
  4.3: Divergence of a Series
diverging series
  3.2: Series Anomalies

E
Extreme Value Theorem
  7.4: The Supremum and the Extreme Value Theorem
  7.E: Intermediate and Extreme Values (Exercises)

I
Infinite Polynomials
  2.2: Power Series as Infinite Polynomials
Integers
  1.1: Real and Rational Numbers
integral form of the remainder for the Taylor series
  5.1: The Integral Form of the Remainder
Intermediate Value Theorem
  7.2: Proof of the Intermediate Value Theorem

L
Lagrange’s Form of the Remainder
  5.2: Lagrange’s Form of the Remainder
LEAST UPPER BOUND PROPERTY
  7.4: The Supremum and the Extreme Value Theorem
Leibniz
  2.1: Newton and Leibniz Get Started
limit
  4.2: The Limit as a Primary Tool
  6.3: The Definition of the Limit of a Function

M
Maclaurin series
  3.1: Taylor’s Formula

N
newton
  2.1: Newton and Leibniz Get Started

P
partial sum
  4.1: Sequences of Real Numbers
power series
  2.2: Power Series as Infinite Polynomials
product rule
  2.1: Newton and Leibniz Get Started

R
radius of convergence
  8.3: Radius of Convergence of a Power Series
Rational Numbers
  1.1: Real and Rational Numbers
Real Numbers
  1.1: Real and Rational Numbers
  7.1: Completeness of the Real Number System
Reverse Triangle Inequality
  4.2: The Limit as a Primary Tool

S
Supremum Value Theorem
  7.4: The Supremum and the Extreme Value Theorem
  7.E: Intermediate and Extreme Values (Exercises)

T
Taylor Expansion
  3.1: Taylor’s Formula
  5: Convergence of the Taylor Series- A “Tayl” of Three Remainders
topologist’s sine curve
  6.2: Sequences and Continuity
triangle inequality
  4.2: The Limit as a Primary Tool
  5.1: The Integral Form of the Remainder
Trigonometric Series
  9.1: Trigonometric Series

U
Ultimate Ratios
  6.3: The Definition of the Limit of a Function
Uniform Convergence
  8.1: Uniform Convergence
Detailed Licensing
Overview
Title: Real Analysis (Boman and Rogers)
Webpages: 60
Applicable Restrictions: Noncommercial
All licenses found:
CC BY-NC-SA 4.0: 88.3% (53 pages)
Undeclared: 11.7% (7 pages)
By Page

Real Analysis (Boman and Rogers) - CC BY-NC-SA 4.0
  Front Matter - CC BY-NC-SA 4.0
    Prelude to Real Analysis - CC BY-NC-SA 4.0
    TitlePage - Undeclared
    InfoPage - Undeclared
    TitlePage - CC BY-NC-SA 4.0
    InfoPage - CC BY-NC-SA 4.0
    Table of Contents - Undeclared
    Licensing - Undeclared
    Table of Contents - Undeclared
  1: Numbers - Real (ℝ) and Rational (ℚ) - CC BY-NC-SA 4.0
    1.1: Real and Rational Numbers - CC BY-NC-SA 4.0
    1.E: Numbers - Real (ℝ) and Rational (ℚ) (Exercises) - CC BY-NC-SA 4.0
  2: Calculus in the 17th and 18th Centuries - CC BY-NC-SA 4.0
    2.1: Newton and Leibniz Get Started - CC BY-NC-SA 4.0
    2.2: Power Series as Infinite Polynomials - CC BY-NC-SA 4.0
    2.E: Calculus in the 17th and 18th Centuries (Exercises) - CC BY-NC-SA 4.0
  3: Questions Concerning Power Series - CC BY-NC-SA 4.0
    3.1: Taylor’s Formula - CC BY-NC-SA 4.0
    3.2: Series Anomalies - CC BY-NC-SA 4.0
    3.E: Questions Concerning Power Series (Exercises) - CC BY-NC-SA 4.0
  4: Convergence of Sequences and Series - CC BY-NC-SA 4.0
    4.1: Sequences of Real Numbers - CC BY-NC-SA 4.0
    4.2: The Limit as a Primary Tool - CC BY-NC-SA 4.0
    4.3: Divergence of a Series - CC BY-NC-SA 4.0
    4.E: Convergence of Sequences and Series (Exercises) - CC BY-NC-SA 4.0
  5: Convergence of the Taylor Series- A “Tayl” of Three Remainders - CC BY-NC-SA 4.0
    5.1: The Integral Form of the Remainder - CC BY-NC-SA 4.0
    5.2: Lagrange’s Form of the Remainder - CC BY-NC-SA 4.0
    5.3: Cauchy’s Form of the Remainder - CC BY-NC-SA 4.0
    5.E: Convergence of the Taylor Series- A “Tayl” of Three Remainders (Exercises) - CC BY-NC-SA 4.0
  6: Continuity - What It Isn’t and What It Is - CC BY-NC-SA 4.0
    6.1: An Analytic Definition of Continuity - CC BY-NC-SA 4.0
    6.2: Sequences and Continuity - CC BY-NC-SA 4.0
    6.3: The Definition of the Limit of a Function - CC BY-NC-SA 4.0
    6.4: The Derivative - An Afterthought - CC BY-NC-SA 4.0
    6.E: Continuity - What It Isn’t and What It Is (Exercises) - CC BY-NC-SA 4.0
  7: Intermediate and Extreme Values - CC BY-NC-SA 4.0
    7.1: Completeness of the Real Number System - CC BY-NC-SA 4.0
    7.2: Proof of the Intermediate Value Theorem - CC BY-NC-SA 4.0
    7.3: The Bolzano-Weierstrass Theorem - CC BY-NC-SA 4.0
    7.4: The Supremum and the Extreme Value Theorem - CC BY-NC-SA 4.0
    7.E: Intermediate and Extreme Values (Exercises) - CC BY-NC-SA 4.0
  8: Back to Power Series - CC BY-NC-SA 4.0
    8.1: Uniform Convergence - CC BY-NC-SA 4.0
    8.2: Uniform Convergence- Integrals and Derivatives - CC BY-NC-SA 4.0
    8.3: Radius of Convergence of a Power Series - CC BY-NC-SA 4.0
    8.4: Boundary Issues and Abel’s Theorem - CC BY-NC-SA 4.0
  9: Back to the Real Numbers - CC BY-NC-SA 4.0
    9.1: Trigonometric Series - CC BY-NC-SA 4.0
    9.2: Infinite Sets - CC BY-NC-SA 4.0
    9.3: Cantor’s Theorem and Its Consequences - CC BY-NC-SA 4.0
  10: Epilogue to Real Analysis - CC BY-NC-SA 4.0
    10.1: On the Nature of Numbers - CC BY-NC-SA 4.0
    10.2: Building the Real Numbers - CC BY-NC-SA 4.0
  Back Matter - CC BY-NC-SA 4.0
    Index - CC BY-NC-SA 4.0
    Index - Undeclared
    Glossary - CC BY-NC-SA 4.0
    Detailed Licensing - Undeclared