\input fremtex
%\fullsize
\smallprint
\filename{ma203.tex}
\versiondate{14.5.02}
\def\lectureend#1{\discrversionA{\hfill{\twelverm #1}}{}}
\def\omitted#1{\discrversionA{\hfill{\twelverm omitted #1}}{}}
\ifamstexsupported\font\twelverm=cmr12\else\font\twelverm=cmr10 at 12
pt\fi
\Centerline{\bf MA203 Real Analysis}
\Centerline{\smc D.H.Fremlin}
\oldfootnote{}{These notes are made available to students on the
understanding that they have not been fully checked for errors. They
are not intended to provide a substitute for attendance at lectures.
If you notice a mistake, please tell
the lecturer!}
\medskip
This course is intended as a first introduction to the methods required
to prove the basic theorems of real analysis, particularly those
involving continuous and differentiable functions. The theorems
themselves will mostly have been presented in MA206, and should be more
or less familiar. The techniques of proof, however, are likely to be
completely new. They are difficult, and you must expect to have to
work hard.
`Analysis' is one of the traditional subjects taught in university
mathematics degrees -- necessarily so, as it underlies not only most of
modern pure mathematics but also large parts of modern applied
mathematics -- and nearly everywhere gives rise to special problems.
Many students who are quite good at other kinds of mathematics find that
analysis seems to be beyond them. I think that this is for two
separate reasons. The first, and less important, one is that analysis
deals with {\it inequalities}. It may well be that an actual majority
of the formulae in the notes below will have at least one of the symbols
$\le$, $\ge$, $<$ or $>$. These are not really difficult to handle,
but they do require technical skills which you may not yet have
practised enough. So you are going to have to spend some time working
on these. But there is a much more essential difficulty to come to
terms with. All the characteristic sentences of analysis have multiple
quantifiers. `Quantifiers' are the expressions $\Forall$, `for every',
and $\Exists$, `there is'. Of course these appear everywhere in
mathematics. But in analysis we get sentences like
\inset{`for every $\epsilon>0$ there is a $\delta>0$ such that
$|f(x)-f(x_0)|\le\epsilon$ for every $x\in[x_0-\delta,x_0+\delta]$'}
\noindent with three quantifiers
(`$\Forall\epsilon\ldots\Exists\delta\ldots\Forall x\ldots$') one after
the other; and I think some of the sentences later on may have as many
as five. These demand some special ways of thinking, which is what
this course is mostly about.
\bigskip
\noindent{\bf The real number system}
\medskip
The whole of this course will be concerned with the set $\Bbb R$ of
`real numbers'. I will therefore run as quickly as possible over the
key properties of this system. All the facts here are supposed to be
familiar, and I'm not going to prove any of them; I write them out just
so that you can make sure you really do know the things I am going to
assume from now on.
\medskip
\noindent{\bf Addition} For any two real numbers $x$ and $y$, we have a
real number $x+y$, and
\inset{$x+(y+z)=(x+y)+z$ for all real numbers $x$, $y$ and $z$;
there is a real number $0$ such that $x+0=0+x=x$ for every $x\in\Bbb R$;
for every $x\in\Bbb R$ there is a number $-x\in\Bbb R$ such that
$x+(-x)=(-x)+x=0$;
$x+y=y+x$ for all $x$, $y\in\Bbb R$.}
\medskip
\noindent{\bf Multiplication}
For any two real numbers $x$ and $y$, we have a real number $xy$, and
\inset{$x(yz)=(xy)z$ for all real numbers $x$, $y$ and $z$;
there is a real number $1$ such that $x1=1x=x$ for every $x\in\Bbb R$;
for every $x\in\Bbb R\setminus\{0\}$ there is a number $\Bover1x\in\Bbb
R$ such that $x\cdot\Bover1x=\Bover1x\cdot x=1$;
$xy=yx$ for all $x$, $y\in\Bbb R$;
$1\ne 0$.}
\medskip
\noindent{\bf The distributive laws}
\inset{$x(y+z)=xy+xz$ for all $x$, $y$, $z\in\Bbb R$;
$(x+y)z=xz+yz$ for all $x$, $y$, $z\in\Bbb R$.}
\medskip
\noindent{\bf The ordering of $\Bbb R$} We have a relation $\le$ on
$\Bbb R$ such that
\inset{$x\le x$ for every $x\in\Bbb R$,
if $x\le y$ and $y\le z$ then $x\le z$,
if $x\le y$ and $y\le x$ then $x=y$,
for every $x$, $y\in\Bbb R$, either $x\le y$ or $y\le x$.}
\medskip
\noindent{\bf $\le$, $+$ and $\times$} If $x$, $y$, $z\in\Bbb R$, then
\inset{if $x\le y$ then $x+z\le y+z$ and $x-z\le y-z$;
if $x\le y$ and $0\le z$ then $xz\le yz$;
if $x\le y$ and $z\le 0$ then $yz\le xz$.}
\medskip
This list is not quite complete, and there is one further essential fact
about real numbers (`Dedekind completeness') which will play a large
part in this course. But I will deal with it later when we can give it
the time it deserves.
\medskip
\noindent{\bf Chains of inequalities} It will often happen that we have
a string of inequalities
\Centerline{$a\le b\le c\le d\le\ldots$,}
\noindent meaning `$a\le b$ and $b\le c$ and $c\le d$ and $\ldots$'.
In this case, because $\le$ is transitive, we can say
\inset{because $a\le b$ and $b\le c$, $a\le c$,
because $a\le c$ and $c\le d$, $a\le d$,}
\noindent and so on. It sometimes happens that the chain returns to
its starting point, as in
\Centerline{$a\le b\le c\le d\le e\le a$.}
\noindent In this case, at the end, we get $a\le e$ and $e\le a$; so we
must have $a=e$. But this means that we have $a\le d$ and $d\le a$, so
$a=d$, and so on; in the end, we conclude that $a=b=c=d=e$.
\medskip
\noindent{\bf Moduli} For any $x\in\Bbb R$, we write
$$\eqalign{|x|&=x\text{ if }0\le x,\cr
&=-x\text{ if }x\le 0;\cr}$$
\noindent alternatively, we can define $|x|$ as $\max(x,-x)$, that is,
$$\eqalign{|x|&=x\text{ if }-x\le x,\cr
&=-x\text{ if }x\le -x.\cr}$$
\noindent Now the most important fact of all about moduli is the {\bf
triangle inequality}:
\Centerline{$|x+y|\le|x|+|y|$ for all $x$, $y\in\Bbb R$.}
\noindent This is not the kind of thing I am going to spend the course
teaching you how to prove, but the argument is instructive and fairly
easy, so here it is: if $x$, $y\in\Bbb R$, then
\inset{$x\le|x|$ because $|x|=\max(x,-x)$,
so $x+y\le|x|+y$, because you can add anything to an inequality,
also $y\le|y|$, so $|x|+y\le|x|+|y|$,
since $x+y\le|x|+y\le|x|+|y|$, $x+y\le|x|+|y|$.
Now $|-x|=\max(-x,-(-x))=\max(-x,x)=|x|$, and similarly $|-y|=|y|$,
so $-(x+y)=(-x)+(-y)\le|-x|+|-y|=|x|+|y|$,
and since $|x+y|$ must be either $x+y$ or $-(x+y)$, we must have
$|x+y|\le|x|+|y|$.}
\noindent (The {\it proof} here is non-examinable. But the {\it fact}
$|x+y|\le|x|+|y|$ is part of the survival kit for the course.)
Two more fundamental facts about moduli are
\Centerline{$|xy|=|x||y|$ for all $x$, $y\in\Bbb R$,}
\Centerline{$|x|\ge 0$ and $|x|=0\iff x=0$.}
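All three facts are easy to experiment with. Here is a small Python sketch (an illustration of mine, not part of the notes) which checks them, together with the $\max(x,-x)$ description of the modulus, on a grid of sample values; a finite check of this kind illustrates the facts but of course proves nothing.

```python
# Numerical spot-check (illustration only) of the basic facts about moduli:
#   |x| = max(x, -x)
#   |x+y| <= |x| + |y|      (the triangle inequality)
#   |xy| = |x| |y|
#   |x| >= 0, and |x| = 0 iff x = 0
samples = [-3.5, -1.0, -0.25, 0.0, 0.25, 1.0, 3.5]

for x in samples:
    assert abs(x) == max(x, -x)
    assert abs(x) >= 0
    assert (abs(x) == 0) == (x == 0)
    for y in samples:
        assert abs(x + y) <= abs(x) + abs(y)
        assert abs(x * y) == abs(x) * abs(y)
print("all modulus facts hold on the sample grid")
```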
\medskip
\noindent{\bf Manipulating moduli} Apart from the three basic facts
above, we are going to need quite a few others. Among these are
\Centerline{$|x|-|y|\le|x-y|$ for all $x$, $y\in\Bbb R$.}
\noindent (To see this, note that $|x|=|(x-y)+y|\le|x-y|+|y|$ and
subtract $|y|$ from both sides.) In the same way
\Centerline{$|y|-|x|\le|y-x|=|x-y|$.}
\noindent But putting these together we have
\Centerline{$\bigl||x|-|y|\bigr|=\max(|x|-|y|,|y|-|x|)\le|x-y|$.}
\noindent Another on the list is
\Centerline{$|x-y|=|x+(-y)|\le|x|+|-y|=|x|+|y|$.}
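The same kind of spot-check works for the derived inequalities of this paragraph (again an illustrative sketch of mine, with an arbitrary sample grid):

```python
# Spot-check (illustration only) of the derived modulus inequalities:
#   |x| - |y| <= |x - y|
#   | |x| - |y| | <= |x - y|
#   |x - y| <= |x| + |y|
samples = [-2.0, -0.5, 0.0, 0.5, 2.0]
for x in samples:
    for y in samples:
        assert abs(x) - abs(y) <= abs(x - y)
        assert abs(abs(x) - abs(y)) <= abs(x - y)
        assert abs(x - y) <= abs(x) + abs(y)
print("derived inequalities hold on the sample grid")
```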
A particularly important fact is that
\Centerline{if $0<x\le y$ then $\Bover1y\le\Bover1x$.}
\bigskip
\noindent{\bf Limits of sequences} Suppose that $\sequencen{x_n}$ is a
sequence of real numbers and that $a\in\Bbb R$. Then
`$\lim_{n\to\infty}x_n=a$' means
\Centerline{for every $\epsilon>0$, $|x_n-a|\le\epsilon$
for all $n$ large enough.}
\noindent How large will `large enough' be? Of course this will depend
on $\epsilon$. But we are claiming that there is {\it some} number
$M$ such that $|x_n-a|\le\epsilon$ whenever $n\ge M$.
Let me write this out again in a more concentrated form.
\Centerline{$\lim_{n\to\infty}x_n=a$}
\noindent means
\Centerline{$\Forall\epsilon>0\Exists n_0\in\Bbb N\Forall n\ge n_0$,
$|x_n-a|\le\epsilon$.}
\noindent This is a typical definition from elementary analysis. It is
really quite difficult to get hold of. We have three quantifiers
$\Forall$, $\Exists$, $\Forall$; these have to be put in the right
order; and any variation of any symbol in the whole formula is liable
to make it wrong.
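One way of getting used to the quantifier structure is to turn it into a program. A computer cannot check the `$\Forall n\ge n_0$' clause, which ranges over infinitely many $n$; the best it can do is hunt for counterexamples up to a finite horizon. The Python sketch below (the names are my own) does exactly that, so it can refute a limit claim but never prove one.

```python
# The sentence "forall eps > 0 exists n0 forall n >= n0, |x_n - a| <= eps"
# quantifies over infinitely many n, so a program can only search for
# counterexamples on a finite range -- it can refute a limit claim,
# never prove one.
def violates(x, a, eps, n0, horizon):
    """Return some n in [n0, horizon] with |x(n) - a| > eps, else None."""
    for n in range(n0, horizon + 1):
        if abs(x(n) - a) > eps:
            return n
    return None

# For x_n = 1/n and a = 0, the answer n0 = 1000 survives eps = 1/1000,
# at least as far as the horizon looks:
assert violates(lambda n: 1 / n, 0, 1 / 1000, 1000, 10000) is None
```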
\medskip
{\bf The Analysis Game} I believe that one way, and a useful way, of
coping with these sentences is to treat them as specifying a game. The
formula
\Centerline{$\lim_{n\to\infty}x_n=a$,
\quad$\Forall\epsilon>0\Exists n_0\in\Bbb N\Forall n\ge n_0$,
$|x_n-a|\le\epsilon$}
\noindent means
\Centerline{for every $\epsilon>0$,
$\Exists n_0\in\Bbb N\Forall n\ge n_0$, $|x_n-a|\le\epsilon$.}
\noindent So if I claim that $\lim_{n\to\infty}x_n=a$, what I am saying
is that for any $\epsilon>0$ which {\it you} (not I) choose, there will
be some $n_0$ such that $|x_n-a|\le\epsilon$ for every $n\ge n_0$. Now
after you have chosen the $\epsilon$, I only have to find one $n_0$
which will work; and I (not you) have the right to point to a number
which I believe is good enough. But I am then claiming that
`$|x_n-a|\le\epsilon$ for every $n\ge n_0$', so {\it you} have the right
to challenge me by naming a particular $n\ge n_0$ and demanding that we
check that $|x_n-a|$ is indeed at most $\epsilon$.
Thus we have the idea of a {\it game} with four moves, in which
\inset{Player I says `$\lim_{n\to\infty}x_n=a$',
Player II chooses some $\epsilon$, and s/he must choose $\epsilon>0$,
Player I chooses some $n_0\in\Bbb N$,
Player II chooses some $n\in\Bbb N$, and s/he must choose $n\ge n_0$,
and at the end of these four moves, the players look at $|x_n-a|$, and
if $|x_n-a|\le\epsilon$ then Player I wins (because that's what s/he
said would happen), while if $|x_n-a|\not\le\epsilon$ then Player II
wins.}
\noindent The actual statement `$\lim_{n\to\infty}x_n=a$' is {\it true}
if, and only if, Player I can always win the game, whatever Player II
does. Of course Player I will have to take care to play the right move
when s/he comes to pick $n_0$; even if the original move
`$\lim_{n\to\infty}x_n=a$' left him/her in a winning position, s/he can
still bungle it by being careless with the second move.
\medskip
{\bf Example} Suppose Player I starts with
\Centerline{$\lim_{n\to\infty}\Bover1n=0$.}
\noindent Will s/he win? Player II can choose any $\epsilon$ s/he
likes, and it's generally good tactics to pick something small; suppose
s/he tries $\epsilon=\Bover1{1000}$. Player I does a little quick
thinking: `at the end, I shall need
$|\Bover1n-0|\le\epsilon=\Bover1{1000}$; now $|\Bover1n-0|$ is just
$\Bover1n$; how can I be sure (remembering that it's Player II who gets
to choose $n$) that s/he will choose $n$ so that
$\Bover1n\le\Bover1{1000}$? well, I'd better make sure that
$n\ge 1000$ -- can I do this? of course! I'll say that $n_0=1000$'.
So s/he does just that: $n_0=1000$. Now Player II has another move;
but whatever s/he does, s/he has to pick $n\ge 1000$, so that
$|\Bover1n-0|=\Bover1n\le\Bover1{1000}=\epsilon$, and Player I will win.
This is what happens if Player II plays $\epsilon=\Bover1{1000}$.
Could s/he have done any better? The trouble is that Player I is
liable to answer with $n_0=\Bover1{\epsilon}$. And if s/he does this,
then for any $n\ge n_0$ we shall have
$|\Bover1n-0|=\Bover1n\le\Bover1{n_0}=\epsilon$. Is there any way of
stopping him/her from playing $n_0=\Bover1{\epsilon}$? Actually, there
is, because if you look at the rules you see that $n_0$ has got to be an
integer. So if we take $\epsilon=\Bover1{1000\pi}$, Player I can't
answer with $n_0=\Bover1{\epsilon}=1000\pi$, because that's not an
integer (it's somewhere between 3141 and 3142). But this does us no
good, because Player I doesn't have to hit any particular mark exactly.
If s/he just chooses {\it some} integer $n_0\ge\Bover1{\epsilon}$ (e.g.,
$n_0=4000$ if $\epsilon=\Bover1{1000\pi}$), then for any $n\ge n_0$ we
shall still have $\Bover1n\le\Bover1{n_0}\le\epsilon$, and Player I will
still win in the end.
Note that Player I must absolutely not tell Player II what his/her
second move is going to be before Player II has committed to a
particular $\epsilon$. Because if Player I gets lazy, and just says
`$n_0=10^{100}$' without looking at what $\epsilon$ was, then Player II
can say `$\epsilon=\exp(-2^{10})$' for his/her first move and then
`$n=n_0$' for his/her second, and when they come to do the sums they
will get
\Centerline{$|\Bover1n-0|=\Bover1n=10^{-100}>e^{-300}>e^{-1024}$}
\noindent and Player II will win.
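Player I's reasoning in this example can be condensed into a one-line strategy. The sketch below (my own framing of the game, in Python) computes an integer $n_0\ge\Bover1{\epsilon}$ and checks it against a few of the replies open to Player II.

```python
import math

# Player I's strategy for the claim lim 1/n = 0: after Player II names
# eps > 0, answer with any integer n0 >= 1/eps, e.g. n0 = ceil(1/eps).
def player_one_move(eps):
    return max(1, math.ceil(1 / eps))

# Whatever n >= n0 Player II then picks, 1/n <= 1/n0 <= eps:
for eps in [1.0, 1 / 1000, 1 / (1000 * math.pi)]:
    n0 = player_one_move(eps)
    for n in [n0, n0 + 1, 10 * n0]:
        assert abs(1 / n - 0) <= eps
```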
\medskip
{\bf Example} Suppose that Player I starts by saying that
\Centerline{`$\lim_{n\to\infty}(-1)^n=1$'.}
\noindent Will s/he win? Suppose that Player II again tries a
moderately small $\epsilon>0$ -- e.g., $\epsilon=\Bover1{1000}$ -- and
Player I answers with his/her favourite $n_0=10^{100}$. Now Player II has
to pick $n\ge n_0$. S/he does a bit of rough working:
$$\eqalign{|(-1)^n-1|&=|1-1|=0\le\Bover1{1000}\text{ if }n\text{ is
even},\cr
&=|-1-1|=2\not\le\Bover1{1000}\text{ if }n\text{ is odd}.\cr}$$
\noindent So Player II wants to choose an {\it odd} number $n$. Can
s/he do it? Yes: $n=10^{100}+1$ will do nicely, because the only rules
s/he has to look out for are $n\in\Bbb N$, $n\ge n_0$. Is there
anything Player I could have done to stop this? No, because whatever
$n_0$ s/he picks, Player II just has to choose $n=n_0$ (if $n_0$ is odd)
or $n=n_0+1$ (if $n_0$ is even) and win. So Player I scuppered his/her
chances right at the beginning by saying that $\sequencen{(-1)^n}$
converges to $1$; this is just not true.
Of course in this case, Player II didn't have to take $\epsilon$ nearly
as small as $\Bover1{1000}$. Any number strictly less than $2$ would
have done; so s/he could have started with $\epsilon=1$, for instance,
showing that Player I wasn't just wrong, but had completely
misunderstood the sequence.
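Player II's winning line in this example is equally mechanical; here is a sketch of it (again my own framing, in Python):

```python
# Player II's refutation of the claim lim (-1)^n = 1.
eps = 1.0   # first move: any eps strictly less than 2 will do

def second_move(n0):
    # After Player I names n0, answer with an odd n >= n0.
    return n0 if n0 % 2 == 1 else n0 + 1

for n0 in [1, 2, 1000, 10**100]:
    n = second_move(n0)
    assert n >= n0 and n % 2 == 1
    assert abs((-1) ** n - 1) == 2 > eps   # Player II wins
```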
\lectureend{02/2}
\bigskip
\noindent{\bf The first theorem} I will now try to show how these ideas
can be assembled into a rigorous proof of a familiar fact.
\medskip
{\bf Theorem} If $\lim_{n\to\infty}x_n=b$ and $\lim_{n\to\infty}y_n=c$
then $\lim_{n\to\infty}x_n+y_n=b+c$.
\medskip
\noindent{\bf proof} I do not think the ideas of this proof can really
make sense without going through them at least twice, once in the order
in which one remembers them, and once in the order in which one writes
them out.
We are in the position of a Player I who has announced, as her first
move,
\Centerline{$\lim_{n\to\infty}x_n+y_n=b+c$.}
\noindent The theorem says that she can win from this position. If
Player II believes that the theorem is true, I suppose he will resign.
But if he is sceptical, he will choose an $\epsilon>0$ and challenge
Player I to find a successful move in reply. If we are to be sure that
Player I really can win whatever Player II does, we are going to have to
describe a {\it strategy} for
Player I; a set of rules to tell Player I what to do in any of the
positions which Player II can manoeuvre her into. Since we have no
idea what Player II is going to say for his first move (except that it
must be to choose some $\epsilon>0$), the proof more or less has to
begin with
\inset{Let $\epsilon>0$.}
\noindent At this point, Player I is going to have to think ahead.
Because Player II has another move; and Player I has to find an $n_0$
so large that Player II's subsequent choice of $n$ is certain to lose.
So the proof is going to have to look like this:
\inset{Let $\epsilon>0$.\hfill[Player II's first move.]
$\ldots$
Take $n_0$ such that $\ldots$\hfill[Player I's second move.]}
\noindent (Maybe we shall be able to specify a formula for $n_0$; but
we can't count on this, and maybe it will have to be chosen by some
seriously complicated process.) But we know what will come next,
because we know that Player II will choose some $n\ge n_0$; and since
we have no control over this at all, we shall just have to write it out
like that:
\inset{Let $\epsilon>0$.\hfill[Player II's first move.]
$\ldots$
Take $n_0$ such that $\ldots$\hfill[Player I's second move.]
Let $n\ge n_0$.\hfill[Player II's second move.]}
\noindent At this point, the players have finished their moves, and
proceed to see who has won. Now the question they have to decide is
\inset{is $|(x_n+y_n)-(b+c)|\le\epsilon$?}
\noindent because Player I will win if the answer is `yes', and lose if
the answer is `no'.
Everything I have written so far is just a matter of understanding the
structure of the game. It's got nothing to do with the actual
sequences involved. But we have come to the crunch. In order to be
sure that
$|(x_n+y_n)-(b+c)|\le\epsilon$, we need to know something about the
numbers we're looking at. And the {\bf key fact} is that
\Centerline{$|(x_n+y_n)-(b+c)|\le|x_n-b|+|y_n-c|$;}
\noindent this is just a simple application of the triangle inequality,
because $|(x_n+y_n)-(b+c)|=|(x_n-b)+(y_n-c)|\le|x_n-b|+|y_n-c|$; the
modulus of the sum is less than or equal to the sum of the moduli. So
it will be good enough if we can find some reason to be sure that $|x_n-
b|+|y_n-c|\le\epsilon$.
Why should this be so? It's helpful to try to think about what is
supposed to be happening here. We are supposing that
$\lim_{n\to\infty}x_n=b$ and that $\lim_{n\to\infty}y_n=c$ and that
$n\ge n_0$, and presumably Player I chose $n_0$ to be large; in which
case, surely, we shall have $x_n\bumpeq b$ and $y_n\bumpeq c$ in some
sense. So maybe we can actually arrange that
$|x_n-b|\le$ (something) and $|y_n-c|\le$ (something). What
`something' should we try, if we want the sum to be at most $\epsilon$?
There are two pieces, so if we take each of them to be
$\Bover{\epsilon}2$ that ought to work.
Thus our proof (growing slowly) might have a final line
\inset{$|(x_n+y_n)-(b+c)|\le|x_n-b|+|y_n-c|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$\hfill[Player I wins.]}
\noindent But of course we are going to have to work at this a bit.
How can we justify this idea that
\Centerline{if $n\ge n_0$, then $|x_n-b|\le\Bover{\epsilon}2$ and
$|y_n-c|\le\Bover{\epsilon}2$?}
\noindent Well, remember that we know that $\lim_{n\to\infty}x_n=b$, and
also that $\Bover{\epsilon}2>0$. So we know that there is some integer
-- call it $n_1$ -- such that $|x_n-b|\le\Bover{\epsilon}2$ whenever
$n\ge n_1$. (It might happen that the $n_1$ here will itself serve for
the $n_0$ we're looking for. But we mustn't count on it, so we had
better give it a new name.) Similarly, there is some $n_2\in\Bbb N$
such that $|y_n-c|\le\Bover{\epsilon}2$ for every $n\ge n_2$. Putting
these into the framework of the proof we are building, it looks like
this:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $n_1\in\Bbb N$ such that $|x_n-b|\le\Bover{\epsilon}2$ for
every
$n\ge n_1$.
There is an $n_2\in\Bbb N$ such that $|y_n-c|\le\Bover{\epsilon}2$ for
every
$n\ge n_2$.
Take $n_0$ such that $\ldots$.
Let $n\ge n_0$.
$\ldots$
So $|(x_n+y_n)-(b+c)|\le|x_n-b|+|y_n-c|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$.}
What do we need to do to fill this in? Well, we are going to need some
good reason why $|x_n-b|\le\Bover{\epsilon}2$ whenever $n\ge n_0$.
However, we picked $n_1$ so that $|x_n-b|\le\Bover{\epsilon}2$ whenever
$n\ge n_1$. So if $n_0\ge n_1$, that will do fine. Next, we also
need to know that
$|y_n-c|\le\Bover{\epsilon}2$ whenever $n\ge n_0$. Since we picked
$n_2$ so that $|y_n-c|\le\Bover{\epsilon}2$ whenever $n\ge n_2$, we
shall be successful if $n_0\ge n_2$. Thus what Player I needs to do is
to choose some $n_0$ such that $n_0\ge n_1$ and $n_0\ge n_2$. The most
straightforward way of doing this is to make $n_0$ actually the greater
of the two numbers. So the proof becomes
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $n_1\in\Bbb N$ such that $|x_n-b|\le\Bover{\epsilon}2$ for
every
$n\ge n_1$.
There is an $n_2\in\Bbb N$ such that $|y_n-c|\le\Bover{\epsilon}2$ for
every
$n\ge n_2$.
Take $n_0=\max(n_1,n_2)$.
Let $n\ge n_0$.
Then $n\ge n_1$ and $n\ge n_2$, so $|x_n-b|\le\Bover{\epsilon}2$ and
$|y_n-c|\le\Bover{\epsilon}2$ and
\Centerline{$|(x_n+y_n)-(b+c)|\le|x_n-b|+|y_n-c|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$.}
Since this works whatever $\epsilon>0$ and $n\ge n_0$ Player II chooses,
Player I is sure to win and the theorem is true.}
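The proof is really a recipe for composing strategies: given a winning reply for each summand separately, Player I calls each of them at $\Bover{\epsilon}2$ and takes the larger answer. Here is a Python sketch of this composition; the concrete sequences at the end are my own illustration, not from the notes.

```python
import math

# Strategy composition for the sum theorem: a "strategy" maps eps to a
# suitable n0. Given strategies for lim x_n = b and lim y_n = c, Player I
# wins the game for lim (x_n + y_n) = b + c by calling each at eps/2 and
# taking the max of the two answers.
def sum_strategy(strategy_x, strategy_y):
    return lambda eps: max(strategy_x(eps / 2), strategy_y(eps / 2))

# Concrete instance (illustrative assumption): x_n = 5 + 1/n -> 5 and
# y_n = 3 - 1/n -> 3, each with strategy n0 = ceil(1/eps).
x = lambda n: 5 + 1 / n
y = lambda n: 3 - 1 / n
strat = lambda eps: max(1, math.ceil(1 / eps))
combined = sum_strategy(strat, strat)

eps = 1 / 100
n0 = combined(eps)
for n in range(n0, n0 + 1000):
    assert abs((x(n) + y(n)) - (5 + 3)) <= eps
```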
\medskip
{\bf Remarks} I took a great deal of time over this. The points I am
trying to make are
\inset{you don't think of proofs in the order in which you finally write
them out;
you aim to set up the {\it structure} of a proof before filling in very
much of the detail.}
\noindent It won't be clear to you yet, but the proof here has a number
of features which are so common that they're worth putting in your bag
of tricks:
\inset{It starts with `Let $\epsilon>0$'}
\noindent (because the statement of the theorem is Player I's first
move, and we have to give Player II a free choice). It does
occasionally happen that it's worth taking a bit of space to clear the
ground before we ask Player II what he wants to do, but it can never be
actually wrong to get $\epsilon$ into view before we start thinking
about Player I's strategy.
\inset{$n_0=\max(n_1,n_2)$.}
\noindent This is by no means a universal rule; but it is exceedingly
common that when we come to choose $n_0$, we just make it the biggest
integer which has yet appeared.
\inset{$|x_n+y_n-b-c|\le|x_n-b|+|y_n-c|$.}
\noindent When you want to prove that a {\it sum} of any kind is less
than or equal to something, then on most (not all) occasions it's worth
breaking it up into pieces and using the triangle inequality. Of
course it's not always obvious which pieces to use. For instance, it's
perfectly true that
\Centerline{$|(x_n+y_n)-(b+c)|\le|x_n-c|+|y_n-b|$,}
\noindent and this fact is no use at all. However, the original
hypotheses
`$\lim_{n\to\infty}x_n=b$, $\lim_{n\to\infty}y_n=c$' should be a hint
that we want to keep the $x_n$ with $b$ and the $y_n$ with $c$.
\medskip
{\bf The second theorem} I expect you feel that learning forty proofs
like the one above will make this a pretty hard term. So it would, if
you had to learn them all independently. But in fact they run so close
together that for many of them we only have to remember odd clauses in
which they differ from another one. I will try to show this by giving
a result which looks very different from the one here, but uses almost
exactly the same ideas, if we look at it in the right way.
This theorem will be the same result, but for real functions instead of
for sequences. Now there is an extra complication here, so I pause for
a pair of definitions.
\medskip
{\bf Definitions (a)} A {\bf real function} is a function $f$ such that
$\dom f\subseteq\Bbb R$ and $f(x)\in\Bbb R$ for every $x\in\dom f$.
(The point here is that a very large proportion of the important
functions of mathematics aren't defined everywhere, starting with
$\Bover1x$, undefined at $0$. We have to have some way of coping with
this, and in the last fifty years it's become generally accepted that we
should specify the {\it domain} of every function at the moment when we
introduce it.)
\medskip
{\bf (b)} If $f$ is a real function and $a\in\Bbb R$, then
`$\lim_{x\to\infty}f(x)=a$' means
\Centerline{$\Forall\epsilon>0\Exists M\in\Bbb R\Forall x\ge M,\,
x\in\dom f$ and $|f(x)-a|\le\epsilon$.}
\noindent Compare this with the definition of
`$\lim_{n\to\infty}x_n=a$':
\Centerline{$\Forall\epsilon>0\Exists n_0\in\Bbb N\Forall n\ge n_0$,
$|x_n-a|\le\epsilon$.}
\noindent The $n_0\in\Bbb N$ has turned into $M\in\Bbb R$, the $n\ge
n_0$ has turned into $x\ge M$, and $x_n$ has turned into $f(x)$. And
there is a new complication, because when I was dealing with sequences I
took it for granted that $x_n$ would be defined for every relevant $n$.
But when dealing with general real functions, one can't take that sort
of thing without checking carefully; and before saying `$|f(x)-
a|\le\epsilon$' we have to say `$f(x)$ is defined'.
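As with sequences, this definition can be turned into a finite counterexample search; the sketch below (names my own) also tests the new clause `$x\in\dom f$', which gives the claim an extra way to fail.

```python
# Finite counterexample search for "lim_{x->oo} f(x) = a": given a trial
# M, check over a finite sample of x >= M that f(x) is defined and that
# |f(x) - a| <= eps. As with sequences, this can refute but not prove.
def violates_fn(f, in_dom, a, eps, M, xs):
    """Return some x >= M in xs where the claim fails, else None."""
    for x in xs:
        if x >= M:
            if not in_dom(x):
                return x          # f(x) undefined: the new way to fail
            if abs(f(x) - a) > eps:
                return x
    return None

# f(x) = 1/x (undefined at 0) tends to 0; for eps, the move M = 1/eps works:
eps = 0.01
xs = [0.0, 5.0, 100.0, 250.5, 1e6]
assert violates_fn(lambda x: 1 / x, lambda x: x != 0, 0, eps, 1 / eps, xs) is None
```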
\medskip
{\bf (c)} If $f$ and $g$ are real functions, then $f+g$ is the real
function defined by saying
\Centerline{$\dom(f+g)=\dom f\cap\dom g$,
\quad$(f+g)(x)=f(x)+g(x)$ for every $x\in\dom(f+g)$.}
\noindent (Of course the {\it idea} is in the formula
$(f+g)(x)=f(x)+g(x)$.
The domain $\dom(f+g)$ specified is just the set on which we can
calculate both $f(x)$ and $g(x)$ and use the formula.)
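One way of modelling such partial functions in code (my own modelling, not the notes' formalism) is as a pair consisting of a domain test and a formula, with $f+g$ defined exactly on $\dom f\cap\dom g$:

```python
# A real function as (domain test, formula); f + g is defined at x
# exactly when both f(x) and g(x) are.
class RealFunction:
    def __init__(self, in_domain, formula):
        self.in_domain = in_domain
        self.formula = formula

    def __call__(self, x):
        if not self.in_domain(x):
            raise ValueError("x not in the domain")
        return self.formula(x)

    def __add__(self, other):
        # dom(f+g) = dom f intersect dom g; (f+g)(x) = f(x) + g(x)
        return RealFunction(
            lambda x: self.in_domain(x) and other.in_domain(x),
            lambda x: self.formula(x) + other.formula(x),
        )

recip = RealFunction(lambda x: x != 0, lambda x: 1 / x)  # 1/x, undefined at 0
ident = RealFunction(lambda x: True, lambda x: x)
h = recip + ident                 # dom(h) = R \ {0}
assert h(2) == 1 / 2 + 2
assert not h.in_domain(0)
```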
Now the theorem is this:
\medskip
{\bf Theorem} If $f$ and $g$ are real functions and
$\lim_{x\to\infty}f(x)=b$, $\lim_{x\to\infty}g(x)=c$ then
$\lim_{x\to\infty}(f+g)(x)=b+c$.
\medskip
\noindent{\bf proof} What I am going to do is to go through the proof
which worked for sequences, make as few changes as possible, and see if
it still works. Here goes:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.}
\noindent No problems so far.
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $n_1\in\Bbb N$ such that $|x_n-b|\le\Bover{\epsilon}2$ for
every
$n\ge n_1$.}
\noindent Change `$n_1\in\Bbb N$' into `$M_1\in\Bbb R$', `$x_n$' into
`$f(x)$',
`$n\ge n_1$' into `$x\ge M_1$':
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $|f(x)-b|\le\Bover{\epsilon}2$ for
every
$x\ge M_1$.}
\noindent This won't quite do, because we are talking about $f(x)$
before we've said whether there is such a number; we had better change
to
\inset{There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.}
\noindent Carry on with the next line of the original proof:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $n_2\in\Bbb N$ such that $|y_n-c|\le\Bover{\epsilon}2$ for
every
$n\ge n_2$.}
\noindent Translate this line in the same way as the one above:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $M_2\in\Bbb R$ such that $x\in\dom g$ and
$|g(x)-c|\le\Bover{\epsilon}2$ for every $x\ge M_2$.}
\noindent Look at the next line:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $M_2\in\Bbb R$ such that $x\in\dom g$ and
$|g(x)-c|\le\Bover{\epsilon}2$ for every $x\ge M_2$.
Take $n_0=\max(n_1,n_2)$.}
\noindent We want $M$'s not $n$'s this time:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $M_2\in\Bbb R$ such that $x\in\dom g$ and
$|g(x)-c|\le\Bover{\epsilon}2$ for every $x\ge M_2$.
Take $M=\max(M_1,M_2)$.}
\noindent Carry on:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $M_2\in\Bbb R$ such that $x\in\dom g$ and
$|g(x)-c|\le\Bover{\epsilon}2$ for every
$x\ge M_2$.
Take $M=\max(M_1,M_2)$.
Let $n\ge n_0$.
Then $n\ge n_1$ and $n\ge n_2$, so $|x_n-b|\le\Bover{\epsilon}2$ and
$|y_n-c|\le\Bover{\epsilon}2$ and
\Centerline{$|(x_n+y_n)-(b+c)|\le|x_n-b|+|y_n-c|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$.}}
\noindent This becomes
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $M_2\in\Bbb R$ such that $x\in\dom g$ and
$|g(x)-c|\le\Bover{\epsilon}2$ for every $x\ge M_2$.
Take $M=\max(M_1,M_2)$.
Let $x\ge M$.
Then $x\ge M_1$ and $x\ge M_2$, so $x\in\dom f$ and $x\in\dom g$ and
$|f(x)-b|\le\Bover{\epsilon}2$ and
$|g(x)-c|\le\Bover{\epsilon}2$ and
\Centerline{$|(f(x)+g(x))-(b+c)|\le|f(x)-b|+|g(x)-c|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$.}}
\noindent There is only one thing missing: we want `$(f+g)(x)$' in the
last line instead of `$f(x)+g(x)$'. They are nearly the same thing,
except that before talking about `$(f+g)(x)$' we ought to explain why
$x\in\dom(f+g)$. But this is easy:
\inset{Let $\epsilon>0$. Then $\Bover{\epsilon}2>0$.
There is an $M_1\in\Bbb R$ such that $x\in\dom f$ and
$|f(x)-b|\le\Bover{\epsilon}2$ for every $x\ge M_1$.
There is an $M_2\in\Bbb R$ such that $x\in\dom g$ and
$|g(x)-c|\le\Bover{\epsilon}2$ for every
$x\ge M_2$.
Take $M=\max(M_1,M_2)$.
Let $x\ge M$.
Then $x\ge M_1$ and $x\ge M_2$, so $x\in\dom f$ and $x\in\dom g$ and
$x\in\dom(f+g)$. Also
$|f(x)-b|\le\Bover{\epsilon}2$ and
$|g(x)-c|\le\Bover{\epsilon}2$, so
\Centerline{$|(f+g)(x)-(b+c)|=|(f(x)+g(x))-(b+c)|\le|f(x)-b|+|g(x)-c|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$.}}
\noindent And there is the proof, complete.
\bigskip
\noindent{\bf The next theorem} For a couple of lectures now I shall be
producing variations on the arguments above. I am going to ask you to
learn them. The way to learn them is by comparing and contrasting.
You will find that each proof has one or two points in which it differs
from the others, and many more points in which it is almost the same.
For my first example of this, let me give another result for which you
know the fact but not the proof.
\medskip
{\bf Theorem} If $\lim_{n\to\infty}x_n=b$ and $\lim_{n\to\infty}y_n=c$,
then $\lim_{n\to\infty}x_ny_n=bc$.
\medskip
\noindent{\bf proof} I shall try to keep as close as possible to
the line used in the proof for $\lim_{n\to\infty}x_n+y_n=b+c$. Try
this:
\inset{Let $\epsilon>0$.
There is an $n_1\in\Bbb N$ such that $|x_n-b|\le ??$ whenever
$n\ge n_1$.
There is an $n_2\in\Bbb N$ such that $|y_n-c|\le ??$ whenever
$n\ge n_2$.
Set $n_0=\max(n_1,n_2)$.
If $n\ge n_0$, then $n\ge n_1$ and $n\ge n_2$ so $|x_n-b|\le??$ and
$|y_n-c|\le??$ and
\Centerline{$|x_ny_n-bc|\le ?? \le\epsilon$.}}
\noindent Now you see that I've put ?? in place of $\Bover{\epsilon}2$,
because it's hardly credible that the same formula will work. And in
the last line we have a completely new problem to solve, because at this
point we can expect to know something about $x_n-b$ and $y_n-c$, but
it's not at all clear how to turn this into something about $x_ny_n-bc$.
So here we need a new fact, which is
\Centerline{$x_ny_n-bc=(x_n-b)(y_n-c)+b(y_n-c)+c(x_n-b)$.}
\noindent This means that if $x_n-b$ and $y_n-c$ are both practically
zero, the right-hand-side will be a sum of three small pieces and the
left-hand-side will also be nearly zero. But just how small do $x_n-b$
and $y_n-c$ have to be? At this point it makes things much easier to
use a technical trick which will be very useful elsewhere. Since we
don't know what to put in the line
\inset{There is an $n_1\in\Bbb N$ such that $|x_n-b|\le ??$ whenever
$n\ge n_1$,}
\noindent invent a name for it; $\eta$, say. Then those two lines
will read
\inset{There is an $n_1\in\Bbb N$ such that $|x_n-b|\le\eta$ whenever
$n\ge n_1$.
There is an $n_2\in\Bbb N$ such that $|y_n-c|\le\eta$ whenever
$n\ge n_2$,}
\noindent and later on we shall have
\inset{If $n\ge n_0$, then $n\ge n_1$ and $n\ge n_2$ so $|x_n-b|\le\eta$
and $|y_n-c|\le\eta$.}
\noindent Now the point of this method is to be absolutely clear where
$\eta$ came from. This section is part of Player I's scheming after
Player II has chosen $\epsilon$. So it is {\it Player I} who chooses
$\eta$, probably in a way depending on $\epsilon$, as a preliminary to
looking for $n_1$ and $n_2$ and putting these together to produce $n_0$.
Suppose she does this. Then, in the last line, we shall have
\Centerline{$|x_ny_n-bc|=|(x_n-b)(y_n-c)+b(y_n-c)+c(x_n-b)|$.}
\noindent Now there is an absolutely standard way of treating this.
Whenever you have the modulus of a sum, and you want to show that it is
less than or equal to something, the first thing to try is the sum of
the moduli:
$$\eqalign{|x_ny_n-bc|&=|(x_n-b)(y_n-c)+b(y_n-c)+c(x_n-b)|\cr
&\le|(x_n-b)(y_n-c)|+|b(y_n-c)|+|c(x_n-b)|.\cr}$$
\noindent Next, given the modulus of a product, it is usually right to
rewrite it as the product of the moduli:
$$\eqalign{|x_ny_n-bc|&=|(x_n-b)(y_n-c)+b(y_n-c)+c(x_n-b)|\cr
&\le|(x_n-b)(y_n-c)|+|b(y_n-c)|+|c(x_n-b)|\cr
&=|x_n-b||y_n-c|+|b||y_n-c|+|c||x_n-b|.\cr}$$
\noindent Remember that by the time we've got this far, we have
$|x_n-b|\le\eta$ (whatever $\eta$ may be) and $|y_n-c|\le\eta$. So
we get
$$\eqalign{|x_ny_n-bc|&=|(x_n-b)(y_n-c)+b(y_n-c)+c(x_n-b)|\cr
&\le|(x_n-b)(y_n-c)|+|b(y_n-c)|+|c(x_n-b)|\cr
&=|x_n-b||y_n-c|+|b||y_n-c|+|c||x_n-b|\cr
&\le\eta\cdot\eta+|b|\eta+|c|\eta\cr
&=\eta(\eta+|b|+|c|).\cr}$$
We shall therefore be home, with a win for Player I, if she can arrange
that $\eta(\eta+|b|+|c|)\le\epsilon$. Well, if she can solve
quadratics, she will actually be able to get
$\eta(\eta+|b|+|c|)=\epsilon$. But, first, this is a bother, and,
second, there are difficulties with proving that numbers have square
roots; so I much prefer to use a more primitive argument. Remember
that Player I chooses $\eta$ immediately after seeing the $\epsilon$
chosen by Player II. Player I is allowed to choose any $\eta>0$.
($\eta$ really must be strictly greater than $0$ or it may not be true
that $|x_n-b|$ and $|y_n-c|$ are less than or equal to $\eta$ for enough
$n$.) In particular, Player I is certainly allowed to insist that
$\eta\le 1$. Now if $\eta\le 1$, we just need
$\eta(1+|b|+|c|)\le\epsilon$, that is,
$\eta\le\Bover{\epsilon}{1+|b|+|c|}$. So if Player I takes $\eta$ to
be $\min(1,\Bover{\epsilon}{1+|b|+|c|})$, she should win.
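Player I's recipe can be sanity-checked numerically. The sketch below
(not part of the notes; the sample values of $\epsilon$, $b$, $c$ are
arbitrary) just verifies that $\eta=\min(1,\epsilon/(1+|b|+|c|))$ always
satisfies $\eta(\eta+|b|+|c|)\le\epsilon$:

```python
# Check Player I's choice eta = min(1, eps/(1+|b|+|c|)): it guarantees
# eta*(eta+|b|+|c|) <= eps, which is exactly the bound needed for
# |x_n*y_n - b*c| once |x_n-b| <= eta and |y_n-c| <= eta.
def eta_for(eps, b, c):
    return min(1.0, eps / (1 + abs(b) + abs(c)))

for eps in (1.0, 0.1, 1e-6):
    for b, c in ((3.0, -2.0), (0.0, 0.0), (100.0, 0.5)):
        eta = eta_for(eps, b, c)
        assert eta > 0                                  # a legal move
        assert eta * (eta + abs(b) + abs(c)) <= eps     # the winning bound
```

Since $\eta\le 1$, we have $\eta(\eta+|b|+|c|)\le\eta(1+|b|+|c|)\le\epsilon$,
which is all the final chain of inequalities needs.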
Let's write the whole proof out on this basis.
\inset{Let $\epsilon>0$.\hfill[Player II's first move.]
Set $\eta=\min(1,\Bover{\epsilon}{1+|b|+|c|})>0$.
\hfill[Player I is getting into position.]
There is an $n_1\in\Bbb N$ such that $|x_n-b|\le\eta$ whenever
$n\ge n_1$.
There is an $n_2\in\Bbb N$ such that $|y_n-c|\le\eta$ whenever
$n\ge n_2$.
Set $n_0=\max(n_1,n_2)$.\hfill[Player I's second move.]
If $n\ge n_0$, \hfill[Player II's second move]
then $n\ge n_1$ and $n\ge n_2$ so $|x_n-b|\le\eta$ and
$|y_n-c|\le\eta$ and
$$\eqalignno{|x_ny_n-bc|&=|(x_n-b)(y_n-c)+b(y_n-c)+c(x_n-b)|\cr
&\le|(x_n-b)(y_n-c)|+|b(y_n-c)|+|c(x_n-b)|\cr
&=|x_n-b||y_n-c|+|b||y_n-c|+|c||x_n-b|\cr
&\le\eta\cdot\eta+|b|\eta+|c|\eta\cr
&=\eta(\eta+|b|+|c|)\cr
&\le\eta(1+|b|+|c|)&\text{ because }\eta\le 1\cr
&\le\epsilon
&\text{ because }\eta\le\Bover{\epsilon}{1+|b|+|c|}.\cr}$$
So Player I wins and $\lim_{n\to\infty}x_ny_n=bc$.}
\medskip
{\bf More real functions: Definition} Suppose that $f$ is a real
function and that $a$, $b\in\Bbb R$. Then `$\lim_{x\to a}f(x)=b$'
means
\inset{for every $\epsilon>0$ there is a $\delta>0$ such that $f(x)$ is
defined and $|f(x)-b|\le\epsilon$ whenever $0<|x-a|\le\delta$.}
\noindent What is the idea here? The claim is that $f(x)\bumpeq b$
whenever $x\bumpeq a$ and $x\ne a$. (Remember that when we consider
$\lim_{x\to a}f(x)$, one value of $f$ which we {\it never} look at is
$f(a)$, even if it's defined.) As before, it's Player II who decides
what `$f(x)\bumpeq b$' means, by choosing $\epsilon>0$, and following
this Player I decides what `$x\bumpeq a$' means, by choosing a
$\delta>0$; finally Player II gets to choose $x$. As in the
definition of $\lim_{x\to\infty}$, we have to provide for the
possibility that $f$ is not defined everywhere, and I do this by
insisting that Player I must choose a $\delta$ such that $f(x)$ is
defined whenever $|x-a|\le\delta$ (and $x\ne a$); if she can't (for
instance, if $a=0$ and $f(x)=\sqrt{x}$ for $x\ge 0$, but undefined for
$x<0$), then Player I is bound to lose, and I don't allow myself to say
`$\lim_{x\to 0}\sqrt{x}=0$'. (Of course I do allow
`$\lim_{x\downarrow 0}\sqrt{x}=0$', but that's a different game.)
\medskip
{\bf Definition} If $f$ and $g$ are real functions, then $f\times g$ is
the real function defined by saying
\Centerline{$\dom(f\times g)=\dom f\cap\dom g$,
\quad$(f\times g)(x)=f(x)g(x)$ for $x\in\dom(f\times g)$.}
\noindent I give these definitions so that we can have a theorem about
the limit of a product of real functions, as follows.
\medskip
{\bf Theorem} If $f$ and $g$ are real functions, and
$\lim_{x\to a}f(x)=b$ and $\lim_{x\to a}g(x)=c$, then
$\lim_{x\to a}(f\times g)(x)=bc$.
\medskip
\noindent{\bf proof} This is a translation of the theorem on the product
of sequences, just as the theorem
$\lim_{x\to\infty}(f+g)(x)=\lim_{x\to\infty}f(x)+\lim_{x\to\infty}g(x)$
is a translation of the theorem on the sum of sequences. Let me write
it out directly.
Let $\epsilon>0$. Set $\eta=\min(1,\Bover{\epsilon}{1+|b|+|c|})>0$.
There is a $\delta_1>0$ such that $x\in\dom f$ and $|f(x)-b|\le\eta$
whenever $0<|x-a|\le\delta_1$.
There is a $\delta_2>0$ such that $x\in\dom g$ and $|g(x)-c|\le\eta$
whenever $0<|x-a|\le\delta_2$.
Set $\delta=\min(\delta_1,\delta_2)$. If $0<|x-a|\le\delta$, then
$0<|x-a|\le\delta_1$ and $0<|x-a|\le\delta_2$, so $x\in\dom f$ and
$x\in\dom g$ and $x\in\dom(f\times g)$; also $|f(x)-b|\le\eta$ and
$|g(x)-c|\le\eta$, so
$$\eqalignno{|f(x)g(x)-bc|&=|(f(x)-b)(g(x)-c)+b(g(x)-c)+c(f(x)-b)|\cr
&\le|(f(x)-b)(g(x)-c)|+|b(g(x)-c)|+|c(f(x)-b)|\cr
&=|f(x)-b||g(x)-c|+|b||g(x)-c|+|c||f(x)-b|\cr
&\le\eta^2+|b|\eta+|c|\eta
=\eta(\eta+|b|+|c|)
\le\eta(1+|b|+|c|)
\le\epsilon.\cr}$$
\noindent As this works for any $\epsilon>0$, Player I can win whatever
Player II does at his first move, and $\lim_{x\to a}(f\times g)(x)=bc$
is true.
\medskip
\noindent{\bf Remark} Look again at the translation. $\epsilon$ and
$\eta$ do exactly the same things as before. $n_1$ and $n_2$ turn into
$\delta_1$ and $\delta_2$, and $n_0$ turns into $\delta$. But observe
that $\delta$ is the {\it minimum} of $\delta_1$ and $\delta_2$, while
$n_0$ was the {\it maximum} of $n_1$ and $n_2$. This is because of the
rule change concerning the next move. In the case of sequences, Player
II has to choose $n\ge n_0$. So in order to make him choose an $n$
simultaneously greater than or equal to $n_1$ and $n_2$, Player I
chooses $n_0$ so that $n_0\ge n_1$ and $n_0\ge n_2$, and the easiest
such choice is $n_0=\max(n_1,n_2)$. But in the case of
$\lim_{x\to a}$, Player II has to choose $x$ such that $|x-a|\le\delta$,
and Player I wants to be sure that $|x-a|\le\delta_1$ and that
$|x-a|\le\delta_2$; so she takes $\delta=\min(\delta_1,\delta_2)$.
As a general rule, when choosing moves for Player I, you aim to make
things difficult for Player II, by reducing his choices as much as you
can. In this case, it's done by making $\delta$ close to $0$ (but
remembering that $\delta=0$ is cheating).
After this, the difference is mostly that every $x_n$ turns into $f(x)$
and every $y_n$ turns into $g(x)$; but, just as in the theorem on
$\lim_{x\to\infty}(f+g)(x)$, we have to preface every statement about
$f(x)$, $g(x)$ or $(f\times g)(x)$ with an explanation of why $x$
belongs to the domain of the function. In the lines
\inset{there is a $\delta_1>0$ such that $x\in\dom f$ and
$|f(x)-b|\le\eta$ whenever $0<|x-a|\le\delta_1$,
there is a $\delta_2>0$ such that $x\in\dom g$ and $|g(x)-c|\le\eta$
whenever $0<|x-a|\le\delta_2$}
\noindent this came from the definitions of $\lim_{x\to a}f(x)=b$,
$\lim_{x\to a}g(x)=c$; in the line
\inset{$x\in\dom f$ and $x\in\dom g$ and $x\in\dom(f\times g)$}
\noindent it came from the definition of $f\times g$.
\medskip
{\bf Three more definitions} As well as $\lim_{x\to\infty}$ and
$\lim_{x\to a}$, we sometimes want to look at
$\lim_{x\to-\infty}f(x)$, $\lim_{x\uparrow a}f(x)$ and
$\lim_{x\downarrow a}f(x)$. The definitions are as follows.
$\lim_{x\to-\infty}f(x)=b$ means
\inset{for every $\epsilon>0$ there is an $M\in\Bbb R$ such that
$x\in\dom f$ and $|f(x)-b|\le\epsilon$ whenever $x\le M$.}
\noindent (Note that the only difference between this and the definition
of `$\lim_{x\to\infty}f(x)=b$' is that we have `$x\le M$' instead of
`$x\ge M$'. But of course this makes a big difference to Player I's
tactics. When playing from the initial position
`$\lim_{x\to\infty}f(x)=b$', Player I will generally take an $M$ far,
far to the right, so that Player II will be seriously constrained by the
rule `$x\ge M$'. While if
playing from initial position `$\lim_{x\to-\infty}f(x)=b$', Player I
will generally take an $M$ correspondingly far to the left, so that
Player II will have to go to the edge of the universe to satisfy the
requirement `$x\le M$'.)
\medskip
$\lim_{x\downarrow a}f(x)=b$ means
\inset{for every $\epsilon>0$ there is a $\delta>0$ such that
$x\in\dom f$ and $|f(x)-b|\le\epsilon$ whenever $a<x\le a+\delta$.}
\noindent The idea is that $f(x)\bumpeq b$ whenever $x\bumpeq a$ and
`$x>a$', or `$x$ is just to the right of $a$'.
\medskip
Finally, $\lim_{x\uparrow a}f(x)=b$ means
\inset{for every $\epsilon>0$ there is a $\delta>0$ such that
$x\in\dom f$ and $|f(x)-b|\le\epsilon$ whenever
$a-\delta\le x<a$.}
\medskip
{\bf Theorem} If $\sequencen{x_n}$ is a real sequence,
$\lim_{n\to\infty}x_n=b$ and $b\ne 0$, then
$\lim_{n\to\infty}\Bover1{x_n}=\Bover1b$.
\medskip
\noindent{\bf proof} Let $\epsilon>0$. Set
$\eta=\min(\Bover{|b|}2,\bover12b^2\epsilon)>0$. Then there is an
$n_0\in\Bbb N$ such that $|x_n-b|\le\eta$ for every $n\ge n_0$. If
$n\ge n_0$, then
$$\eqalignno{|x_n|
&\ge|b|-|x_n-b|\cr
\displaycause{because $|x_n|+|x_n-b|=|x_n|+|b-x_n|\ge|x_n+b-x_n|=|b|$}
&\ge|b|-\eta\ge|b|-\Bover12|b|=\Bover12|b|>0,\cr}$$
\noindent so that $x_n\ne 0$ and $\Bover1{x_n}$ is defined and
$$\eqalignno{|\Bover1{x_n}-\Bover1b|
&=|\Bover{b-x_n}{x_nb}|
=\Bover{|b-x_n|}{|x_n||b|}\cr
&\le\Bover{\eta}{|x_n||b|}\cr
\displaycause{because $|b-x_n|=|x_n-b|\le\eta$}
&\le\bover{\eta}{\bover12|b||b|}\cr
\displaycause{because $|x_n|\ge\Bover12|b|$}
&=\Bover{2\eta}{b^2}\le\Bover2{b^2}\cdot\bover12b^2\epsilon=\epsilon.
\cr}$$
\noindent As $\epsilon$ is arbitrary,
$\lim_{n\to\infty}\Bover1{x_n}=\Bover1b$.
\medskip
\noindent{\bf Remarks} Some of the ideas here are new, some are taken
from earlier proofs. The idea of taking $n_0$ such that $|x_n-
b|\le\eta$ for
$n\ge n_0$, where $\eta$ is some more or less complicated function of
$\epsilon$, is taken from the theorems on products of sequences or
functions.
This time we have only one sequence so we don't have the step
`$n_0=\max(n_1,n_2)$'. What we have instead is a more complicated
string of inequalities at the end. The steps
\Centerline{$|\Bover1{x_n}-\Bover1b|=\ldots=\Bover{|b-x_n|}{|x_n||b|}$}
\noindent are standard; nearly always, in these expressions, if we have
the modulus of a product or quotient we express it as the product or
quotient of the moduli and see what happens. Now in the expression
\Centerline{$\Bover{|b-x_n|}{|x_n||b|}$}
\noindent the $|b-x_n|$ on top is no problem at all; we know that by
the time we've reached this stage, we shall have $|b-x_n|\le\eta$, so we
just replace it by $\eta$. The new difficulty is in the $|x_n|$ on the
bottom. If we are to be sure that $\Bover{\eta}{|x_n||b|}$ is
{\it less} than or equal to $\epsilon$, we shall need to know that
$|x_n|$ is {\it greater} than or equal to something. And that's where
the line
\Centerline{$|x_n|\ge|b|-|x_n-b|\ldots$}
\noindent comes in. Provided Player I takes $\eta<|b|$, she can be
sure that
$|x_n|\ge|b|-\eta$ so that $\Bover1{|x_n|}\le\Bover1{|b|-\eta}$ and
$\Bover{\eta}{|x_n||b|}\le\Bover{\eta}{(|b|-\eta)|b|}$, and by making
$\eta$ small enough she can ensure that this will be at most $\epsilon$.
The actual
formula
\Centerline{$\eta=\min(\Bover{|b|}2,\Bover12b^2\epsilon)$}
\noindent is just a trick for guaranteeing the three facts
\Centerline{$\eta>0$,\quad $\eta<|b|$,\quad
$\Bover{\eta}{(|b|-\eta)|b|}\le\epsilon$}
\noindent without solving any quadratic equations.
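The same kind of numerical sanity check as before applies here. This
sketch (mine, with arbitrary sample values of $b\ne 0$ and $\epsilon$)
verifies the three facts for $\eta=\min(\Bover{|b|}2,\Bover12b^2\epsilon)$:

```python
# Check the three facts guaranteed by eta = min(|b|/2, (b^2)*eps/2)
# when b != 0: eta > 0, eta < |b|, and eta/((|b|-eta)*|b|) <= eps.
def eta_recip(eps, b):
    return min(abs(b) / 2, b * b * eps / 2)

for eps in (1.0, 0.01):
    for b in (2.0, -0.5, 10.0):
        eta = eta_recip(eps, b)
        assert eta > 0                                   # a legal move
        assert eta < abs(b)                              # keeps x_n away from 0
        assert eta / ((abs(b) - eta) * abs(b)) <= eps    # the winning bound
```

Because $\eta\le|b|/2$, the denominator $(|b|-\eta)|b|$ is at least
$\Bover12b^2$, and then $\eta\le\Bover12b^2\epsilon$ finishes the bound.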
\medskip
{\bf Definition} Let $f$ be a real function. Then the real function
$\Bover1f$ is defined by saying
\Centerline{$\dom\Bover1f=\{x:x\in\dom f,\,f(x)\ne 0\}$,}
\Centerline{$\Bover1f(x)=\Bover1{f(x)}$ for every $x\in\dom\Bover1f$.}
\medskip
{\bf Theorem} Let $f$ be a real function, and suppose that
$\lim_{x\to-\infty}f(x)=b\ne 0$. Then
$\lim_{x\to-\infty}\Bover1f(x)=\Bover1b$.
\medskip
\noindent{\bf proof} Let $\epsilon>0$. Set
$\eta=\min(\Bover{|b|}2,\Bover12b^2\epsilon)>0$. Then there is an
$M\in\Bbb R$ such that $x\in\dom f$ and $|f(x)-b|\le\eta$ for every
$x\le M$. If $x\le M$, then $f(x)$ is defined and
$$\eqalignno{|f(x)|
&\ge|b|-|f(x)-b|\cr
&\ge|b|-\eta\ge\Bover12|b|>0,\cr}$$
\noindent so that $f(x)\ne 0$, $x\in\dom\Bover1f$, $\Bover1{f(x)}$ is
defined and
$$\eqalignno{|\Bover1{f(x)}-\Bover1b|
&=\Bover{|b-f(x)|}{|f(x)||b|}
\le\bover{\eta}{\bover12|b||b|}\cr
&=\Bover{2\eta}{b^2}\le\epsilon.\cr}$$
\noindent As $\epsilon$ is arbitrary,
$\lim_{x\to-\infty}\Bover1{f(x)}=\Bover1b$.
\bigskip
\noindent{\bf Continuous Functions: Definition} Let $f$ be a real
function. We say that $f$ {\bf is continuous at} $x_0$ if $x_0\in\dom
f$ and
\inset{for every $\epsilon>0$ there is a $\delta>0$ such that
\inset{$|f(x)-f(x_0)|\le\epsilon$ whenever $x\in\dom f$ and
$|x-x_0|\le\delta$.}}
\medskip
\noindent{\bf Remark} Note a very important rule change compared with
the formula $\lim_{x\to x_0}f(x)=f(x_0)$. If Player I says `$f$ is
continuous at $x_0$', then Player II chooses $\epsilon$, Player I
chooses $\delta$ and Player II chooses $x$, just as before. But this
time it is Player II's responsibility to ensure that $x\in\dom f$. For
`$\lim_{x\to x_0}f(x)=b$', Player II was allowed any $x$ such that
$0<|x-x_0|\le\delta$, and if he could find one outside the domain of $f$
he would win. But for `$f$ is continuous at $x_0$', Player II has to
pick $x\in\dom f$. He is now allowed to pick $x=x_0$, but of course
that does him no good at all (which is why it's allowed). So Player II
has a lot less freedom at his second move, and it's easier for Player I
to win.
The reason for this change is that it's useful to be able to say that
$\sqrt{}$ is continuous at $0$. But there is no way of making sense of
the formula
$\lim_{x\uparrow 0}\sqrt x$ (if we want to stick to real numbers), and
if we want to keep the rule
\Centerline{$\lim_{x\to a}f(x)=b$ iff
$\lim_{x\downarrow a}f(x)=\lim_{x\uparrow a}f(x)=b$,}
\noindent then we are going to have to abandon any attempt to interpret
the formula $\lim_{x\to 0}\sqrt x=0$, at least for the `real' function
$\sqrt{}$.
\medskip
{\bf Theorem} If $f$ and $g$ are continuous real functions, so are
$f+g$, $f\times g$ and $\Bover1f$.
\medskip
\noindent{\bf proof (a)} Take $x_0\in\dom(f+g)$ and $\epsilon>0$. Let
$\delta_1$, $\delta_2>0$ be such that
\Centerline{$|f(x)-f(x_0)|\le\Bover12\epsilon$ whenever $x\in\dom f$ and
$|x-x_0|\le\delta_1$,}
\Centerline{$|g(x)-g(x_0)|\le\Bover12\epsilon$ whenever $x\in\dom g$ and
$|x-x_0|\le\delta_2$.}
\noindent Set $\delta=\min(\delta_1,\delta_2)>0$. Then if
$x\in\dom(f+g)$ and $|x-x_0|\le\delta$,
\Centerline{$|(f+g)(x)-(f+g)(x_0)|= |f(x)+g(x)-f(x_0)-g(x_0)|
\le|f(x)-f(x_0)|+|g(x)-
g(x_0)|\le\Bover12\epsilon+\Bover12\epsilon=\epsilon$.}
\noindent As $x_0$ and $\epsilon$ are arbitrary, $f+g$ is continuous.
\medskip
{\bf (b)} Take $x_0\in\dom(f\times g)$ and $\epsilon>0$. Set
$\eta=\min(1,\Bover{\epsilon}{1+|f(x_0)|+|g(x_0)|})>0$. Let
$\delta_1$, $\delta_2>0$ be such that
\Centerline{$|f(x)-f(x_0)|\le\eta$ whenever $x\in\dom f$ and
$|x-x_0|\le\delta_1$,}
\Centerline{$|g(x)-g(x_0)|\le\eta$ whenever $x\in\dom g$ and
$|x-x_0|\le\delta_2$.}
\noindent Set $\delta=\min(\delta_1,\delta_2)>0$. Then if
$x\in\dom(f\times g)$ and $|x-x_0|\le\delta$,
$$\eqalign{|(f\times g)(x)-(f\times g)(x_0)|
&= |f(x)g(x)-f(x_0)g(x_0)|\cr
&\le|f(x)-f(x_0)||g(x)-g(x_0)|+|f(x_0)||g(x)-g(x_0)|+|f(x)-
f(x_0)||g(x_0)|\cr
&\le\eta(\eta+|f(x_0)|+|g(x_0)|)
\le\eta(1+|f(x_0)|+|g(x_0)|)
\le\epsilon.\cr}$$
\noindent As $x_0$ and $\epsilon$ are arbitrary, $f\times g$ is
continuous.
\medskip
{\bf (c)} Take $x_0\in\dom(\Bover1f)$ and $\epsilon>0$. Set
$\eta=\min(\Bover12|f(x_0)|,\Bover12\epsilon|f(x_0)|^2)>0$. Let
$\delta>0$ be such that
\Centerline{$|f(x)-f(x_0)|\le\eta$ whenever $x\in\dom f$ and
$|x-x_0|\le\delta$.}
\noindent Then if
$x\in\dom(\Bover1f)$ and $|x-x_0|\le\delta$,
\Centerline{$|f(x)|\ge|f(x_0)|-|f(x)-f(x_0)|\ge|f(x_0)|-\eta
\ge\Bover12|f(x_0)|$,}
\noindent so
$$\eqalign{|(\bover1f)(x)-(\bover1f)(x_0)|
&=\bigl|\bover1{f(x)}-\bover1{f(x_0)}\bigr|
=\bover{|f(x_0)-f(x)|}{|f(x)||f(x_0)|}\cr
&\le\bover{\eta}{\bover12|f(x_0)||f(x_0)|}
\le\epsilon.\cr}$$
\noindent As $x_0$ and $\epsilon$ are arbitrary, $\Bover1f$ is
continuous.
\medskip
{\bf Definition} If $f$ and $g$ are real functions, their {\bf
composition} $f\smallcirc g$ is defined by saying
\Centerline{$\dom(f\smallcirc g)=\{x:x\in\dom g,\,g(x)\in\dom f\}$,
\quad$(f\smallcirc g)(x)=f(g(x))$ for $x\in\dom(f\smallcirc g)$.}
\medskip
{\bf Theorem} If $f$ and $g$ are continuous real functions, then
$f\smallcirc g$ is continuous.
\medskip
\noindent{\bf proof} Take $x_0\in\dom(f\smallcirc g)$ and $\epsilon>0$.
Then $g(x_0)\in\dom f$ so there is an $\eta>0$ such that
$|f(y)-f(g(x_0))|\le\epsilon$ whenever $y\in\dom f$ and $|y-
g(x_0)|\le\eta$. Next, $x_0\in\dom g$ so there is a $\delta>0$ such
that $|g(x)-g(x_0)|\le\eta$ whenever $x\in\dom g$ and $|x-
x_0|\le\delta$.
If $x\in\dom(f\smallcirc g)$ and $|x-x_0|\le\delta$, then $g(x)\in\dom
f$ and $|g(x)-g(x_0)|\le\eta$, so
\Centerline{$|(f\smallcirc g)(x)-(f\smallcirc g)(x_0)|
=|f(g(x))-f(g(x_0))|\le\epsilon$.}
\noindent As $x_0$ and $\epsilon$ are arbitrary, $f\smallcirc g$ is
continuous.
\medskip
{\bf Examples (a)} Constant functions are continuous. \Prf\ Suppose
that $f(x)=c$ for $x\in\dom f$. Take $x_0\in\dom f$ and $\epsilon>0$.
Set $\delta=1$. Then if $x\in\dom f$ and $|x-x_0|\le\delta$,
\Centerline{$|f(x)-f(x_0)|=|c-c|=0\le\epsilon$.}
\noindent As $x_0$ and $\epsilon$ are arbitrary, $f$ is continuous.\
\Qed
\noindent{\bf Remark} Observe that in this (quite exceptional) case,
Player I can announce her second move $\delta=1$ {\it before} Player II
has played his first move; her position is so strong that he has no way
of using the advance knowledge.
\medskip
\quad{\bf (b)} Identity functions are continuous. \Prf\ Suppose that
$f(x)=x$ for $x\in\dom f$.
Take $x_0\in\dom f$ and $\epsilon>0$. Set $\delta=\epsilon$. Then
if $x\in\dom f$ and $|x-x_0|\le\delta$,
\Centerline{$|f(x)-f(x_0)|=|x-x_0|\le\delta=\epsilon$.}
\noindent As $x_0$ and $\epsilon$ are arbitrary, $f$ is continuous.\
\Qed
\noindent{\bf Remark} In {\it this} case, Player I has to know part of
Player II's move; but if she doesn't pay attention properly and misses
the announcement of $x_0$, she can still win by playing
$\delta=\epsilon$, and ask what $x_0$ is afterwards, when they come to
check the calculation of $f(x)-f(x_0)$.
\medskip
\quad{\bf (c)} The function $x\mapsto|x|:\Bbb R\to\Bbb R$ is continuous.
\Prf\ Take $x_0\in\Bbb R$ and $\epsilon>0$. Set $\delta=\epsilon$.
Then if $x\in\Bbb R$ and $|x-x_0|\le\delta$,
\Centerline{$||x|-|x_0||\le|x-x_0|\le\delta=\epsilon$.}
\noindent As $x_0$ and $\epsilon$ are arbitrary, $|\,\,|$ is
continuous.\ \Qed
\medskip
{\bf Corollary} For any continuous real function $f$,
$x\mapsto|f(x)|:\dom f\to\Bbb R$ is continuous. \Prf\ This is just the
composition $|\,\,|\smallcirc f$ of two continuous functions.\ \Qed
\medskip
{\bf Remark} We now know that $f+g$, $f\times g$, $\Bover1f$ and
$f\smallcirc g$ are continuous whenever $f$ and $g$ are, and that
$x\mapsto c$, $x\mapsto x$ and $x\mapsto|x|$ are continuous functions
from $\Bbb R$ to itself for any $c\in\Bbb R$. Putting these together,
we see (for instance) that
\inset{$x\mapsto x^2:\Bbb R\to\Bbb R$ is continuous}
\noindent (because this is just $f\times f$, where $f(x)=x$ for every
$x\in\Bbb R$),
\inset{$x\mapsto 2x^2:\Bbb R\to\Bbb R$ is continuous}
\noindent (being the product of the continuous functions $x\mapsto x^2$
and $x\mapsto 2$),
\inset{$x\mapsto 2x^3:\Bbb R\to\Bbb R$ is continuous}
\noindent (being the product of the continuous functions $x\mapsto 2x^2$
and $x\mapsto x$),
\inset{$x\mapsto\Bover1{x^2}:\Bbb R\setminus\{0\}\to\Bbb R$ is
continuous}
\noindent (being the reciprocal of a continuous function),
\inset{$x\mapsto x+\Bover1{x^2}:\Bbb R\setminus\{0\}\to\Bbb R$ is
continuous}
\noindent (being the sum of two continuous functions),
\inset{$x\mapsto 2(x+\Bover1{x^2})^3:\Bbb R\setminus\{0\}\to\Bbb R$ is
continuous}
\noindent (being the composition of the functions $x\mapsto 2x^3$ and
$x\mapsto x+\Bover1{x^2}$). Generally, most of the functions we have
names for are continuous; and we can prove that a function with a
formula like
\Centerline{$h(x)=\Bover{\sin(3x+1)}{1-\ln\cos x}$}
\noindent is continuous as soon as we know that $\sin$, $\cos$, $\ln$
are continuous (which I am afraid will be just outside the scope of this
course).
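The last example can be assembled from its pieces numerically. In this
sketch (the helper names {\tt f}, {\tt g}, {\tt h} are mine, not from
the notes) each piece is one of the building blocks just listed:

```python
# Building 2(x + 1/x^2)^3 from named continuous pieces.
def f(x):
    return 2 * x ** 3            # x |-> 2x^3, continuous on all of R

def g(x):
    return x + 1 / x ** 2        # x |-> x + 1/x^2, continuous for x != 0

def h(x):
    return f(g(x))               # the composition x |-> 2(x + 1/x^2)^3

assert h(1.0) == 2 * 2.0 ** 3    # g(1) = 2, so h(1) = 2 * 8 = 16
```

Each rule in the list above corresponds to one construction here: a
product of identities, a sum with a reciprocal, then a composition.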
\bigskip
\noindent{\bf Dedekind completeness} I come now to the final basic
property of the real number system which I left out of the initial list
of properties of addition, multiplication and the order relation. For
this we need some terminology. If $A\subseteq\Bbb R$ is any set, an
{\bf upper bound} of $A$ is an $x\in\Bbb R$ such that $a\le x$ for every
$a\in A$, and a {\bf lower bound} of $A$ is an $x\in\Bbb R$ such that
$x\le a$ for every $a\in A$.
\medskip
\inset{{\bf Examples} If $A=[0,1]$ then $2$, $\pi$, $1$ are upper bounds
of $A$ and $-1$, $0$ are lower bounds.
If $A=\Bbb N$ then $-1$, $-\bover14$, $0$ are lower bounds of $A$, but
$A$ has no upper bounds.
If $A=\Bbb Z$ then $A$ has no upper bounds and no lower bounds.
If $A=\ooint{0,1}$ then $1$ is an upper bound of $A$ but
$0{\cdot}999999$ is not, because $0{\cdot}9999991\in A$.
If $A=\emptyset$ then every real number is both an upper bound and a
lower bound for $A$.}
\medskip
If $A$ has a least upper bound, I will call it the {\bf supremum} of
$A$, $\sup A$; if it has a greatest lower bound, I call it the {\bf
infimum} of $A$, $\inf A$.
\medskip
\inset{{\bf Examples} If $A=[0,1]$ then $\sup A=1$ and $\inf A=0$.
If $A=\Bbb N$ then $\inf A=0$ but $A$ has no supremum (because it has no
upper bounds at all).
If $A=\Bbb Z$ then $A$ has no supremum and no infimum.
If $A=\ooint{0,1}$ then $\sup A=1$ and $\inf A=0$.
If $A=\emptyset$ then $A$ has no supremum and no infimum.}
\medskip
Note that $\ooint{0,1}$ and $[0,1]$ have exactly the same upper bounds;
$x\in\Bbb R$ is an upper bound for either iff $x\ge 1$, and $1$ is the
least upper bound of both. This shows that $\sup A$ (when defined) may
or may not belong to the set $A$.
\medskip
{\bf FUNDAMENTAL FACT} If $A\subseteq\Bbb R$ is non-empty and has an
upper bound, it has a least upper bound.
Similarly, if $A\subseteq\Bbb R$ is non-empty and has a lower bound, it
has a greatest lower bound.
\medskip
\noindent{\bf Remarks} What these principles are saying is that a subset
of $\Bbb R$ has a supremum and an infimum unless it plainly can't,
either because it doesn't have any bounds on the appropriate side, or
because it is empty and has altogether too many upper and lower bounds.
The idea goes back to classical times; in a geometric form it was
proposed by Eudoxus. It re-surfaced in the nineteenth century as part
of the general programme of putting calculus on a sound logical footing,
and is now generally called `Dedekind's axiom' or `the principle of
Dedekind completeness of $\Bbb R$'.
\bigskip
\noindent{\bf Convergent sequences} A large part of the rest of the
course will involve Dedekind completeness in one way or another. For
the first application I will give two of the most important theorems on
convergence of sequences. We need some definitions.
\medskip
{\bf Definitions} A subset $A$ of $\Bbb R$ is {\bf bounded} if it has
both upper and lower bounds. A real sequence $\sequencen{x_n}$ is {\bf
bounded} if $\{x_n:n\in\Bbb N\}$ is bounded, that is, there are $a$,
$b\in\Bbb R$ such that $a\le x_n\le b$ for every $n\in\Bbb N$.
A real sequence $\sequencen{x_n}$ is {\bf non-decreasing} if
$x_n\le x_{n+1}$ for every $n\in\Bbb N$, and {\bf non-increasing} if
$x_{n+1}\le x_n$ for every $n$. Finally, a sequence is {\bf monotonic}
if it is either non-decreasing or non-increasing (or both).
\medskip
{\bf Examples} (i) If $x_n=2^n$ for $n\in\Bbb N$, then $\sequencen{x_n}$
is unbounded, monotonic (non-decreasing), not convergent.
(ii) If $x_n=\Bover1n$ for $n\ge 1$, then $\langle x_n\rangle_{n\ge 1}$
is bounded, monotonic (non-increasing), convergent (to $0$).
(iii) If $x_n=(-1)^n$ for $n\in\Bbb N$, then $\sequencen{x_n}$ is
bounded, not monotonic, not convergent.
(iv) If $x_n=(-2)^n$ for $n\in\Bbb N$, then $\sequencen{x_n}$ is
unbounded, not monotonic, not convergent.
(v) If $x_n=3$ for $n\in\Bbb N$, then $\sequencen{x_n}$ is bounded,
monotonic (simultaneously non-decreasing and non-increasing), convergent
(to $3$).
(vi) If $x_n=\Bover{(-1)^n}{n}$ for $n\ge 1$, then
$\langle x_n\rangle_{n\ge 1}$ is bounded, not monotonic, convergent (to
$0$).
\medskip
{\bf Theorem} A bounded monotonic sequence is convergent.
\medskip
\noindent{\bf proof} Let $\sequencen{x_n}$ be a bounded monotonic
sequence.
\medskip
\quad{\bf case 1} Suppose that $\sequencen{x_n}$ is non-decreasing.
Set $A=\{x_n:n\in\Bbb N\}$. Because $\sequencen{x_n}$ is bounded, $A$
has an upper bound; because $x_0\in A$, $A$ is not empty; so $\sup A$
is defined in $\Bbb R$; call it $b$.
Let $\epsilon>0$. Then $b-\epsilon<b$, so $b-\epsilon$ is not an upper
bound of $A$; let $n_0\in\Bbb N$ be such that $x_{n_0}>b-\epsilon$.
If $n\ge n_0$, then
$$\eqalignno{b-\epsilon&\le x_{n_0}\le x_n\cr
\displaycause{because $\sequence{i}{x_i}$ is non-decreasing and
$n_0\le n$}
&\le b\cr
\displaycause{because $x_n\in A$ and $b$ is an upper bound of $A$}
&\le b+\epsilon.\cr}$$
\noindent So $x_n\in[b-\epsilon,b+\epsilon]$ and $|x_n-b|\le\epsilon$.
As $\epsilon$ is arbitrary, $\lim_{n\to\infty}x_n=b$ and
$\sequencen{x_n}$ is convergent.
\medskip
\quad{\bf case 2} Suppose that $\sequencen{x_n}$ is non-increasing.
\medskip
\noindent{\bf first method} Set $A=\{x_n:n\in\Bbb N\}$. Because
$\sequencen{x_n}$ is bounded, $A$ has a lower bound; because
$x_0\in A$, $A$ is not empty; so $\inf A$ is defined in $\Bbb R$;
call it $b$.
Let $\epsilon>0$. Then $b+\epsilon>b$, so $b+\epsilon$ is not a lower
bound of $A$; let $n_0\in\Bbb N$ be such that $x_{n_0}<b+\epsilon$.
If $n\ge n_0$, then
\Centerline{$b-\epsilon\le b\le x_n\le x_{n_0}\le b+\epsilon$,}
\noindent because $b$ is a lower bound of $A$, $x_n\in A$,
$\sequencen{x_n}$ is non-increasing and $n_0\le n$. So
$|x_n-b|\le\epsilon$. As $\epsilon$ is arbitrary,
$\lim_{n\to\infty}x_n=b$ and $\sequencen{x_n}$ is convergent.
\medskip
\noindent{\bf second method} Set $y_n=-x_n$ for each $n\in\Bbb N$.
Then $\sequencen{y_n}$ is bounded and non-decreasing, so is convergent,
by case 1; say $c=\lim_{n\to\infty}y_n$. Then
$\lim_{n\to\infty}x_n=\lim_{n\to\infty}-y_n=-c$ and $\sequencen{x_n}$
is convergent.
\medskip
\noindent{\bf Remark} The game corresponding to the statement
`$\sequencen{x_n}$ is convergent' runs as follows:
\inset{Player I chooses $b\in\Bbb R$
Player II chooses $\epsilon>0$
Player I chooses $n_0\in\Bbb N$
Player II chooses $n\ge n_0$
and they look to see whether $|x_n-b|\le\epsilon$, or not.}
\noindent The proof of the theorem consists, as usual, of a description
of a strategy for Player I. Because there are two separate moves (the
choice of $b$ and the choice of $n_0$) to manage, we either have to be
very good at looking ahead or remember at least one of them. The rule
for $b$ is
$$\eqalign{b
&=\sup\{x_n:n\in\Bbb N\}\text{ if }\sequencen{x_n}\text{ is
non-decreasing},\cr
&=\inf\{x_n:n\in\Bbb N\}\text{ if }\sequencen{x_n}\text{ is
non-increasing}.\cr}$$
\noindent The rule for $n_0$ is
$$\eqalign{\text{choose }n_0\text{ such that }x_{n_0}
&\ge b-\epsilon\text{ if }\sequencen{x_n}\text{ is
non-decreasing},\cr
&\le b+\epsilon\text{ if }\sequencen{x_n}\text{ is
non-increasing}.\cr}$$
\noindent Most of the proof amounts, in fact, to making sure that these
rules can be applied; if we are going to set
$b=\sup\{x_n:n\in\Bbb N\}$, we must first check that
$\{x_n:n\in\Bbb N\}$ is non-empty and bounded above.
Note that the theorem depends on two hypotheses: the sequence must be
simultaneously bounded and monotonic; if it's not bounded (like
$x_n=2^n$), or not monotonic (like $x_n=(-1)^n$) it may fail to be
convergent. In the proof, therefore, we must use both these facts.
The assumption that $\sequencen{x_n}$ is bounded is used when Player I
chooses $b$, and the assumption that it is monotonic is used at the
checking stage, to see that $x_n$ is on the correct side of $x_{n_0}$
and is therefore at least as close to $b$ as $x_{n_0}$ is.
There is a similar exact economy in our use of the properties of $b$.
Looking at case 1 in the proof above, in which $b=\sup A$, we use the
fact that $b$ is an upper bound of $A$ at the end, where we say that
$x_n\le b\le b+\epsilon$; we have already used the fact that $A$ has no
upper bound less than $b$, when we said that $b-\epsilon$ is not an
upper bound of $A$, so there is an $n_0\in\Bbb N$ such that
$x_{n_0}>b-\epsilon$. When you are writing out a proof of this kind,
you should look for these things; they are a check that you aren't
leaving anything out.
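Case 1 of the proof can be acted out numerically. In the sketch below
(the particular sequence $x_n=1-\Bover1{n+1}$ is my own example) $b$ is
the supremum, and $n_0$ is found exactly as in the proof, as a witness
that $b-\epsilon$ is not an upper bound:

```python
# x_n = 1 - 1/(n+1): non-decreasing and bounded, with supremum b = 1.
x = [1 - 1 / (n + 1) for n in range(10000)]
assert all(x[n] <= x[n + 1] for n in range(len(x) - 1))  # monotonic
b = 1.0                                                  # sup{x_n : n in N}
eps = 1e-3
# b - eps is not an upper bound, so some x_{n_0} exceeds it:
n0 = next(n for n in range(len(x)) if x[n] > b - eps)
# ...and from n_0 on, every x_n lies within eps of b:
assert all(abs(x[n] - b) <= eps for n in range(n0, len(x)))
```

Monotonicity is what lets one witness $x_{n_0}>b-\epsilon$ control the
whole tail of the sequence.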
\medskip
{\bf Cauchy sequences: Definition} A real sequence $\sequencen{x_n}$ is
a {\bf Cauchy sequence} if for every $\epsilon>0$ there is an
$n_0\in\Bbb N$ such that $|x_m-x_n|\le\epsilon$ for all $m$, $n\ge n_0$.
\medskip
{\bf Proposition} Every Cauchy sequence is bounded.
\medskip
\noindent{\bf proof} Let $\sequencen{x_n}$ be a Cauchy sequence. Then
there is an $n_0\in\Bbb N$ such that $|x_m-x_n|\le 1$ for every $m$,
$n\ge n_0$; in particular, $|x_n-x_{n_0}|\le 1$ for every $n\ge n_0$.
Set
\Centerline{$M=\max(|x_0|,|x_1|,\ldots,|x_{n_0}|,|x_{n_0}|+1)$;}
\noindent then $M$ is the maximum of a finite string of real numbers, so
is finite. If $n\le n_0$, then $|x_n|\le M$ because $|x_n|$ is listed
in the string; if $n\ge n_0$, then
\Centerline{$|x_n|\le|x_{n_0}|+|x_n-x_{n_0}|\le|x_{n_0}|+1\le M$.}
\noindent So $|x_n|\le M$, that is, $-M\le x_n\le M$, for every
$n\in\Bbb N$, and $\{x_n:n\in\Bbb N\}$ is bounded.
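The proposition's bound can be computed for a concrete Cauchy sequence
(my choice of example, $x_n=(-1)^n/(n+1)$): with $n_0=1$ we have
$|x_m-x_n|\le 1$ for all $m$, $n\ge n_0$, and the $M$ of the proof
bounds the whole sequence.

```python
# Acting out the proof: M = max(|x_0|, ..., |x_{n_0}|, |x_{n_0}| + 1).
x = [(-1) ** n / (n + 1) for n in range(500)]
n0 = 1
assert all(abs(x[m] - x[n]) <= 1
           for m in range(n0, len(x)) for n in range(n0, len(x)))
M = max([abs(x[k]) for k in range(n0 + 1)] + [abs(x[n0]) + 1])
assert all(abs(x[n]) <= M for n in range(len(x)))
```

The finitely many terms before $n_0$ are handled by listing them in the
maximum; everything after $n_0$ stays within $1$ of $x_{n_0}$.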
\medskip
{\bf Theorem} Every Cauchy sequence is convergent.
\medskip
\noindent{\bf proof} Let $\sequencen{x_n}$ be a Cauchy sequence. Then
it is bounded, by the proposition just above; say $a$, $b\in\Bbb R$ are
such that $a\le x_n\le b$ for every $n\in\Bbb N$. For each $n\in\Bbb
N$ set $A_n=\{x_i:i\ge n\}$. Then $a$ is a lower bound for $A_n$ and
$x_n\in A_n$, so $A_n$ is a non-empty set with a lower bound and has an
infimum $y_n=\inf A_n$, with $a\le y_n\le x_n$.
Next, $A_{n+1}\subseteq A_n$ (in fact, $A_n=A_{n+1}\cup\{x_n\}$), so
$y_n$ is also a lower bound for $A_{n+1}$, and $y_n\le y_{n+1}$.
This is true for every $n\in\Bbb N$, so $\sequencen{y_n}$ is a
non-decreasing sequence. Moreover, $a\le y_n\le x_n\le b$ for every
$n$, so $\sequencen{y_n}$ is bounded. It is therefore convergent, by
the last theorem; let $c$ be its limit.
Now $\lim_{n\to\infty}x_n-y_n=0$. \Prf\ Let $\epsilon>0$. Then there
is an $n_0\in\Bbb N$ such that $|x_i-x_n|\le\epsilon$ whenever $n$,
$i\ge n_0$. Let $n\ge n_0$. Then, for any $i\ge n$,
$|x_i-x_n|\le\epsilon$, so $x_i\ge x_n-\epsilon$. This means that
$x_n-\epsilon$ is a lower bound for $A_n$; since $y_n$ is the greatest
lower bound, $x_n-\epsilon\le y_n$. But we already know that
$y_n\le x_n$, so $|x_n-y_n|\le\epsilon$, and this is true for every
$n\ge n_0$. As $\epsilon$ is arbitrary, $\lim_{n\to\infty}x_n-y_n=0$.\
\Qed
Since $\lim_{n\to\infty}x_n-y_n=0$ and $\lim_{n\to\infty}y_n=c$,
\Centerline{$\lim_{n\to\infty}x_n
=\lim_{n\to\infty}x_n-y_n+y_n=0+c=c$.}
\noindent Thus $\sequencen{x_n}$ is convergent; as $\sequencen{x_n}$
was arbitrary, the theorem is proved.
\medskip
{\bf *Remark} Observe that we have a formula for the limit of
$\sequencen{x_n}$: it is
$$\eqalignno{c&=\lim_{n\to\infty}y_n=\sup\{y_n:n\in\Bbb N\}\cr
\displaycause{see the proof of the theorem that bounded monotonic
sequences are convergent}
&=\sup\{\inf\{x_i:i\ge n\}:n\in\Bbb N\}.\cr}$$
\noindent This last formula makes sense for any {\it bounded} sequence
$\sequencen{x_n}$ (note that in the proof above we used the fact that
$\sequencen{x_n}$ is bounded right at the beginning, but didn't need to
know any more until we came to look at $\lim_{n\to\infty}x_n-y_n$); and
it has a name; it's called `$\liminf_{n\to\infty}x_n$'.
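The formula can be tried out on a bounded sequence. In this sketch (my
example; with finitely many terms the infima $\inf A_n$ are approximated
by minima over long tails) the $\liminf$ of $x_n=(-1)^n/(n+1)$ comes out
close to the limit $0$:

```python
# liminf as sup of tail-infima, for x_n = (-1)^n/(n+1) (limit 0).
N = 2000
x = [(-1) ** n / (n + 1) for n in range(N)]
# y_n approximates inf{x_i : i >= n}; with finite data, a tail minimum.
y = [min(x[n:]) for n in range(N // 2)]
assert all(y[n] <= y[n + 1] for n in range(len(y) - 1))  # non-decreasing
c = max(y)                       # approximates sup{y_n}, i.e. liminf x_n
assert abs(c) <= 1e-3            # close to lim x_n = 0
```

As in the proof, the tail-infima form a bounded non-decreasing sequence,
and their supremum recovers the limit of a convergent sequence.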
\medskip
{\bf Theorem} Every convergent sequence is Cauchy.
\medskip
\noindent{\bf proof} Let $\sequencen{x_n}$ be a convergent sequence, with limit $b$. Let $\epsilon>0$. Then there is an $n_0\in\Bbb N$ such that
$|x_n-b|\le\Bover{\epsilon}2$ for every $n\ge n_0$. If $m$, $n\ge n_0$,
\Centerline{$|x_m-x_n|\le|x_m-b|+|x_n-b|
\le\Bover{\epsilon}2+\Bover{\epsilon}2=\epsilon$.}
\noindent As $\epsilon$ is arbitrary, $\sequencen{x_n}$ is Cauchy.
\medskip
{\bf Cauchy's General Principle of Convergence} Putting the last two theorems together, we have the following fundamental principle:
\inset{\inset{A real sequence is convergent iff it is Cauchy.}}
\noindent This is one of the most basic properties of the real numbers.
\bigskip
\noindent{\bf Summation: Definitions}
A `series' is a sequence we mean to try to add up.
If $\sequence{k}{x_k}$ is a series, its {\bf sequence of partial sums} is the sequence $\sequencen{s_n}$ defined by
\Centerline{$s_n=\sum_{k=0}^nx_k$ for every $n\in\Bbb N$.}
\noindent The {\bf sum} of the series is
\Centerline{$\sum_{k=0}^{\infty}x_k=\lim_{n\to\infty}s_n
=\lim_{n\to\infty}\sum_{k=0}^nx_k$}
\noindent if this is defined.
\inset{\inset{The sum of the series is the limit of the sequence of partial sums.}}
\noindent A series $\sequence{k}{x_k}$ is {\bf summable} if its sequence of partial sums is convergent, that is, $\sum_{k=0}^{\infty}x_k$ is defined as a real number (not allowing $\pm\infty$).
A series $\sequence{k}{x_k}$ is {\bf absolutely summable} if the series $\sequence{k}{|x_k|}$ of its absolute values is summable, that is,
$\sum_{k=0}^{\infty}|x_k|=\lim_{n\to\infty}\sum_{k=0}^n|x_k|$ is defined as a real number (not allowing $\infty$).
\medskip
{\bf Remark} Some of our favourite series don't begin at $x_0$; e.g., $x_k=\Bover1k$. For such a series $\langle x_k\rangle_{k\ge 1}$, the sequence of partial sums has to start at the same point, and we have
$\langle s_n\rangle_{n\ge 1}$, where $s_n=\sum_{k=1}^nx_k$, and
\Centerline{$\sum_{k=1}^{\infty}x_k=\lim_{n\to\infty}\sum_{k=1}^nx_k$}
\noindent if this is defined.
\medskip
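{\bf Example} Take $x_k=\Bover1{2^k}$ for each $k\in\Bbb N$. Then
\Centerline{$s_n=\sum_{k=0}^n\Bover1{2^k}=2-\Bover1{2^n}$}
\noindent for every $n\in\Bbb N$ (induce on $n$), so
\Centerline{$\sum_{k=0}^{\infty}\Bover1{2^k}
=\lim_{n\to\infty}2-\Bover1{2^n}=2$}
\noindent and the series is summable.
\medskip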
{\bf Theorem} An absolutely summable series is summable.
\medskip
\noindent{\bf proof} Let $\sequence{k}{x_k}$ be an absolutely summable series.
For each $n\in\Bbb N$, set
\Centerline{$s_n=\sum_{k=0}^nx_k$,
\quad$t_n=\sum_{k=0}^n|x_k|$.}
\noindent Then $|s_m-s_n|\le|t_m-t_n|$ for all $m$, $n\in\Bbb N$. \Prf\ (i) If $n<m$ then
\Centerline{$|s_m-s_n|=|\sum_{k=n+1}^mx_k|\le\sum_{k=n+1}^m|x_k|=t_m-t_n=|t_m-t_n|$.}
\noindent (ii) If $m<n$, the same argument works with $m$ and $n$ exchanged. (iii) If $m=n$, both sides are $0$.\ \Qed

Next, $\sequencen{s_n}$ is Cauchy. \Prf\ Because $\sequence{k}{x_k}$ is absolutely summable, $\sequencen{t_n}$ is convergent, so $\sequencen{t_n}$ is Cauchy. Given $\epsilon>0$, there is an $n_0\in\Bbb N$ such that $|t_m-t_n|\le\epsilon$ for all $m$, $n\ge n_0$. So
$|s_m-s_n|\le|t_m-t_n|\le\epsilon$ for all $m$, $n\ge n_0$. As $\epsilon$ is arbitrary, $\sequencen{s_n}$ is Cauchy.\ \Qed
But now remember that all Cauchy sequences are convergent. So $\sequencen{s_n}$ is convergent, that is, $\sequence{k}{x_k}$ is summable.
\medskip
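{\bf Example} Take $x_k=\Bover{(-1)^k}{2^k}$ for each $k\in\Bbb N$. Then $|x_k|=\Bover1{2^k}$ and $\sum_{k=0}^n|x_k|=2-\Bover1{2^n}$ for every $n$, so $\sum_{k=0}^{\infty}|x_k|=2$ is defined as a real number; thus $\sequence{k}{x_k}$ is absolutely summable, and the theorem tells us that it is summable, without our having to calculate its sum. (In fact the sum is $\Bover23$.)
\medskip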
The same ideas can be used to prove the Comparison Test. I give a simple form.
\medskip
{\bf Theorem} Suppose that $0\le x_k\le y_k$ for every $k\in\Bbb N$ and that $\sequence{k}{y_k}$ is summable. Then $\sequence{k}{x_k}$ is summable.
\medskip
\noindent{\bf proof} For each $n\in\Bbb N$, set
\Centerline{$s_n=\sum_{k=0}^nx_k$,
\quad$t_n=\sum_{k=0}^ny_k$.}
\noindent Then $|s_m-s_n|\le|t_m-t_n|$ for all $m$, $n\in\Bbb N$. \Prf\ (i) If $n<m$ then, because $0\le x_k\le y_k$ for every $k$,
\Centerline{$|s_m-s_n|=\sum_{k=n+1}^mx_k\le\sum_{k=n+1}^my_k=t_m-t_n=|t_m-t_n|$.}
\noindent (ii) If $m<n$, the same argument works with $m$ and $n$ exchanged. (iii) If $m=n$, both sides are $0$.\ \Qed

Now $\sequencen{t_n}$ is convergent, because $\sequence{k}{y_k}$ is summable, so $\sequencen{t_n}$ is Cauchy. Given $\epsilon>0$, there is an $n_0\in\Bbb N$ such that $|t_m-t_n|\le\epsilon$ for all $m$, $n\ge n_0$; so $|s_m-s_n|\le|t_m-t_n|\le\epsilon$ for all $m$, $n\ge n_0$. As $\epsilon$ is arbitrary, $\sequencen{s_n}$ is Cauchy, therefore convergent, and $\sequence{k}{x_k}$ is summable.
\bigskip
\noindent{\bf Continuous Functions on Closed Bounded Intervals}
\medskip
{\bf Theorem} Let $f$ be a real function which is defined and continuous at every point of a closed bounded interval $[a,b]$, where $a\le b$. Then $f$ is bounded on $[a,b]$.
\medskip
\noindent{\bf proof} Set
\Centerline{$A=\{x:x\in[a,b]$, $f$ is bounded on $[a,x]\}$.}
\noindent Then $a\in A$ and $A$ is bounded above by $b$, so $c=\sup A$ is defined and $a\le c\le b$. As $c\in[a,b]$, $f$ is continuous at $c$, so there is a $\delta>0$ such that $|f(x)-f(c)|\le 1$ whenever $x\in[c-\delta,c+\delta]\cap\dom f$. Set $z=\min(b,c+\delta)$. Then $a\le c\le z\le b$ so $z\in[a,b]$.
Now $c-\delta$ is not an upper bound of $A$, so there is an $x\in A$ such that $c-\delta\le x\le c$; let $M$ be such that $|f(t)|\le M$ for every $t\in[a,x]$. If $x\le t\le z$, then $t\in[c-\delta,c+\delta]\cap\dom f$, so $|f(t)|\le|f(c)|+1$. Accordingly $|f(t)|\le\max(M,|f(c)|+1)$ for every $t\in[a,z]$, and $z\in A$; as $c=\sup A$, $z\le c$ and $z=c$. \Quer\ If $c<b$, then $z=\min(b,c+\delta)>c=z$.\ \Bang\ So $c=b\in A$, that is, $f$ is bounded on $[a,b]$.
\medskip
{\bf Theorem} Let $f$ be a real function which is defined and continuous at every point of a closed bounded interval $[a,b]$, where $a\le b$. Then $f$ attains its bounds on $[a,b]$; that is, there are $z_1$, $z_2\in[a,b]$ such that $f(z_1)\le f(x)\le f(z_2)$ for every $x\in[a,b]$.
\medskip
\noindent{\bf proof} Set
\Centerline{$B=\{f(x):x\in[a,b]\}$.}
\noindent By the last theorem, $B$ is bounded; and $f(a)\in B$, so $B$ is not empty. So $c_1=\inf B$ and $c_2=\sup B$ are defined.
\Quer\ Suppose, if possible, that $c_1\notin B$, that is, that $f(x)\ne c_1$ for every $x\in[a,b]$; then $f(x)>c_1$ for every $x\in[a,b]$. Set $g(x)=\Bover1{f(x)-c_1}$ whenever this is defined, that is, whenever $x\in\dom f$ and $f(x)\ne c_1$. Then $g$ is continuous at any point where $f$ is continuous and $f(x)\ne c_1$ (because the function $y\mapsto\Bover1{y-c_1}$ is continuous); in particular, $g$ is continuous at every point of $[a,b]$. There is therefore some $K$ such that $g(x)\le K$ for every $x\in[a,b]$, because continuous functions on closed bounded intervals are bounded. But this means that
$f(x)-c_1\ge\Bover1K$ for every $x\in[a,b]$, that is,
$f(x)\ge c_1+\Bover1K$ for every $x\in[a,b]$, that is, $c_1+\Bover1K$ is a lower bound for $B$, and $c_1$ is not the greatest lower bound of $B$.\ \Bang
So we have to conclude that there is some $z_1\in[a,b]$ such that $f(z_1)=c_1$, and now we have $f(z_1)\le f(x)$ for every $x\in[a,b]$.
I have still to find $z_2$. To do this, {\it either} repeat the argument just above, upside down, as follows:
\inset{\Quer\ Suppose, if possible, that $f(x)\ne c_2$ for every $x\in[a,b]$, that is, that $c_2\notin B$. Then $f(x)<c_2$ for every $x\in[a,b]$. Set $g(x)=\Bover1{c_2-f(x)}$ whenever this is defined; as before, $g$ is continuous at every point of $[a,b]$, so there is some $K$ such that $g(x)\le K$ for every $x\in[a,b]$. But this means that $c_2-f(x)\ge\Bover1K$ for every $x\in[a,b]$, that is, $c_2-\Bover1K$ is an upper bound for $B$, and $c_2$ is not the least upper bound of $B$.\ \Bang\
So there is some $z_2\in[a,b]$ such that $f(z_2)=c_2$, and then $f(x)\le f(z_2)$ for every $x\in[a,b]$;}
\noindent{\it or} apply the result just proved to the function $h$, where $h(x)=-f(x)$ for every $x\in\dom f$: $h$ is continuous at every point of $[a,b]$, so there is a $z_2\in[a,b]$ such that $h(z_2)\le h(x)$, that is, $f(z_2)\ge f(x)$, for every $x\in[a,b]$.
\bigskip
\noindent{\bf The Intermediate Value Theorem}
\medskip
{\bf Theorem} Let $f$ be a real function which is defined and continuous at every point of a closed bounded interval $[a,b]$, where $a\le b$, and suppose that either $f(a)\le c\le f(b)$ or $f(b)\le c\le f(a)$. Then there is a $z\in[a,b]$ such that $f(z)=c$.
\medskip
\noindent{\bf proof (a)} Suppose first that $f(a)\le c\le f(b)$. Set
\Centerline{$A=\{x:x\in[a,b],\,f(x)\le c\}$.}
\noindent Then $A$ is bounded above by $b$, and $a\in A$, so $z=\sup A$ is defined and $a\le z\le b$. As $z\in[a,b]$, $f$ is continuous at $z$.
\Quer\ If $f(z)<c$, then $c-f(z)>0$, so there is a $\delta>0$ such that
$|f(x)-f(z)|\le c-f(z)$ whenever $x\in\dom f$ and $|x-z|\le\delta$. Consider $x=\min(b,z+\delta)$. Then $a\le z\le x\le b$ so $f(x)$ is defined, and $z\le x\le z+\delta$ so $f(x)\le f(z)+(c-f(z))=c$. But this means that $x\in A$. On the other hand, since $f(z)<c\le f(b)$, $z$ cannot be equal to $b$, so $z<x$; but $z=\sup A$ is an upper bound of $A$, so this is impossible.\ \Bang
\Quer\ If $f(z)>c$, then $\Bover12(f(z)-c)>0$, so there is a $\delta>0$ such that $|f(x)-f(z)|\le\Bover12(f(z)-c)$ whenever $x\in\dom f$ and
$|x-z|\le\delta$. Now there must be an $x\in A$ such that
$z-\delta\le x\le z$, in which case $f(x)\le c$ and $|x-z|\le\delta$, so
\Centerline{$f(z)-c\le f(z)-f(x)\le|f(x)-f(z)|\le\Bover12(f(z)-c)$,}
\noindent which is impossible.\ \Bang
We are forced to conclude that $f(z)=c$; which is what we were looking for.
\medskip
{\bf (b)} We still have to deal with the case in which
$f(b)\le c\le f(a)$. Just as in the last theorem, we have a choice: {\it either} repeat the argument above with half the signs exchanged, as follows:
\inset{Set
\Centerline{$A=\{x:x\in[a,b],\,f(x)\ge c\}$.}
\noindent Then $A$ is bounded above by $b$, and $a\in A$, so $z=\sup A$ is defined and $a\le z\le b$. As $z\in[a,b]$, $f$ is continuous at $z$.
\Quer\ If $f(z)>c$, then $f(z)-c>0$, so there is a $\delta>0$ such that
$|f(x)-f(z)|\le f(z)-c$ whenever $x\in\dom f$ and $|x-z|\le\delta$. Consider $x=\min(b,z+\delta)$. Then $a\le z\le x\le b$ so $f(x)$ is defined, and $z\le x\le z+\delta$ so $f(x)\ge f(z)-(f(z)-c)=c$. But this means that $x\in A$. On the other hand, since $f(z)>c\ge f(b)$, $z$ cannot be equal to $b$, so $z<x$; but $z=\sup A$ is an upper bound of $A$, so this is impossible.\ \Bang
\Quer\ If $f(z)<c$, then $c-f(z)>0$, so there is a $\delta>0$ such that $|f(x)-f(z)|\le\Bover12(c-f(z))$ whenever $x\in\dom f$ and
$|x-z|\le\delta$. Now there must be an $x\in A$ such that
$z-\delta\le x\le z$, in which case $f(x)\ge c$ and $|x-z|\le\delta$, so
\Centerline{$c-f(z)\le f(x)-f(z)\le|f(x)-f(z)|\le\Bover12(c-f(z))$,}
\noindent which is impossible.\ \Bang
We are forced to conclude that $f(z)=c$}
\noindent {\it or} define a new function $h$ by saying that
\Centerline{$h(x)=-f(x)$ for every $x\in\dom f$,}
\noindent so that $h$ is defined and continuous wherever $f$ is, in particular, at every point of $[a,b]$. Now
\Centerline{$h(a)=-f(a)\le -c\le -f(b)=h(b)$,}
\noindent so by part (a) we know that there is a $z\in[a,b]$ such that
$h(z)=-c$, that is, $f(z)=c$.
So the theorem is true in this case also.
\bigskip
\noindent{\bf Differentiable Functions}
\medskip
{\bf Definition} Let $f$ be a real function. We say that $f$ is {\bf differentiable} at $a\in\Bbb R$, with {\bf derivative} $b=f'(a)$, if
$a\in\dom f$ and $\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}=b$.
\medskip
\noindent{\bf Remark} Note that if the limit is to be defined, then there must be some $\delta>0$ such that $f(x)$ is defined whenever $0<|x-a|\le\delta$; since we must also be able to calculate $f(a)$, we see that the whole interval $[a-\delta,a+\delta]$ must be included in $\dom f$.
\medskip
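{\bf Example} Set $f(x)=x^2$ for every $x\in\Bbb R$, and take any $a\in\Bbb R$. If $x\ne a$ then
\Centerline{$\Bover{f(x)-f(a)}{x-a}=\Bover{x^2-a^2}{x-a}=x+a$,}
\noindent and $\lim_{x\to a}x+a=2a$; so $f$ is differentiable at $a$, with $f'(a)=2a$.
\medskip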
{\bf Lemma} If $f$ is a real function, $a\in\dom f$ and
$\lim_{x\to a}f(x)=f(a)$, then $f$ is continuous at $a$.
\medskip
\noindent{\bf proof} For every $\epsilon>0$ there is a $\delta>0$ such that $x\in\dom f$ and $|f(x)-f(a)|\le\epsilon$ whenever $0<|x-a|\le\delta$. Of course $|f(a)-f(a)|\le\epsilon$, so we see that if $|x-a|\le\delta$ and $x\in\dom f$ then $|f(x)-f(a)|\le\epsilon$. As $\epsilon$ is arbitrary, $f$ is continuous at $a$.
\medskip
{\bf Theorem} If a real function $f$ is differentiable at $a\in\Bbb R$, then $f$ is continuous at $a$.
\medskip
\noindent{\bf proof} Of course $a\in\dom f$. I seek to show that $f(a)=\lim_{x\to a}f(x)$. We know that
$\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}=f'(a)$ and that $\lim_{x\to a}x-a=0$. Since the limit of a product is the product of the limits whenever the latter is defined, we have
$$\eqalign{\lim_{x\to a}f(x)-f(a)
&=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}\cdot(x-a)\cr
&=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}\cdot\lim_{x\to a}x-a
=f'(a)\cdot 0=0.\cr}$$
\noindent Next, the limit of a sum is the sum of the limits whenever the latter is defined, so
$$\eqalign{\lim_{x\to a}f(x)
&=\lim_{x\to a}f(x)-f(a)+f(a)\cr
&=\lim_{x\to a}f(x)-f(a)+\lim_{x\to a}f(a)
=0+f(a)=f(a).\cr}$$
\noindent By the last lemma, $f$ is continuous at $a$.
\medskip
{\bf Proposition} Let $f$ and $g$ be real functions, both differentiable at $a\in\Bbb R$. Then $f+g$ is differentiable at $a$ and $(f+g)'(a)=f'(a)+g'(a)$.
\medskip
\noindent{\bf proof} We know that
\Centerline{$\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}=f'(a)$,
\quad$\lim_{x\to a}\Bover{g(x)-g(a)}{x-a}=g'(a)$.}
\noindent Now the limit of a sum is the sum of the limits whenever the latter is defined, so
$$\eqalign{\lim_{x\to a}\Bover{(f+g)(x)-(f+g)(a)}{x-a}
&=\lim_{x\to a}\Bover{f(x)+g(x)-f(a)-g(a)}{x-a}\cr
&=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}+\Bover{g(x)-g(a)}{x-a}\cr
&=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}
+\lim_{x\to a}\Bover{g(x)-g(a)}{x-a}
=f'(a)+g'(a),\cr}$$
\noindent as required.
\medskip
{\bf Proposition} Let $f$ and $g$ be real functions, both differentiable at $a\in\Bbb R$. Then $f\times g$ is differentiable at $a$ and $(f\times g)'(a)=f'(a)g(a)+f(a)g'(a)$.
\medskip
\noindent{\bf proof} We know that
\Centerline{$\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}=f'(a)$,
$\lim_{x\to a}g(a)=g(a)$;}
\noindent because the limit of a product is the product of the limits when the latter exists,
\Centerline{$\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}g(a)=f'(a)g(a)$.}
\noindent Similarly,
\Centerline{$\lim_{x\to a}\Bover{g(x)-g(a)}{x-a}=g'(a)$,
$\lim_{x\to a}f(a)=f(a)$,}
\noindent so
$\lim_{x\to a}f(a)\Bover{g(x)-g(a)}{x-a}=f(a)g'(a)$. Moreover,
$\lim_{x\to a}x-a=0$, so
\Centerline{$\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}
\cdot\Bover{g(x)-g(a)}{x-a}\cdot(x-a)=f'(a)g'(a)\cdot 0=0$.}
\noindent Now we know also that the limit of a sum is the sum of the limits when the latter exists, so
$$\eqalign{\lim_{x\to a}\Bover{(f\times g)(x)-(f\times g)(a)}{x-a}
&=\lim_{x\to a}\Bover{f(x)g(x)-f(a)g(a)}{x-a}\cr
&=\lim_{x\to a}\Bover{(f(x)-f(a))(g(x)-g(a))+(f(x)-f(a))g(a)
+f(a)(g(x)-g(a))}{x-a}\cr
&=\lim_{x\to a}\Bover{(f(x)-f(a))(g(x)-g(a))}{x-a}
+\Bover{f(x)-f(a)}{x-a}g(a)
+f(a)\Bover{g(x)-g(a)}{x-a}\cr
&=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}\Bover{g(x)-g(a)}{x-a}(x-a)
+\Bover{f(x)-f(a)}{x-a}g(a)
+f(a)\Bover{g(x)-g(a)}{x-a}\cr
&=0+f'(a)g(a)+f(a)g'(a)
=f'(a)g(a)+f(a)g'(a).\cr}$$
\noindent But this is just what it means to say that $(f\times g)'(a)$ is defined and equal to $f'(a)g(a)+f(a)g'(a)$.
\medskip
{\bf Three basic functions} (a) If $f(x)=c$ for $x$ near $a$, then
\Centerline{$f'(a)=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}
=\lim_{x\to a}\Bover{c-c}{x-a}
=\lim_{x\to a}0=0$.}
\noindent (Constant functions are differentiable, with derivative zero.)
\medskip
(b) If $f(x)=x$ for $x$ near $a$, then
\Centerline{$f'(a)=\lim_{x\to a}\Bover{f(x)-f(a)}{x-a}
=\lim_{x\to a}\Bover{x-a}{x-a}
=\lim_{x\to a}1=1$.}
\noindent (The identity function is differentiable, with derivative 1.)
\medskip
(c) If $f(x)=\Bover1x$ for $x$ near $a$, where $a\ne 0$, then
$$\eqalignno{f'(a)
&=\lim_{x\to a}\bover{f(x)-f(a)}{x-a}
=\lim_{x\to a}\bover{\bover1x-\bover1a}{x-a}\cr
&=\lim_{x\to a}-\bover1{xa}
=-\bover1a\lim_{x\to a}\bover1x
=-\bover1a\cdot\bover1a\cr
\displaycause{because the function $x\mapsto\Bover1x$ is continuous and is defined near $a$}
&=-\bover1{a^2}.\cr}$$
\noindent (The reciprocal function is differentiable at every $a\ne 0$, with derivative $-\Bover1{a^2}$.)
\medskip
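{\bf Example} Putting (c) together with the product rule: if $f(x)=g(x)=\Bover1x$ for $x\ne 0$, then $(f\times g)(x)=\Bover1{x^2}$, and for any $a\ne 0$
\Centerline{$(f\times g)'(a)=f'(a)g(a)+f(a)g'(a)
=-\Bover1{a^2}\cdot\Bover1a+\Bover1a\cdot(-\Bover1{a^2})
=-\Bover2{a^3}$.}
\medskip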
{\bf Derivatives without division} It is very useful to know the following fact.
\medskip
\noindent{\bf Lemma} Let $f$ be a real function, and $a\in\dom f$. Then $f'(a)$ is defined and equal to $b$ iff
\inset{for every $\epsilon>0$ there is a $\delta>0$ such that $|f(x)-f(a)-b(x-a)|$ is defined and less than or equal to $\epsilon|x-a|$ whenever $|x-a|\le\delta$.}
\medskip
\noindent{\bf proof} We have
$$\eqalignno{f'(a)=b
&\iff\Forall\epsilon>0\Exists\delta>0,\,|\Bover{f(x)-f(a)}{x-a}-b|\text{ exists and is }\le\epsilon\text{ whenever }0<|x-a|\le\delta\cr
&\iff\Forall\epsilon>0\Exists\delta>0,\,|f(x)-f(a)-b(x-a)|
\text{ exists and is }\le\epsilon|x-a|\text{ whenever }0<|x-a|\le\delta\cr
\displaycause{multiplying or dividing both sides of the inequality by the strictly positive number $|x-a|$, and remembering that $|y||z|=|yz|$ for all $y$, $z\in\Bbb R$}
&\iff\Forall\epsilon>0\Exists\delta>0,\,|f(x)-f(a)-b(x-a)|
\text{ exists and is }\le\epsilon|x-a|\text{ whenever }|x-a|\le\delta\cr
}$$
\noindent because if $x=a$ then $|f(x)-f(a)-b(x-a)|=0=\epsilon|x-a|$.
\medskip
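{\bf Example} To see the Lemma in action, take $f(x)=x^2$ and $b=2a$. For any $x$,
\Centerline{$|f(x)-f(a)-b(x-a)|=|x^2-a^2-2a(x-a)|=|x-a|^2$,}
\noindent so, given $\epsilon>0$, we can take $\delta=\epsilon$: if $|x-a|\le\delta$ then $|x-a|^2\le\epsilon|x-a|$. Thus the criterion confirms that $f'(a)=2a$, with no division anywhere.
\medskip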
{\bf Theorem} (Chain Rule for differentiable functions) Let $f$ and $g$ be real functions, and suppose that $g$ is differentiable at $a$ and that $f$ is differentiable at $g(a)$. Then $f\circ g$ is differentiable at $a$, with $(f\circ g)'(a)=f'(g(a))\cdot g'(a)$.
\medskip
\noindent{\bf proof} Write $c=f'(g(a))$, $b=g'(a)$. Let $\epsilon>0$. Set $\eta=\min(1,\Bover{\epsilon}{1+|b|+|c|})$. Let $\delta_1>0$ be such that $g(x)$ is defined and $|g(x)-g(a)-b(x-a)|\le\eta|x-a|$ whenever $|x-a|\le\delta_1$.
Then
\inset{$|g(x)-g(a)|\le|b(x-a)|+\eta|x-a|=(|b|+\eta)|x-a|$
\hfill[key step]}
\noindent whenever $|x-a|\le\delta_1$. Let $\delta_2>0$ be such that $f(y)$ is defined and $|f(y)-f(g(a))-c(y-g(a))|\le\eta|y-g(a)|$ whenever
$|y-g(a)|\le\delta_2$.
Set $\delta=\min(\delta_1,\Bover{\delta_2}{|b|+\eta})>0$. If $|x-a|\le\delta$, then $|x-a|\le\delta_1$ so $g(x)$ is defined and
$|g(x)-g(a)-b(x-a)|\le\eta|x-a|$ and
\Centerline{$|g(x)-g(a)|\le(|b|+\eta)|x-a|\le(|b|+\eta)\delta\le\delta_2$.}
\noindent This means that $f(g(x))$ is defined and
$$\eqalignno{|(f\circ g)(x)-(f\circ g)(a)-cb(x-a)|
&=|f(g(x))-f(g(a))-cb(x-a)|\cr
&\le|f(g(x))-f(g(a))-c(g(x)-g(a))|+|c(g(x)-g(a))-cb(x-a)|
&\text{[key step]}\cr
&\le\eta|g(x)-g(a)|+|c||g(x)-g(a)-b(x-a)|\cr
\displaycause{because $|g(x)-g(a)|\le\delta_2$}
&\le\eta(|b|+\eta)|x-a|+|c|\eta|x-a|\cr
&=\eta(|b|+\eta+|c|)|x-a|
\le\eta(|b|+1+|c|)|x-a|
\le\epsilon|x-a|.\cr}$$
\noindent As $\epsilon$ is arbitrary, $(f\circ g)'(a)$ is defined and equal to $cb$, as claimed.
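\medskip
{\bf Example} Set $g(x)=x^2$ for every $x\in\Bbb R$ and $f(y)=\Bover1y$ for $y\ne 0$, so that $(f\circ g)(x)=\Bover1{x^2}$ for $x\ne 0$. If $a\ne 0$, then $g$ is differentiable at $a$, with $g'(a)=2a$, and $f$ is differentiable at $g(a)=a^2\ne 0$, with $f'(a^2)=-\Bover1{a^4}$; so the Chain Rule tells us that
\Centerline{$(f\circ g)'(a)=f'(g(a))\cdot g'(a)
=-\Bover1{a^4}\cdot 2a=-\Bover2{a^3}$,}
\noindent as we could also check with the product rule, since $\Bover1{x^2}=\Bover1x\cdot\Bover1x$.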
\bigskip
\noindent{\bf Rolle's Theorem and the Mean Value Theorem}
\medskip
The last section of the course introduces one of the fundamental theorems of analysis. Before embarking on the main results we need to tidy things up a little.
\medskip
{\bf Proposition} (a) Suppose that $f$ is a real function, that $d<a$, that $f(x)$ is defined and $f(x)\ge c$ whenever $d\le x<a$, and that $\lim_{x\uparrow a}f(x)$ is defined. Then $\lim_{x\uparrow a}f(x)\ge c$.

(b) Suppose that $f$ is a real function, that $e>a$, that $f(x)$ is defined and $f(x)\ge c$ whenever $a<x\le e$, and that $\lim_{x\downarrow a}f(x)$ is defined. Then $\lim_{x\downarrow a}f(x)\ge c$.
\medskip
\noindent{\bf proof (a)} Set $b=\lim_{x\uparrow a}f(x)$. \Quer\ Suppose, if possible, that $b<c$. Then $c-b>0$; set $\epsilon=\Bover12(c-b)>0$. There is a $\delta>0$ such that $f(x)$ is defined and $|f(x)-b|\le\epsilon$ whenever $a-\delta\le x<a$. Now there is an $x$ such that $\max(d,a-\delta)\le x<a$, so that $f(x)\ge c$ and $|f(x)-b|\le\epsilon$. But then
\Centerline{$b\ge f(x)-\epsilon\ge c-\epsilon=\Bover12(b+c)>b$,}
\noindent which is impossible.\ \Bang
So $b$ must be greater than or equal to $c$, as claimed.
\medskip
{\bf (b)} Set $b=\lim_{x\downarrow a}f(x)$. \Quer\ Suppose, if possible, that $b<c$. Then $c-b>0$; set $\epsilon=\Bover12(c-b)>0$. There is a $\delta>0$ such that $f(x)$ is defined and $|f(x)-b|\le\epsilon$ whenever
$a<x\le a+\delta$. Now there is an $x$ such that $a<x\le\min(e,a+\delta)$, so that $f(x)\ge c$ and $|f(x)-b|\le\epsilon$. But then
\Centerline{$b\ge f(x)-\epsilon\ge c-\epsilon=\Bover12(b+c)>b$,}
\noindent which is impossible.\ \Bang
So $b$ must be greater than or equal to $c$, as claimed.
\medskip
{\bf Proposition} Let $f$ be a real function and $a$, $b\in\Bbb R$. Then $\lim_{x\to a}f(x)=b$ iff $\lim_{x\uparrow a}f(x)=\lim_{x\downarrow a}f(x)=b$.
\medskip
\noindent{\bf proof (a)} Suppose that $\lim_{x\to a}f(x)=b$.
\medskip
\quad{\bf (i)} Let $\epsilon>0$. Then there is a $\delta>0$ such that $f(x)$ is defined and $|f(x)-b|\le\epsilon$ whenever $0<|x-a|\le\delta$. If now
$a-\delta\le x<a$, then $0<|x-a|\le\delta$, so $f(x)$ is defined and $|f(x)-b|\le\epsilon$. As $\epsilon$ is arbitrary, $\lim_{x\uparrow a}f(x)=b$.
\medskip
\quad{\bf (ii)} Let $\epsilon>0$. Then there is a $\delta>0$ such that $f(x)$ is defined and $|f(x)-b|\le\epsilon$ whenever $0<|x-a|\le\delta$. If now
$a<x\le a+\delta$, then $0<|x-a|\le\delta$, so $f(x)$ is defined and $|f(x)-b|\le\epsilon$. As $\epsilon$ is arbitrary, $\lim_{x\downarrow a}f(x)=b$.
\medskip
{\bf (b)} Now suppose that $\lim_{x\uparrow a}f(x)=\lim_{x\downarrow a}f(x)=b$. Let $\epsilon>0$. Then
\inset{there is a $\delta_1>0$ such that $x\in\dom f$ and
$|f(x)-b|\le\epsilon$ whenever $a-\delta_1\le x<a$}
\noindent and
\inset{there is a $\delta_2>0$ such that $x\in\dom f$ and
$|f(x)-b|\le\epsilon$ whenever $a<x\le a+\delta_2$.}
\noindent Set $\delta=\min(\delta_1,\delta_2)>0$. If
$0<|x-a|\le\delta$, then
\inset{{\it either} $x<a$, in which case
$a-\delta_1\le x<a$, so $x\in\dom f$ and $|f(x)-b|\le\epsilon$,
{\it or} $x>a$, in which case
$a<x\le a+\delta_2$, so again $x\in\dom f$ and $|f(x)-b|\le\epsilon$.}
\noindent As $\epsilon$ is arbitrary, $\lim_{x\to a}f(x)=b$.
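\medskip
{\bf Example} Define $f$ by setting $f(x)=\Bover{x}{|x|}$ for $x\ne 0$. If $x<0$ then $f(x)=-1$, and if $x>0$ then $f(x)=1$; so $\lim_{x\uparrow 0}f(x)=-1$ and $\lim_{x\downarrow 0}f(x)=1$. The two one-sided limits are defined but different, so by the Proposition $\lim_{x\to 0}f(x)$ is not defined.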