Title: Analysis 1
Description: Introduction to first-year mathematical analysis. Topics covered: - Basics of proofs - Properties of Limits - Definition of Convergence - Bolzano-Weierstrass Theorem - Sequences and their properties - Series and their properties - Functions and their properties - Continuity - Intermediate Value Theorem and its applications - Differentiation - Mean Value Theorem and its applications



2014/15

1101 ANALYSIS 1
Based on the lectures by Prof Leonid Parnovski

Jonathan Low

Chapter 1: The Real Numbers ℝ
ℕ⊂ℤ⊂ℚ⊂ℝ⊂ℂ

Natural Numbers ℕ
ℕ ≔ { 1, 2, 3, 4, … }

Remark: 0 ∉ ℕ

For Addition:
𝑛, 𝑚 ∈ ℕ ↝ 𝑚 + 𝑛 ∈ ℕ

Remark: "↝" means "leads to"

For Multiplication:
𝑛 , 𝑚 ∈ ℕ ↝ 𝑚𝑛 ∈ ℕ
For Ordering:
∀𝑚, 𝑛 ∈ ℕ we must have that 𝑚 > 𝑛 or 𝑚 < 𝑛 or 𝑚 = 𝑛

There are several properties of natural numbers (laws that must always be followed):
1. Commutative Law:
   i.  𝑚 + 𝑛 = 𝑛 + 𝑚
   ii. 𝑚𝑛 = 𝑛𝑚
2. Associative Law:
   i.  (𝑚 + 𝑛) + 𝑝 = 𝑚 + (𝑛 + 𝑝)
   ii. (𝑚𝑛)𝑝 = 𝑚(𝑛𝑝)
3. Distributive Law: (𝑚 + 𝑛)𝑝 = 𝑚𝑝 + 𝑛𝑝
4. Ordering:
   i.  𝑚 > 𝑛, 𝑛 > 𝑝, then 𝑚 > 𝑝
   ii. 𝑚 > 𝑛, 𝑝 ∈ ℕ ⇒ 𝑚𝑝 > 𝑛𝑝

Remark: Note that 𝑝 ∈ ℕ must be emphasized here
...


Problem: Solving Equations
If 𝑥 + 𝑚 = 𝑛, then we know that 𝑥 = 𝑛 − 𝑚, but 𝑛 − 𝑚 need not be a natural number. Therefore, we proceed to introduce the integers ℤ.
Similarly, if 𝑥𝑚 = 𝑛 we have that 𝑥 = 𝑛/𝑚, which need not be an integer.


Rational Numbers ℚ
ℚ = { 𝑝/𝑞 : 𝑝, 𝑞 ∈ ℤ, 𝑞 ≠ 0 }

We essentially have the laws (assuming 𝑞 ≠ 0 ≠ 𝑛):
(𝑝/𝑞)(𝑚/𝑛) = 𝑝𝑚/𝑞𝑛
𝑝/𝑞 + 𝑚/𝑛 = (𝑝𝑛 + 𝑞𝑚)/𝑞𝑛
We also have, if 𝑞 ≠ 0 ≠ 𝑛:
𝑝/𝑞 = 𝑝𝑛/𝑞𝑛
It can be concluded that a rational number is a fraction 𝑝/𝑞, where 𝑝 and 𝑞 are co-prime
(meaning that there is no common prime factor between 𝑝 and 𝑞, while 𝑝 ∈ ℤ and 𝑞 ∈ ℕ)

Now consider:
𝑥² = 2 ⇒ 𝑥 = √2
And so 𝑥 ∉ ℚ; we need to introduce the other subset of the real numbers ℝ (the irrationals)
...


Page 2 of 83

Theorem: The irrationality of √2
...


To prove this theorem, we first need to prove a sub-theorem, called a lemma
...

Lemma: If 𝑛 ∈ ℤ is odd, then 𝑛² is odd.
Proof:
If 𝑛 is odd ⇒ 𝑛 = 2𝑘 + 1, 𝑘 ∈ ℤ
⇒ 𝑛² = (2𝑘 + 1)² = 4𝑘² + 4𝑘 + 1 = 2(2𝑘² + 2𝑘) + 1
⇒ 𝑛² is odd
...

Corollary: If 𝑛 ∈ ℤ and 𝑛² is even, then 𝑛 is also even
...
We will be using a method called proof by contradiction: assume that √2 ∈ ℚ.
So we have 𝑥 = 𝑝/𝑞, 𝑝 ∈ ℤ, 𝑞 ∈ ℕ, and that 𝑝 and 𝑞 are co-prime. Then 𝑝² = 2𝑞², so 𝑝² is even and, by the corollary, 𝑝 is even. Writing 𝑝 = 2𝑚 gives 2𝑚² = 𝑞², so 𝑞 is even as well, and 𝑝 and 𝑞 share the common factor 2. This therefore contradicts our original assumption that 𝑥 = 𝑝/𝑞 ∈ ℚ with 𝑝 and 𝑞 co-prime.




Modulus or Absolute Values
There now lies a need to define ℝ, real numbers:
Perhaps it is a number which can be represented by a decimal or a fraction? One problem we encounter with this standard answer is that we cannot directly define operations such as addition, or even worse, multiplication, on infinite decimals. For instance, 0.9̅ = 1. Is this true?
We may first want to get some important definitions straight:
Definition: Modulus or Absolute value –
|𝑥| = { −𝑥, 𝑥 < 0 ; 𝑥, 𝑥 ≥ 0 }

A useful property to note is that |𝑥|² = 𝑥² in all cases.

Definition: If 𝑥, 𝑦 ∈ ℝ, then the distance between the two points 𝑥 and 𝑦 is |𝑥 − 𝑦|
...
We know that 𝑝/𝑞 ≠ √2 as proven previously, so after some manipulation we can also say that 𝑝² − 2𝑞² ≠ 0
...



Therefore, we proceed to manipulate the RHS: since 𝑝² − 2𝑞² is a non-zero integer, |𝑝² − 2𝑞²| ≥ 1, and so
|𝑝/𝑞 − √2| · |𝑝/𝑞 + √2| = |𝑝² − 2𝑞²|/𝑞² ≥ 1/𝑞²


...

𝒏   𝒑ₙ   𝒒ₙ   𝒑ₙ/𝒒ₙ
1    1    1    1
2    3    2    1.5
3    7    5    1.4
4   17   12    1.416…
5   41   29    1.4137…

We note a pattern, and that is as 𝑛 → ∞, 𝑝ₙ/𝑞ₙ → √2

Proposition: ∀𝑛 ∈ ℕ, we have 𝑝ₙ² − 2𝑞ₙ² = ±1, where 𝑝ₙ₊₁ = 𝑝ₙ + 2𝑞ₙ and 𝑞ₙ₊₁ = 𝑝ₙ + 𝑞ₙ
...
We will prove this by induction. To do so, we need the proposition 𝑃ₙ to hold for each 𝑛 ∈ ℕ
...

Now assume that 𝑃ₖ is true for 𝑛 = 𝑘:
𝑝ₖ² − 2𝑞ₖ² = ±1
Then we need to prove that 𝑃ₖ₊₁ is also true, i.e. that:
𝑝ₖ₊₁² − 2𝑞ₖ₊₁² = ∓1

LHS = (𝑝ₖ + 2𝑞ₖ)² − 2(𝑝ₖ + 𝑞ₖ)²
    = 𝑝ₖ² + 4𝑝ₖ𝑞ₖ + 4𝑞ₖ² − 2𝑝ₖ² − 4𝑝ₖ𝑞ₖ − 2𝑞ₖ²
    = −𝑝ₖ² + 2𝑞ₖ²
    = −(𝑝ₖ² − 2𝑞ₖ²)
    = ∓1
    = RHS
Also, we have 𝑞ₖ₊₁ = 𝑝ₖ + 𝑞ₖ ≥ 𝑘 + 1
...
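The induction above can be checked numerically. The sketch below (my own, not part of the notes) iterates the recurrence 𝑝ₙ₊₁ = 𝑝ₙ + 2𝑞ₙ, 𝑞ₙ₊₁ = 𝑝ₙ + 𝑞ₙ from 𝑝₁ = 𝑞₁ = 1 and confirms that 𝑝ₙ² − 2𝑞ₙ² alternates between −1 and +1 while 𝑝ₙ/𝑞ₙ approaches √2:

```python
# Iterate the recurrence and check the invariant p^2 - 2q^2 = +/-1.
def pell_pairs(n_terms):
    p, q = 1, 1
    pairs = []
    for _ in range(n_terms):
        pairs.append((p, q))
        p, q = p + 2 * q, p + q   # the recurrence from the proposition
    return pairs

pairs = pell_pairs(10)
invariants = [p * p - 2 * q * q for p, q in pairs]
print(invariants)                       # alternates between -1 and +1
print(pairs[-1][0] / pairs[-1][1])      # close to sqrt(2)
```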

-

Attempt Homework 1

Axioms for the Set of Real Numbers
Definition: Assume that 𝑋 is a set on which two binary operations, addition and multiplication, are defined. 𝑋 is called a field if it satisfies:
A1: ∀𝑎, 𝑏 ∈ 𝑋, 𝑎 + 𝑏 = 𝑏 + 𝑎
A2: ∀𝑎, 𝑏, 𝑐 ∈ 𝑋, (𝑎 + 𝑏) + 𝑐 = 𝑎 + (𝑏 + 𝑐)
A3: ∃0 ∈ 𝑋 s.t. ∀𝑎 ∈ 𝑋, 𝑎 + 0 = 𝑎
A4: ∀𝑎 ∈ 𝑋, ∃(−𝑎) ∈ 𝑋 s.t. 𝑎 + (−𝑎) = 0
A5: ∀𝑎, 𝑏 ∈ 𝑋, 𝑎𝑏 = 𝑏𝑎
A6: ∀𝑎, 𝑏, 𝑐 ∈ 𝑋, (𝑎𝑏)𝑐 = 𝑎(𝑏𝑐)
A7: ∃1 ∈ 𝑋 s.t. ∀𝑥 ∈ 𝑋, 𝑥·1 = 𝑥
A8: ∀𝑥 ∈ 𝑋 with 𝑥 ≠ 0, ∃𝑦 ∈ 𝑋 s.t. 𝑥𝑦 = 1 (we denote 𝑦 = 𝑥⁻¹)
A9: ∀𝑎, 𝑏, 𝑐 ∈ 𝑋, (𝑎 + 𝑏)𝑐 = 𝑎𝑐 + 𝑏𝑐

The axioms A1 – A4 mean that 𝑋 is a commutative group w.r.t. addition; 0 is called the additive identity. Similarly, A5 – A8 concern multiplication, and 1 is called the multiplicative identity; A9 connects the two operations.

From these, we can therefore conclude to a certain extent that a field is a set of elements where we can add and multiply those elements. Adding the order axioms A10 – A14 makes 𝑋 an ordered field. For instance, we know that ℚ, ℝ are ordered fields.

Proposition: ∀𝑥 ∈ 𝑋, 𝑥·0 = 0

Proof:
We know that 0 + 0 = 0, and therefore:
𝑥(0 + 0) = 𝑥·0
⇒ 𝑥·0 + 𝑥·0 = 𝑥·0
Adding −(𝑥·0) to both sides gives 𝑥·0 + (−(𝑥·0)) + 𝑥·0 = 𝑥·0 + (−(𝑥·0)), i.e. 𝑥·0 = 0
Proposition: Suppose 𝑎, 𝑏 ∈ ℝ and 𝑎 > 0 and 𝑏 > 0, then 𝑎 > 𝑏 ⇒ 𝑎² > 𝑏²
Proof:
Suppose 𝑎 > 𝑏, then:
𝑎² = 𝑎·𝑎 > 𝑏·𝑎 and 𝑏·𝑎 > 𝑏·𝑏 = 𝑏²
⇒ 𝑎² > 𝑏²

We also know that |𝑥| = { 𝑥, 𝑥 ≥ 0 ; −𝑥, 𝑥 < 0 }



Theorem: |𝑥 + 𝑦| ≤ |𝑥| + |𝑦| , also known as the Triangle Inequality
...

|𝑥 + 𝑦|2 = (𝑥 + 𝑦)2 = 𝑥 2 + 2𝑥𝑦 + 𝑦 2
Now because 2𝑥𝑦 ≤ 2|𝑥||𝑦|, then we have:
𝑥 2 + 2𝑥𝑦 + 𝑦 2 ≤ |𝑥|2 + 2|𝑥||𝑦| + |𝑦|2 = (|𝑥| + |𝑦|)2
Therefore, |𝑥 + 𝑦|2 ≤ (|𝑥| + |𝑦|)2 and so |𝑥 + 𝑦| ≤ |𝑥| + |𝑦|
...
|𝑥| and |𝑦| can be two linearly independent vectors (and hence 2 sides of a triangle), and so |𝑥 + 𝑦| would be the length of the final side of the triangle
...


Corollary 1: The inverse triangle inequality
...
Then |𝑢 − 𝑣| ≥ |𝑢| − |𝑣|
...




Corollary 2: Another inverse triangle inequality, where |𝑢 − 𝑣| ≥ ||𝑢| − |𝑣||

From corollary 1:
|𝑣 − 𝑢| ≥ |𝑣| − |𝑢|
|𝑢 − 𝑣| ≥ |𝑢| − |𝑣|
Now notice that:
||𝑢| − |𝑣|| = { |𝑢| − |𝑣|, |𝑢| − |𝑣| ≥ 0 ; |𝑣| − |𝑢|, |𝑣| − |𝑢| ≥ 0 }
Since |𝑣 − 𝑢| = |𝑢 − 𝑣|, both cases are covered, so |𝑢 − 𝑣| ≥ ||𝑢| − |𝑣||
...
Before doing so, we still need extra definitions.
Definition: Suppose 𝑆 ⊂ 𝑋. 𝑆 is bounded above if:
- ∃𝐻 ∈ 𝑋 s.t. ∀𝑥 ∈ 𝑆 we have 𝑥 ≤ 𝐻
- We call 𝐻 an upper bound of 𝑆
𝑆 is bounded below if:
- ∃ℎ ∈ 𝑋 s.t. ∀𝑥 ∈ 𝑆 we have 𝑥 ≥ ℎ
- We call ℎ a lower bound of 𝑆
𝑆 is bounded if it is bounded both above and below.
...
Lower bounds: 1, 0, …, −5, … etc.



3) 𝑋 = ℚ,   𝑆 = {𝑥 ∈ ℚ, 𝑥 > 0, 𝑥² < 2} ⊂ ℚ
The biggest lower bound is 0, but note there is no 'smallest' upper bound, as √2 is not a rational number. Suppose a smallest rational upper bound 𝐻_min existed. Then there will be 𝐻′ ∈ ℚ which is smaller than 𝐻_min but still an upper bound (any rational strictly between √2 and 𝐻_min works). Therefore, by contradiction, we know that such 𝐻_min does not exist.


The Completeness Axiom
𝑋 is called a complete ordered field if, in addition to axioms A1 – A14, it also satisfies the
continuum property (the final axiom):
A15:

Suppose 𝑆 ⊂ 𝑋, 𝑆 ≠ ∅
If 𝑆 is bounded above, then it has a smallest upper bound called the supremum of
𝑆 (sup 𝑆), and if 𝑆 is bounded below, then it has a largest lower bound called the
infimum of 𝑆 (inf 𝑆)
...


If we look closely, we will discover that ℝ is essentially the only complete ordered field.

Theorem: ℕ is not bounded above. (This is known as the Archimedean Postulate.)

Proof:
Suppose ℕ is bounded above. Then by A15, ∃𝐻 = sup ℕ.
Now consider 𝐻 − 1, where:
𝐻 − 1 < 𝐻
So 𝐻 − 1 cannot be an upper bound, i.e. ∃𝑛 ∈ ℕ s.t. 𝑛 > 𝐻 − 1. But then 𝑛 + 1 ∈ ℕ and 𝑛 + 1 > 𝐻, contradicting that 𝐻 is an upper bound.

(proof by contradiction)


Let us formulate proper definitions for supremum and infimum.

Definition: Suppose 𝑆 ⊂ ℝ. Then:
A. 𝐻 = sup 𝑆 ⇔
   i.  𝑥 ∈ 𝑆 ⇒ 𝐻 ≥ 𝑥
   ii. ∀𝐻′ < 𝐻, ∃𝑥 ∈ 𝑆 s.t. 𝑥 > 𝐻′ (this means 𝐻′ is not an upper bound)
B. ℎ = inf 𝑆 ⇔
   i.  𝑥 ∈ 𝑆 ⇒ ℎ ≤ 𝑥
   ii. ∀ℎ′ > ℎ, ∃𝑥 ∈ 𝑆 s.t. 𝑥 < ℎ′ (this means ℎ′ is not a lower bound)

Definition: Suppose 𝑆 ⊂ ℝ.
A. We say that 𝑆 has a maximum if ∃𝑥_𝑀 ∈ 𝑆 s.t. ∀𝑥 ∈ 𝑆 ⇒ 𝑥 ≤ 𝑥_𝑀
B. We say that 𝑆 has a minimum if ∃𝑥_𝑚 ∈ 𝑆 s.t. ∀𝑥 ∈ 𝑆 ⇒ 𝑥 ≥ 𝑥_𝑚


Proposition: If 𝑥_𝑚 = min 𝑆, then 𝑥_𝑚 = inf 𝑆
Proof:
We check the two conditions for the infimum with ℎ = 𝑥_𝑚:
1. 𝑥 ∈ 𝑆 ⇒ ℎ ≤ 𝑥 (by definition of the minimum)
2. ∀ℎ′ > 𝑥_𝑚, the point 𝑥 = 𝑥_𝑚 ∈ 𝑆 satisfies 𝑥 < ℎ′
As such, we have 𝑥_𝑚 = inf 𝑆.
...
Let us look at the different
types of bounds
...
Note there is no max 𝑆, but we have that sup 𝑆 = 1
...
When representing infinities, we can never have 𝑥 = ∞, and therefore we always denote that end with an open interval:
[𝑎, ∞) = { 𝑥 | 𝑎 ≤ 𝑥 }
(−∞, 𝑏) = { 𝑥 | 𝑥 < 𝑏 }

Definition: Suppose 𝑎 ∈ ℝ and 𝜖 > 0. We say that (𝑎 − 𝜖, 𝑎 + 𝜖) is an 𝜖-neighbourhood of 𝑎



Chapter 2: Sequences

Consider the sequence 𝑥ₙ = 1/2 + 1/4 + 1/8 + ⋯ + 1/2ⁿ = 1 − 1/2ⁿ
... But what do we really mean when we say 𝑥ₙ → 1?

Definition: A sequence is an assignment to every 𝑛 ∈ ℕ of a real number 𝑥ₙ ∈ ℝ. A sequence is denoted by ⟨𝑥ₙ⟩ or ⟨𝑥ₙ⟩ₙ₌₁^∞, and the 𝑥ₙ are called the terms or elements of the sequence.


Examples:
1) For 1, 2, 3, 4, … 𝑥 𝑛 = 𝑛, it is bounded below, but not above as it has a range of ℕ
2) For −1, 1, −1, 1 … this sequence is bounded and its range is {−1, 1}
3) For ⟨1000 − 𝑛⟩, this sequence is bounded above, but not below.

Definition: 𝑥ₙ → 𝑙 if ∀𝜖 > 0, ∃𝑁 ∈ ℝ s.t. 𝑛 ∈ ℕ, 𝑛 > 𝑁 ⇒ |𝑥ₙ − 𝑙| < 𝜖

In other words, given any 𝜖 > 0, we can find a real number 𝑁 such that all terms of 𝑥 𝑛
where 𝑛 > 𝑁 will fall within the 𝜖-neighbourhood of 𝑙 (within a distance of 𝜖 from 𝑙)
...
As 𝑛
becomes larger, we can take 𝜖 to be even smaller as 𝑥 𝑛 will slowly get closer to 𝑙
...

Let us look at some examples
...
t
...
Therefore, if 𝑁 = log₂(1/𝜖), then it satisfies that for 𝑛 > 𝑁 ⇒ |𝑥ₙ − 1| < 𝜖
...


-

Attempt Homework 2

Example: 𝑥ₙ = 1/𝑛. Claim: 𝑥ₙ → 0.

Proof:
Given 𝜖 > 0, we need to find 𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ |𝑥ₙ − 𝑙| < 𝜖
∴ |1/𝑛 − 0| < 𝜖 ⇒ 𝑛 > 1/𝜖
As such, we let 𝑁 = 1/𝜖, and so for any 𝑛 > 𝑁 we have that |𝑥ₙ − 0| < 𝜖

Remark: Any number 𝑁 larger than 1/𝜖 would also be valid, as the chosen 𝑁 is the bare minimum for the requirement above. Therefore, 𝑁 can be a natural number, though it is not necessary.
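The 𝜖–𝑁 argument above can be probed numerically. This is a small sanity check of my own (not part of the notes): for 𝑥ₙ = 1/𝑛 and a few values of 𝜖, take 𝑁 = 1/𝜖 as in the proof and confirm that the sampled terms beyond 𝑁 all lie within 𝜖 of 0:

```python
# For x_n = 1/n, the proof's choice N = 1/eps should force |x_n - 0| < eps
# for every n > N; we spot-check a run of terms past N.
def check_epsilon_N(eps, how_many=1000):
    N = 1 / eps                  # the N chosen in the proof
    n = int(N) + 1               # first natural number beyond N
    return all(abs(1 / k - 0) < eps for k in range(n, n + how_many))

print(all(check_epsilon_N(eps) for eps in (0.5, 0.1, 0.003)))
```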


Example: 𝑥ₙ = 1/𝑛 does not converge to 1.
Proof:
"𝑥ₙ does not converge to 𝑙" means that the definition of the limit where 𝑥ₙ → 1 cannot be satisfied: ∃𝜖 > 0 s.t. ∀𝑁, some 𝑛 > 𝑁 has |𝑥ₙ − 1| ≥ 𝜖. Here
|𝑥ₙ − 1| = |(1 − 𝑛)/𝑛|
Observe that when 𝑛 = 1 the numerator will equal 0, and it will not satisfy the condition 𝜖 > 0. For 𝑛 ≥ 2, however, |(1 − 𝑛)/𝑛| = (𝑛 − 1)/𝑛 ≥ 1/2, so we can choose any 𝜖 ≤ 1/2 and the inequality will be satisfied. We have:
|(1 − 𝑛)/𝑛| ≥ 1/3
which is true for 𝑛 ≥ 2.




Remark: From the previous two examples we can hypothesize that a
sequence cannot converge to two limits
...


Example: 𝑥ₙ = (2𝑛² − 1)/(𝑛² + 1). Claim: 𝑥ₙ → 2.

Proof:
Given 𝜖 > 0, ∃𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ |𝑥ₙ − 2| < 𝜖. We have:
|𝑥ₙ − 2| = |(2𝑛² − 1 − 2𝑛² − 2)/(𝑛² + 1)| = 3/(𝑛² + 1) < 𝜖
⇒ 𝑛² + 1 > 3/𝜖 ⇒ 𝑛² > 3/𝜖 − 1
If 3/𝜖 − 1 < 0, then 𝑛² > 3/𝜖 − 1 is always satisfied.
Otherwise, we just need to take 𝑁 = √(3/𝜖 − 1)
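The chosen 𝑁 can be spot-checked; this is my own sketch, not part of the notes. It takes 𝑁 = √(3/𝜖 − 1) as in the worked example and confirms |𝑥ₙ − 2| < 𝜖 for a run of terms beyond 𝑁:

```python
# For x_n = (2n^2 - 1)/(n^2 + 1), the proof's N = sqrt(3/eps - 1) should
# guarantee |x_n - 2| < eps for all n > N.
import math

def check(eps, how_many=500):
    N = math.sqrt(max(3 / eps - 1, 0))   # N = 0 covers the "always satisfied" case
    n = int(N) + 1
    return all(abs((2 * k * k - 1) / (k * k + 1) - 2) < eps
               for k in range(n, n + how_many))

print(all(check(eps) for eps in (1.0, 0.05, 1e-4)))
```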




Algebra of Limits
Now we have already shown how to prove a sequence converges to a claimed limit
...
The limit is obtained as 𝑛 → ∞ if 〈𝑥 𝑛 〉 converges
...

When computing limits, it is useful to identify null sequences, which are sequences which tend to 0.

There are 3 forms of expression a dominant term can take, namely {𝑛^𝑝, 𝑎ⁿ, 𝑛!} (in increasing order of dominance). To compute complicated limits we are usually required to identify the dominant term; notice the null sequences are any term divided by the dominant term.

Examples of null sequences:
1. ⟨1/𝑛^𝑝⟩. Note that 𝑝 > 0
2. ⟨𝑐ⁿ⟩, for |𝑐| < 1
3. ⟨𝑛^𝑝/𝑐ⁿ⟩, for |𝑐| > 1
4. ⟨𝑐ⁿ/𝑛!⟩

This will procure some null sequences which we can use, along with the algebra of limits, to compute the overall limit of a sequence.

Theorem (Algebra of Limits): Suppose 𝑥ₙ → 𝑥 and 𝑦ₙ → 𝑦. Then:
1. 𝑥ₙ + 𝑦ₙ → 𝑥 + 𝑦
2. 𝑥ₙ𝑦ₙ → 𝑥𝑦
3. 𝑥ₙ/𝑦ₙ → 𝑥/𝑦, provided 𝑦 ≠ 0 and 𝑦ₙ ≠ 0

Proof (1):
Given 𝜖 > 0. Since 𝑥ₙ → 𝑥, by definition of limits ∃𝑁₁ ∈ ℝ s.t. 𝑛 > 𝑁₁ ⇒ |𝑥ₙ − 𝑥| < 𝜖/2. Similarly 𝑦ₙ → 𝑦, so by definition of limits we know ∃𝑁₂ ∈ ℝ s.t. 𝑛 > 𝑁₂ ⇒ |𝑦ₙ − 𝑦| < 𝜖/2. As such, take 𝑁 = max{𝑁₁, 𝑁₂}; then for 𝑛 > 𝑁 we definitely will have:
|𝑥ₙ + 𝑦ₙ − (𝑥 + 𝑦)| = |(𝑥ₙ − 𝑥) + (𝑦ₙ − 𝑦)|
                      ≤ |𝑥ₙ − 𝑥| + |𝑦ₙ − 𝑦|     (using the triangle inequality)
                      < 𝜖/2 + 𝜖/2
                      = 𝜖
And so by definition of limits 𝑥ₙ + 𝑦ₙ → 𝑥 + 𝑦.
...
In order to proceed, we need to introduce several more theorems.

Theorem: Suppose 𝑥ₙ ≤ 𝑦ₙ ≤ 𝑧ₙ ∀𝑛, and 𝑥ₙ → 𝑙, 𝑧ₙ → 𝑙. Then 𝑦ₙ → 𝑙. This is known as the sandwich theorem.

Proof:
Given 𝜖 > 0, ∃𝑁₁ ∈ ℝ s.t. 𝑛 > 𝑁₁ ⇒ |𝑥ₙ − 𝑙| < 𝜖. This means that:
⇔ −𝜖 < 𝑥ₙ − 𝑙 < 𝜖
⇔ 𝑙 − 𝜖 < 𝑥ₙ < 𝑙 + 𝜖
Similarly, ∃𝑁₂ ∈ ℝ s.t. 𝑛 > 𝑁₂ ⇒ |𝑧ₙ − 𝑙| < 𝜖. Then for 𝑛 > max{𝑁₁, 𝑁₂} we have 𝑙 − 𝜖 < 𝑥ₙ ≤ 𝑦ₙ ≤ 𝑧ₙ < 𝑙 + 𝜖, so |𝑦ₙ − 𝑙| < 𝜖 and 𝑦ₙ → 𝑙.


Example: Consider 𝑥ₙ = (−1)ⁿ/𝑛² = {−1, 1/4, −1/9, 1/16, …}

Note that we cannot use the algebra of limits here without thinking. This will be proven later on. We have:
|(−1)ⁿ/𝑛²| = 1/𝑛²
However, since we have
−1/𝑛² ≤ (−1)ⁿ/𝑛² ≤ 1/𝑛²
and 1/𝑛² = (1/𝑛) × (1/𝑛) → 0 because 1/𝑛 → 0, by the sandwich theorem (−1)ⁿ/𝑛² → 0 as well.
...
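The squeeze can be illustrated directly; the following quick check is mine, not from the notes. It verifies the two bounding inequalities termwise and shows the terms shrinking:

```python
# The terms of (-1)^n / n^2 sit between -1/n^2 and 1/n^2, and both bounds
# tend to 0, so the squeezed sequence does too.
def term(n):
    return ((-1) ** n) / n ** 2

squeezed = all(-1 / n ** 2 <= term(n) <= 1 / n ** 2 for n in range(1, 10001))
print(squeezed, abs(term(10000)))   # True, and a very small last term
```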
Lemma: 𝑆 is bounded ⇔ ∃𝑀 > 0 s.t. ∀𝑥 ∈ 𝑆, |𝑥| ≤ 𝑀.

Proof:
Since 𝑆 is bounded, it has an upper and a lower bound; denote them 𝐻 and ℎ respectively. Take 𝑀 = max{|𝐻|, |ℎ|}. So ∀𝑥 ∈ 𝑆, we have −𝑀 ≤ ℎ ≤ 𝑥 ≤ 𝐻 ≤ 𝑀 and therefore |𝑥| ≤ 𝑀.

Theorem: If ⟨𝑥ₙ⟩ converges, then it is bounded.

Proof:
Suppose 𝑥ₙ → 𝑙 ∈ ℝ; then for 𝜖 = 17, ∃𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ |𝑥ₙ − 𝑙| < 17. So for 𝑛 > 𝑁 we have |𝑥ₙ| < |𝑙| + 17, and the finitely many terms with 𝑛 ≤ 𝑁 are bounded as well, so the whole sequence is bounded.

Now we can use these findings to resume our proofs of the product and quotient rules. We want to prove that 𝑥ₙ𝑦ₙ → 𝑥𝑦.

We have that 𝑥ₙ converges, therefore it is bounded, so ∃𝑀 > 0 s.t. |𝑥ₙ| ≤ 𝑀 ∀𝑛. There is also the definition of a limit, where ∃𝑁₁ ∈ ℝ s.t. 𝑛 > 𝑁₁ ⇒ |𝑥ₙ − 𝑥| < 𝜖/(𝑀 + |𝑦|). We also have that 𝑦ₙ converges, and so ∃𝑁₂ ∈ ℝ s.t. 𝑛 > 𝑁₂ ⇒ |𝑦ₙ − 𝑦| < 𝜖/(𝑀 + |𝑦|).

We now want to show that |𝑥ₙ𝑦ₙ − 𝑥𝑦| < 𝜖. Take 𝑁 = max{𝑁₁, 𝑁₂}. Then for 𝑛 > 𝑁, we have:
|𝑥ₙ𝑦ₙ − 𝑥𝑦| = |(𝑥ₙ𝑦ₙ − 𝑥ₙ𝑦) + (𝑥ₙ𝑦 − 𝑥𝑦)|
            ≤ |𝑥ₙ||𝑦ₙ − 𝑦| + |𝑦||𝑥ₙ − 𝑥|
            < 𝑀 · 𝜖/(𝑀 + |𝑦|) + |𝑦| · 𝜖/(𝑀 + |𝑦|)
            = 𝜖 · (𝑀 + |𝑦|)/(𝑀 + |𝑦|)
            = 𝜖
Therefore, by definition of limits, 𝑥ₙ𝑦ₙ → 𝑥𝑦.
Notice that it is enough to show that 1/𝑦ₙ → 1/𝑦 (where 𝑦ₙ ≠ 0 ≠ 𝑦), because we can use the product rule from there.

Lemma: Suppose 𝑦ₙ → 𝑦 ≠ 0 and 𝑦ₙ ≠ 0 ∀𝑛. Then ∃𝑐 > 0 s.t. ∀𝑛, |𝑦ₙ| ≥ 𝑐.

Proof:
Take 𝜖 = |𝑦|/2; then ∃𝑁 s.t. 𝑛 > 𝑁 ⇒ |𝑦ₙ − 𝑦| < |𝑦|/2. Without loss of generality we can take 𝑁 to be a natural number; then for 𝑛 > 𝑁 we have:
|𝑦ₙ| = |𝑦 − (𝑦 − 𝑦ₙ)|
     ≥ |𝑦| − |𝑦 − 𝑦ₙ|     (inverse triangle inequality)
     ≥ |𝑦| − |𝑦|/2
     = |𝑦|/2
Put 𝑐 ≔ min{|𝑦₁|, |𝑦₂|, …, |𝑦_𝑁|, |𝑦|/2}; then ∀𝑛 we have |𝑦ₙ| ≥ 𝑐 > 0.

This lemma implies that ∃𝑐 > 0 s.t. |𝑦ₙ| ≥ 𝑐 ∀𝑛, and therefore |1/𝑦ₙ| ≤ 1/𝑐. Now given 𝜖 > 0, ∃𝑁 s.t. 𝑛 > 𝑁 ⇒ |𝑦ₙ − 𝑦| < 𝜖|𝑦|𝑐, and since |1/𝑦ₙ| ≤ 1/𝑐 we have:
|1/𝑦ₙ − 1/𝑦| = |𝑦ₙ − 𝑦|/(|𝑦ₙ||𝑦|) < 𝜖|𝑦|𝑐/(𝑐|𝑦|) = 𝜖
So 1/𝑦ₙ → 1/𝑦. This also implies that, since 𝑥ₙ · (1/𝑦ₙ) is the LHS of the quotient rule, via the product rule we obtain 𝑥ₙ/𝑦ₙ → 𝑥/𝑦.
...
A sequence may diverge to +∞, −∞, or to neither (𝑥ₙ = (−1)ⁿ).

Definition:
1. ⟨𝑥ₙ⟩ diverges to +∞ if: ∀𝑀 ∈ ℝ, ∃𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ 𝑥ₙ > 𝑀
2. ⟨𝑥ₙ⟩ diverges to −∞ if: ∀𝑀 ∈ ℝ, ∃𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ 𝑥ₙ < 𝑀

Example: 𝑥ₙ = 𝑛² → +∞. Notice there are 2 possibilities, each concerning the polarity of 𝑀. If 𝑀 ≤ 0, we can arbitrarily take 𝑁 = 1. If 𝑀 > 0, we therefore take 𝑁 = √𝑀.




It is important to note that 〈𝑥 𝑛 〉 converges ⇒ 〈𝑥 𝑛 〉 is bounded
...
For instance, look at 𝑥 𝑛 = (−1) 𝑛
...

Definition: Let ⟨𝑥ₙ⟩ be a sequence. ⟨𝑥ₙ⟩ is increasing if 𝑥ₙ₊₁ ≥ 𝑥ₙ ∀𝑛, and decreasing if 𝑥ₙ₊₁ ≤ 𝑥ₙ ∀𝑛.

Theorem: Suppose ⟨𝑥ₙ⟩ is bounded. If ⟨𝑥ₙ⟩ is increasing, then 𝑥ₙ → 𝑀 where 𝑀 = sup 𝑥ₙ. If ⟨𝑥ₙ⟩ is decreasing, then 𝑥ₙ → 𝑚 where 𝑚 = inf 𝑥ₙ.

Proof (increasing case):
Now given that 𝜖 > 0, we have 𝑀 − 𝜖 < 𝑀, where 𝑀 − 𝜖 is not an upper bound by definition of supremum. So ∃𝑁 s.t. 𝑥_𝑁 > 𝑀 − 𝜖, and since ⟨𝑥ₙ⟩ is increasing, 𝑥ₙ ≥ 𝑥_𝑁 for 𝑛 > 𝑁. And so, factoring in the supremum 𝑀, we have:
𝑀 − 𝜖 < 𝑥_𝑁 ≤ 𝑥ₙ ≤ 𝑀 < 𝑀 + 𝜖
𝑀 − 𝜖 < 𝑥ₙ < 𝑀 + 𝜖
|𝑥ₙ − 𝑀| < 𝜖
By definition of limits, we conclude that 𝑥ₙ → 𝑀.


Definition: A sequence is monotone if it is increasing or decreasing.

Example: Consider the sequence defined by 𝑥ₙ₊₁ = (1/3)(𝑥ₙ + 1), with a given starting value 𝑥₁ ≥ 1/2.

Proof:
Claim 1: ∀𝑛 we have 𝑥ₙ ≥ 1/2
We will prove this by induction. The base case holds since 𝑥₁ ≥ 1/2. Now for 𝑛 = 𝑘, we have 𝑥ₖ ≥ 1/2, and so
𝑥ₖ₊₁ = (1/3)(𝑥ₖ + 1) = (1/3)𝑥ₖ + 1/3 ≥ 1/6 + 1/3 = 1/2

Claim 2: ⟨𝑥ₙ⟩ is decreasing.
∴ 𝑥ₙ₊₁ − 𝑥ₙ = (1/3)(𝑥ₙ + 1) − 𝑥ₙ = 1/3 − (2/3)𝑥ₙ
Because 𝑥ₙ ≥ 1/2, we have 𝑥ₙ₊₁ − 𝑥ₙ ≤ 1/3 − (2/3)(1/2) = 0

So ⟨𝑥ₙ⟩ is decreasing and bounded below, hence convergent. Assume this limit is 𝑙, and so we denote lim 𝑥ₙ = 𝑙. But we have 𝑥ₙ₊₁ = (1/3)(𝑥ₙ + 1), and so by the algebra of limits we should obtain lim 𝑥ₙ₊₁ = (1/3)𝑙 + 1/3. Since lim 𝑥ₙ₊₁ = 𝑙:
𝑙 = (1/3)𝑙 + 1/3 ⇒ 𝑙 = 1/2
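Both claims and the limit can be confirmed numerically. The sketch below is my own; it assumes the starting value 𝑥₁ = 1 (the notes' base case is elided in this preview), and checks that the iterates stay ≥ 1/2, decrease, and approach the fixed point 1/2:

```python
# Iterate x_{n+1} = (x_n + 1)/3 and verify Claim 1 (x_n >= 1/2),
# Claim 2 (decreasing), and convergence to the fixed point l = 1/2.
x = 1.0                      # assumed starting value
for _ in range(60):
    assert x >= 0.5          # Claim 1
    x_next = (x + 1) / 3
    assert x_next <= x       # Claim 2
    x = x_next
print(x)                     # very close to 0.5
```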


Example: Fibonacci Sequence

𝐹 = {1, 1, 2, 3, 5, 8, 13, 21, …}, where 𝑥ₙ₊₂ = 𝑥ₙ₊₁ + 𝑥ₙ
Denote 𝐿 ≔ lim 𝑥ₙ; then 𝐿 = lim 𝑥ₙ₊₁ = lim 𝑥ₙ₊₂. As such, we have:
𝐿 = 2𝐿 ⇒ 𝐿 = 0
This is WRONG: the algebra of limits can only be applied once we know the limit exists. The sequence 𝐹 diverges.

Therefore, we can sum up the entirety of the results:
- Convergent ⇒ bounded
- Bounded ⇏ convergent
- Bounded and monotone ⇒ convergent

Subsequences
Definition: Suppose ⟨𝑥ₙ⟩ is a sequence and ⟨𝑗ₙ⟩ is a strictly increasing sequence of natural numbers; we call the new sequence ⟨𝑥_{𝑗ₙ}⟩ a subsequence of ⟨𝑥ₙ⟩.

Example 1: 𝑥ₙ = (−1)ⁿ, so 𝑥ₙ = {−1, 1, −1, 1, −1, …}
Then 𝑥₂ₙ = 1, so 𝑥₂ₙ → 1
And 𝑥₂ₙ₊₁ = −1, so 𝑥₂ₙ₊₁ → −1
(Both convergent, to different limits.)

Example 2: 𝑥ₙ = 1/𝑛 + sin(𝜋𝑛/2)
𝑥ₙ = {1 + 1, 1/2 + 0, 1/3 − 1, 1/4 + 0, …}
We have the subsequences 𝑥₂ₙ → 0, 𝑥₄ₙ₊₁ → 1, and 𝑥₄ₙ₊₃ → −1

Theorem: If 𝑥ₙ → 𝑙, 𝑙 ∈ ℝ or 𝑙 = ±∞, then for all subsequences ⟨𝑥_{𝑗ₙ}⟩ we have 𝑥_{𝑗ₙ} → 𝑙.

Claim: 𝑗ₙ ≥ 𝑛 ∀𝑛.
By induction: 𝑗₁ ∈ ℕ, so 𝑗₁ ≥ 1. Suppose 𝑗ₙ ≥ 𝑛; the sequence is strictly increasing, so 𝑗ₙ₊₁ > 𝑗ₙ ≥ 𝑛, hence 𝑗ₙ₊₁ ≥ 𝑛 + 1. Claim is proved.

Proof of theorem (for 𝑙 ∈ ℝ): Given 𝜖 > 0, we need to find 𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ |𝑥_{𝑗ₙ} − 𝑙| < 𝜖. Since 𝑥ₙ → 𝑙, ∃𝑁 s.t. 𝑛 > 𝑁 ⇒ |𝑥ₙ − 𝑙| < 𝜖. So suppose 𝑛 > 𝑁; then 𝑗ₙ ≥ 𝑛 > 𝑁, so we have |𝑥_{𝑗ₙ} − 𝑙| < 𝜖, thus 𝑥_{𝑗ₙ} → 𝑙.

Corollary 1: If a sequence ⟨𝑥ₙ⟩ has two subsequences converging to different limits, then ⟨𝑥ₙ⟩ is divergent.

Corollary 2: For fixed 𝑀 ∈ ℕ, the shifted sequence ⟨𝑥ₙ₊ₘ⟩ has the same limit as ⟨𝑥ₙ⟩. As a similar argument to the proof of the previous theorem: since 𝑛 + 𝑀 > 𝑛 because 𝑀 is natural, suppose 𝑛 > 𝑁; then 𝑛 + 𝑀 > 𝑛 > 𝑁, so we have |𝑥ₙ₊ₘ − 𝑙| < 𝜖.

-

Attempt Homework 4

Theorem: Any bounded sequence has a convergent subsequence (known as the Bolzano–Weierstrass theorem)

Proof:
For the subsequence to be convergent, it is enough for it to be bounded and monotone.

Lemma: Every sequence has a monotone subsequence.
We say that 𝑛 is a peak point if 𝑚 > 𝑛 ⇒ 𝑥ₘ ≤ 𝑥ₙ.

Case 1: There are infinitely many peak points. Then the peak points themselves give a decreasing subsequence.

Case 2: There are finitely many peak points. Let 𝑚₁ < 𝑚₂ < ⋯ < 𝑚_𝑁 be the finite peak points. Take 𝑗₁ = 𝑚_𝑁 + 1; since 𝑗₁ is not a peak point, ∃𝑗₂ > 𝑗₁ s.t. 𝑥_{𝑗₂} > 𝑥_{𝑗₁}. But again, 𝑗₂ isn't a peak point, so ∃𝑗₃ > 𝑗₂ s.t. 𝑥_{𝑗₃} > 𝑥_{𝑗₂}. Continuing in this way gives an increasing subsequence.

And so the lemma is proved. With the above lemma we know that there exists a monotone subsequence ⟨𝑥_{𝑗ₙ}⟩, and it is bounded because ⟨𝑥ₙ⟩ is. Therefore, ⟨𝑥_{𝑗ₙ}⟩ is convergent: to its supremum if it is increasing, or to its infimum if it is decreasing.
...
Consider 1 − 1 + 1 − 1 + 1 − 1 ⋯. Grouping one way:
(1 − 1) + (1 − 1) + (1 − 1) ⋯ = 0
Yet grouping another way:
1 − 1 + 1 − 1 + 1 − 1 ⋯ = 1 + (−1 + 1) + (−1 + 1) ⋯
                        = 1 + 0 + 0 + 0 ⋯
                        = 1
Why is this wrong? That is because we need to properly define a series and its properties
...


Infinite Series
Definition: Given an infinite series ∑ₙ₌₁^∞ 𝑎ₙ (or ∑ₙ 𝑎ₙ), we call 𝑆_𝑁 = ∑ₙ₌₁^𝑁 𝑎ₙ the partial sums of the series, where 𝑁 ∈ ℕ. We say that an infinite series converges to 𝑙 if the sequence of partial sums ⟨𝑆_𝑁⟩ₙ₌₁^∞ converges to 𝑙 (𝑙 ∈ ℝ).

Theorem: Suppose ∑ₙ₌₁^∞ 𝑎ₙ converges. Then:
1. 𝑎ₙ → 0 as 𝑛 → ∞
2. The 'tail' of the series, ∑ₙ₌ₖ^∞ 𝑎ₙ = 𝑎ₖ + 𝑎ₖ₊₁ + ⋯, tends to 0 as 𝑘 → ∞
3. ∑ₙ₌₁^∞ 𝑐·𝑎ₙ converges, and ∑ₙ₌₁^∞ 𝑐·𝑎ₙ = 𝑐 ∑ₙ₌₁^∞ 𝑎ₙ
4. If ∑ₙ₌₁^∞ 𝑏ₙ also converges, then ∑ₙ₌₁^∞ (𝑎ₙ + 𝑏ₙ) also converges, and ∑(𝑎ₙ + 𝑏ₙ) = ∑𝑎ₙ + ∑𝑏ₙ

Proof (1):
Let 𝑙 = lim_{𝑁→∞} 𝑆_𝑁. Then this also means that lim 𝑆_{𝑁+1} = 𝑙, so 𝑎_{𝑁+1} = 𝑆_{𝑁+1} − 𝑆_𝑁 → 𝑙 − 𝑙 = 0 as 𝑁 → ∞.

Proof (3):
We assume that since ∑ₙ₌₁^∞ 𝑎ₙ converges, then 𝑙 = lim_{𝑁→∞} ∑ₙ₌₁^𝑁 𝑎ₙ. Then:
lim_{𝑁→∞} ∑ₙ₌₁^𝑁 𝑐·𝑎ₙ = lim_{𝑁→∞} (𝑐·𝑎₁ + 𝑐·𝑎₂ + ⋯ + 𝑐·𝑎_𝑁)
                     = lim_{𝑁→∞} 𝑐(𝑎₁ + 𝑎₂ + ⋯ + 𝑎_𝑁)
                     = lim_{𝑁→∞} 𝑐 × lim_{𝑁→∞} ∑ₙ₌₁^𝑁 𝑎ₙ     (using the product rule)
                     = 𝑐 ∑ₙ₌₁^∞ 𝑎ₙ

The proof for part (4) is similar to that of part (3)
...
Example: ∑(−1)ⁿ diverges: the sequence of partial sums alternates between two values (it is either 0 or 1), so it does not converge, and so the series diverges.

Example: ∑𝑥ⁿ with |𝑥| ≥ 1 diverges. Write |𝑥| = 1 + ℎ, and let ℎ ≥ 0, so using Bernoulli's inequality:
|𝑥|ⁿ = (1 + ℎ)ⁿ ≥ 1 + 𝑛ℎ ≥ 1
So 𝑥ⁿ does not converge to 0, and thus the series diverges
...
The partial sums are:
𝑆_𝑁 = 1 + 𝑥 + 𝑥² + ⋯ + 𝑥^𝑁
𝑥𝑆_𝑁 = 𝑥 + 𝑥² + 𝑥³ + ⋯ + 𝑥^{𝑁+1}
𝑆_𝑁 − 𝑥𝑆_𝑁 = 1 − 𝑥^{𝑁+1} = 𝑆_𝑁(1 − 𝑥)
𝑆_𝑁 = (1 − 𝑥^{𝑁+1})/(1 − 𝑥) = 1/(1 − 𝑥) − 𝑥^{𝑁+1}/(1 − 𝑥)

The limit of 1/(1 − 𝑥) is 1/(1 − 𝑥), and, for |𝑥| < 1, the limit of 𝑥^{𝑁+1}/(1 − 𝑥) is 0.
Thus, ∑ₙ₌₀^∞ 𝑥ⁿ = 1/(1 − 𝑥) for |𝑥| < 1


...
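The closed form for 𝑆_𝑁 is easy to verify numerically; the sketch below is my own illustration:

```python
# Partial sums of the geometric series: S_N = (1 - x^(N+1))/(1 - x),
# and for |x| < 1 they approach 1/(1 - x).
def S(x, N):
    return sum(x ** n for n in range(N + 1))

x = 0.5
print(S(x, 20), (1 - x ** 21) / (1 - x))   # identical closed form
print(S(x, 60), 1 / (1 - x))               # both very close to 2
```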

Example: ∑ₙ₌₁^∞ 1/(𝑛(𝑛 + 1)). Note that 1/(𝑛(𝑛 + 1)) = 1/𝑛 − 1/(𝑛 + 1). Therefore:
𝑆_𝑁 = (1/1 − 1/2) + (1/2 − 1/3) + (1/3 − 1/4) + ⋯ + (1/𝑁 − 1/(𝑁 + 1))
    = 1 − 1/(𝑁 + 1)
And so, lim_{𝑁→∞} (1 − 1/(𝑁 + 1)) = 1
...
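The telescoping identity can be confirmed directly (my own check, not part of the notes):

```python
# Telescoping: the partial sum of 1/(n(n+1)) up to N equals 1 - 1/(N+1).
def partial(N):
    return sum(1 / (n * (n + 1)) for n in range(1, N + 1))

for N in (1, 10, 1000):
    print(N, partial(N), 1 - 1 / (N + 1))   # the two columns agree
```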



Series Tests
Here we will introduce tests which allow us to determine whether a series converges, without the need to compute partial sums. Consider ∑ₙ₌₁^∞ 2ⁿ/(3ⁿ + 17). We know that
2ⁿ/(3ⁿ + 17) < 2ⁿ/3ⁿ = (2/3)ⁿ
and ∑ₙ₌₁^∞ (2/3)ⁿ converges (geometric series), and so ∑ 2ⁿ/(3ⁿ + 17) should converge.

Theorem (Comparison Test): If 0 ≤ 𝑎ₙ ≤ 𝑏ₙ ∀𝑛 ∈ ℕ and ∑𝑏ₙ converges, then ∑𝑎ₙ also converges.

Corollary: If 0 ≤ 𝑎ₙ ≤ 𝑏ₙ ∀𝑛 ∈ ℕ and ∑𝑎ₙ diverges, then ∑𝑏ₙ also diverges.

Proof of theorem:
Denote the partial sums 𝐴_𝑁 = ∑ₙ₌₁^𝑁 𝑎ₙ and 𝐵_𝑁 = ∑ₙ₌₁^𝑁 𝑏ₙ. An important part of this proof is that since 𝑎ₙ ≥ 0 ∀𝑛, the sequence of partial sums 𝐴_𝑁 is increasing (𝐴_{𝑁+1} − 𝐴_𝑁 = 𝑎_{𝑁+1} ≥ 0). We know that ⟨𝐵_𝑁⟩ converges to sup 𝐵_𝑁 (due to its increasing property). Thus, 𝐴_𝑁 ≤ 𝐵_𝑁 ≤ sup 𝐵_𝑁 = 𝐵_∞, so ⟨𝐴_𝑁⟩ is increasing and bounded above, hence convergent.

Thus, ∑ₙ₌₁^∞ 𝑎ₙ converges.




Definition: ∑ₙ₌₁^∞ 𝑎ₙ converges absolutely if the series ∑ₙ₌₁^∞ |𝑎ₙ| converges.

Theorem: ∑ₙ |𝑎ₙ| converges ⇒ ∑ₙ 𝑎ₙ converges
Proof:
Given 𝑎ₙ, we define:
𝑎ₙ⁺ ≔ { 𝑎ₙ, 𝑎ₙ ≥ 0 ; 0, 𝑎ₙ < 0 }
𝑎ₙ⁻ ≔ { 0, 𝑎ₙ ≥ 0 ; −𝑎ₙ, 𝑎ₙ < 0 }
Then we have 𝑎ₙ^± ≥ 0 and 𝑎ₙ^± ≤ |𝑎ₙ|, where 𝑎ₙ = 𝑎ₙ⁺ − 𝑎ₙ⁻ and |𝑎ₙ| = 𝑎ₙ⁺ + 𝑎ₙ⁻. We also know that since 0 ≤ 𝑎ₙ^± ≤ |𝑎ₙ|, then by the comparison test both sums ∑ₙ 𝑎ₙ⁺ and ∑ₙ 𝑎ₙ⁻ converge. Hence ∑ₙ 𝑎ₙ = ∑ₙ 𝑎ₙ⁺ − ∑ₙ 𝑎ₙ⁻ converges.

Examples:
1. ∑ₙ 1/(𝑛(𝑛 + 1)) converges absolutely

Suppose we have a series ∑ₙ₌₁^∞ 𝑎ₙ with 𝑎ₙ ≥ 0. Then there are two possibilities:
i.  The sequence of partial sums is bounded, in which case it converges (it is increasing), so the series converges;
ii. The sequence of partial sums is unbounded, in which case the series diverges to +∞.

Consider the series ∑ₙ₌₁^∞ 1/𝑛^𝑝, 𝑝 ∈ ℝ, 𝑝 > 0. We know that if 𝑝 < 𝑞 ⇒ 1/𝑛^𝑞 ≤ 1/𝑛^𝑝, so if ∑ₙ 1/𝑛^𝑝 converges, then by the comparison test ∑ₙ 1/𝑛^𝑞 will also converge.

Theorem: ∑ₙ 1/𝑛^𝑝 converges for 𝑝 > 1 and diverges for 𝑝 ≤ 1.


Proof:
Case 1: 𝑝 ≤ 1. Then 1/𝑛^𝑝 ≥ 1/𝑛, since 𝑛^𝑝 ≤ 𝑛. Put 𝑁 = 2^𝑀, then:
𝑆_𝑁 = ∑ₙ₌₁^𝑁 1/𝑛^𝑝 ≥ ∑ₙ₌₁^𝑁 1/𝑛
    = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ⋯ + (1/(2^{𝑀−1} + 1) + ⋯ + 1/2^𝑀)
    ≥ 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ⋯ + (1/2^𝑀 + ⋯ + 1/2^𝑀)
    = 1 + 1/2 + 2(1/4) + 4(1/8) + ⋯ + 2^{𝑀−1}(1/2^𝑀)
    = 1 + 1/2 + 1/2 + ⋯ + 1/2     (there are 𝑀 copies of 1/2)
    = 1 + 𝑀/2
So as 𝑀 → +∞, 1 + 𝑀/2 → +∞. Thus 𝑆_𝑁 → +∞ and ∑ₙ 1/𝑛^𝑝 diverges for 𝑝 ≤ 1
...
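The grouping bound used in Case 1 can be verified term by term. The check below is my own illustration (not in the notes): with 𝑁 = 2^𝑀, the harmonic partial sum really does exceed 1 + 𝑀/2:

```python
# Verify S_{2^M} = sum_{n<=2^M} 1/n >= 1 + M/2 for the first several M.
def harmonic(N):
    return sum(1 / n for n in range(1, N + 1))

for M in range(1, 15):
    print(M, harmonic(2 ** M), 1 + M / 2)   # first column always >= second
```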


Again, we want to pick a value for 𝑁 for which 𝑁 is large
...

𝑆 𝑁 = 𝑆2 𝑀 −1
2 𝑀 −1

= ∑
𝑛=1

=1+

1
𝑛𝑝

1
1
1
1
1
+ 𝑝 + 𝑝…+
…+
(2 𝑀−1 ) 𝑝
(2 𝑀 − 1) 𝑝
2𝑝 3
4

1
1
1
1
1
1
1
1
≤ 1 + ( 𝑝 + 𝑝 ) + ( 𝑝 + 𝑝 + 𝑝 + 𝑝 ) … + ( 𝑀−1 𝑝 … +
)
(2
)
(2 𝑀−1 ) 𝑝
2
2
4
4
4
4
1
1
= 1 + 2 ( 𝑝) + 4 ( 𝑝) … + 2
2
4
=1+

1
2

+
𝑝−1

1

…+
𝑝−1

4

𝑀−1

(
(2

1
𝑀−1 ) 𝑝

)

1
(2

𝑀−1 ) 𝑝−1

𝑀−1

1 𝑝−1
= ∑ ( 𝑛)
2
𝑛=0

1

Since > 1 ⇒ 𝑝 − 1 > 0 ⇒ 2 𝑝−1 > 1 ⇒ 2 𝑝−1 < 1
...
And so, ∑∞ (2 𝑝−1 )
𝑛=0

𝑛

converges
...
Therefore, 〈𝑆 𝑁 〉 is also bounded
...


Remark 1: The sum 𝜁(𝑝) = ∑ₙ₌₁^∞ 1/𝑛^𝑝 is called the Riemann zeta function

Remark 2: The infinite series ∑ₙ₌₁^∞ 1/𝑛 is called the harmonic series, and this series diverges despite the fact that 𝑎ₙ → 0 as 𝑛 → +∞
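The contrast between 𝑝 > 1 and 𝑝 = 1 is easy to see numerically; this illustration is mine and is of course not a proof:

```python
# Partial sums of 1/n^p: they stabilise for p = 2 but keep growing for p = 1.
def S(p, N):
    return sum(1 / n ** p for n in range(1, N + 1))

print(S(2, 10 ** 5))                 # close to zeta(2) = pi^2/6 ~ 1.6449
print(S(1, 10 ** 3), S(1, 10 ** 6))  # harmonic sums keep growing (slowly)
```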

-

Attempt Homework 5

Example: Consider the series 1 − 1/2 + 1/3 − 1/4 + 1/5 − ⋯
∴ 𝑎ₙ = (−1)^{𝑛−1}(1/𝑛)
Since |𝑎ₙ| = 1/𝑛 and ∑1/𝑛 diverges, the series cannot converge absolutely.

Theorem: Suppose 𝑎ₙ ≥ 0, 𝑎ₙ → 0 and 𝑎ₙ is decreasing. Then ∑ₙ₌₀^∞ (−1)ⁿ𝑎ₙ converges.

This is known as the Alternating Series Test. We will now make several claims to prove the theorem.

Claim 1: 𝑆_{2𝑁−1} ≤ 𝑆_{2𝑁} for all 𝑁.

Claim 2: ⟨𝑆_{2𝑁−1}⟩ is increasing.

Claim 3: ⟨𝑆_{2𝑁}⟩ is decreasing.

The increasing sequence ⟨𝑆_{2𝑁−1}⟩ is bounded above (by Claims 1 and 3), so it converges; let this limit be 𝐿₁. Similarly the decreasing sequence ⟨𝑆_{2𝑁}⟩ is bounded below and converges; let this limit be 𝐿₂. Since 𝑆_{2𝑁} − 𝑆_{2𝑁−1} = 𝑎_{2𝑁} → 0, we get 𝐿₁ = 𝐿₂. Therefore the sequence of partial sums converges ⇒ the series ∑ₙ₌₀^∞ (−1)ⁿ𝑎ₙ converges.
...
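The monotonicity claims can be watched numerically on the example series. The sketch below is mine; note that my indexing starts the series at 𝑛 = 1 as in 1 − 1/2 + 1/3 − ⋯, so the roles of odd- and even-length sums are swapped relative to the notes' ∑ₙ₌₀ convention. The two strands are monotone in opposite directions and squeeze a common limit (which happens to be ln 2):

```python
# Partial sums of 1 - 1/2 + 1/3 - ...: odd-length sums decrease,
# even-length sums increase, and both converge to the same value.
import math

def S(N):
    return sum((-1) ** (n - 1) / n for n in range(1, N + 1))

odd = [S(2 * k - 1) for k in range(1, 200)]
even = [S(2 * k) for k in range(1, 200)]
print(odd[-1], even[-1], math.log(2))   # all three close together
```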


Suppose we need to find out whether the series ∑𝑎ₙ converges.
1) First check whether 𝑎ₙ → 0; if not, the series diverges.
2) Suppose 𝑎ₙ → 0, 𝑎ₙ ≥ 0. Then try:
   A. Comparison Test / Limit Comparison Test / Improved Comparison Test (compare with simple series like ∑𝑥ⁿ, ∑1/𝑛^𝑝, ∑1/(𝑛(𝑛 + 1)), …)
   B. Ratio Test
   C. Root Test
3) Suppose the 𝑎ₙ are not all positive; then we check whether ∑|𝑎ₙ| converges (absolute convergence), or try the Alternating Series Test.


Theorem: Suppose 𝑎ₙ, 𝑏ₙ > 0 ∀𝑛, and there is a real number 𝑙 ∈ ℝ∖{0} such that
lim_{𝑛→∞} 𝑎ₙ/𝑏ₙ = 𝑙
Then ∑𝑎ₙ converges ⇔ ∑𝑏ₙ converges. This means that, on the contrary, ∑𝑏ₙ diverges ⇒ ∑𝑎ₙ diverges.

Proof:
What does it mean for lim_{𝑛→∞} 𝑎ₙ/𝑏ₙ = 𝑙?
Given 𝜖 > 0, ∃𝑁 ∈ ℕ s.t. 𝑛 > 𝑁 ⇒ |𝑎ₙ/𝑏ₙ − 𝑙| < 𝜖. And so take 𝜖 = 𝑙/2; we have:
|𝑎ₙ/𝑏ₙ − 𝑙| < 𝑙/2
−𝑙/2 + 𝑙 < 𝑎ₙ/𝑏ₙ < 𝑙/2 + 𝑙
(𝑙/2)𝑏ₙ < 𝑎ₙ < (3𝑙/2)𝑏ₙ
Now we have that if ∑𝑏ₙ converges, then ∑(3𝑙/2)𝑏ₙ also converges, and since 𝑎ₙ < (3𝑙/2)𝑏ₙ, by the comparison test ∑𝑎ₙ converges. Taking it the other way around, if ∑𝑎ₙ converges, then ∑(𝑙/2)𝑏ₙ also converges, and so by the scalar product rule we have ∑𝑏ₙ also converges.
...
Theorem: Suppose 0 ≤ 𝑎ₙ ≤ 𝐶𝑏ₙ for some constant 𝐶 > 0 and all 𝑛. Then:
i.  ∑𝑏ₙ converges ⇒ ∑𝑎ₙ converges
ii. ∑𝑎ₙ diverges ⇒ ∑𝑏ₙ diverges

This is known as the Improved Comparison Test. This was proved earlier when we were proving the properties of a convergent series.
...
Theorem (Ratio Test): Suppose 𝑎ₙ ≠ 0 ∀𝑛 and lim_{𝑛→∞} |𝑎ₙ₊₁|/|𝑎ₙ| = 𝑙. Then:
i.   𝑙 < 1 ⇒ ∑𝑎ₙ converges absolutely
ii.  𝑙 > 1 ⇒ ∑𝑎ₙ is divergent and |𝑎ₙ| → +∞
iii. 𝑙 = 1 ⇒ the test gives no information

Proof (i):
Suppose 𝑙 < 1; then we choose 𝜖 > 0 s.t. 𝑙 + 𝜖 < 1. Then ∃𝑁 ∈ ℕ s.t. ∀𝑛 > 𝑁 we have
||𝑎ₙ₊₁|/|𝑎ₙ| − 𝑙| < 𝜖, by definition of a limit.
In particular |𝑎ₙ₊₁| < (𝑙 + 𝜖)|𝑎ₙ| for 𝑛 > 𝑁. Therefore we take 𝑘 ∈ ℕ so that 𝑛 = 𝑁 + 𝑘; iterating the inequality gives
|𝑎_{𝑁+𝑘}| ≤ (𝑙 + 𝜖)^{𝑘−1}|𝑎_{𝑁+1}|
Since 𝑙 + 𝜖 < 1, the geometric series ∑ₖ (𝑙 + 𝜖)^{𝑘−1}|𝑎_{𝑁+1}| converges, so by the comparison test ∑ₙ₌₁^∞ |𝑎ₙ| converges.


Proof (ii):
Assume lim_{𝑛→∞} |𝑎ₙ₊₁|/|𝑎ₙ| = 𝑙 > 1; then we choose 𝜖 > 0 s.t. 𝑙 − 𝜖 > 1. Then ∃𝑁 s.t. 𝑛 > 𝑁 ⇒
||𝑎ₙ₊₁|/|𝑎ₙ| − 𝑙| < 𝜖 ⇔ 𝑙 − 𝜖 < |𝑎ₙ₊₁|/|𝑎ₙ| < 𝑙 + 𝜖
⇒ |𝑎ₙ₊₁| > |𝑎ₙ|(𝑙 − 𝜖)

Similar to the proof in (i) we consider 𝑘 ∈ ℕ s.t. 𝑛 = 𝑁 + 𝑘:
|𝑎_{𝑁+𝑘}| ≥ |𝑎_{𝑁+𝑘−1}|(𝑙 − 𝜖) ≥ |𝑎_{𝑁+𝑘−2}|(𝑙 − 𝜖)² ≥ ⋯ ≥ |𝑎_{𝑁+1}|(𝑙 − 𝜖)^{𝑘−1}
As |𝑎_{𝑁+1}| > 0 and 𝑙 − 𝜖 > 1, then as 𝑘 → ∞ we have (𝑙 − 𝜖)^{𝑘−1} → ∞, and as such
|𝑎_{𝑁+1}|(𝑙 − 𝜖)^{𝑘−1} → +∞

We substitute in 𝑛 = 𝑁 + 𝑘 by definition, and so we obtain |𝑎ₙ| → +∞ as 𝑛 → ∞. In particular 𝑎ₙ does not tend to 0, so ∑𝑎ₙ diverges.




Theorem (Root Test): Suppose ⁿ√|𝑎ₙ| → 𝑙 ≥ 0 as 𝑛 → ∞. Then:
i.   𝑙 < 1 ⇒ ∑𝑎ₙ converges absolutely
ii.  𝑙 > 1 ⇒ ∑𝑎ₙ is divergent and |𝑎ₙ| → +∞
iii. 𝑙 = 1 ⇒ the test gives no information

The method of proof follows a similar concept to the ratio test. For instance, let us look at power series.
...
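As a bridge to power series, here is a small illustration of mine (not in the notes): for the exponential series 𝑎ₙ = 𝑥ⁿ/𝑛!, the ratio |𝑎ₙ₊₁|/|𝑎ₙ| = |𝑥|/(𝑛 + 1) tends to 0 < 1 for every fixed 𝑥, so the ratio test gives absolute convergence, and the partial sums approach 𝑒^𝑥:

```python
# Ratio test in action: for a_n = x^n/n! the ratios |x|/(n+1) shrink to 0,
# and the partial sums of the series approach exp(x).
import math

x = 5.0
ratios = [abs(x) / (n + 1) for n in range(1, 50)]
partial = sum(x ** n / math.factorial(n) for n in range(60))
print(ratios[-1])                 # well below 1 by now
print(partial, math.exp(x))       # agree closely
```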




We know that exp(𝑥) is essentially 𝑒^𝑥, and by the law of indices we have 𝑒^{𝑥+𝑦} = 𝑒^𝑥·𝑒^𝑦. Assume we have two infinite series ∑𝑎ₙ and ∑𝑏ₙ; then to multiply them together:
∑ₙ₌₀^∞ 𝑎ₙ · ∑ₙ₌₀^∞ 𝑏ₙ = (𝑎₀ + 𝑎₁ + 𝑎₂ ⋯)(𝑏₀ + 𝑏₁ + 𝑏₂ ⋯)

Let us now draw a table of all the products:

        𝒂₀      𝒂₁      𝒂₂     ⋯
𝒃₀    𝑎₀𝑏₀    𝑎₁𝑏₀    𝑎₂𝑏₀
𝒃₁    𝑎₀𝑏₁    𝑎₁𝑏₁    𝑎₂𝑏₁
𝒃₂    𝑎₀𝑏₂    𝑎₁𝑏₂    𝑎₂𝑏₂
 ⋮

We denote the sum of each diagonal as 𝑐ₙ (so 𝑐₀ = 𝑎₀𝑏₀, 𝑐₁ = 𝑎₁𝑏₀ + 𝑎₀𝑏₁, 𝑐₂ = 𝑎₂𝑏₀ + 𝑎₁𝑏₁ + 𝑎₀𝑏₂, …), where:
𝑐ₙ = 𝑎₀𝑏ₙ + 𝑎₁𝑏ₙ₋₁ + ⋯ + 𝑎ₙ₋₁𝑏₁ + 𝑎ₙ𝑏₀ = ∑ₖ₌₀ⁿ 𝑎ₖ𝑏ₙ₋ₖ

Now by inference:
∑ₙ₌₀^∞ 𝑎ₙ · ∑ₙ₌₀^∞ 𝑏ₙ = ∑ₙ₌₀^∞ 𝑐ₙ = ∑ₙ₌₀^∞ ∑ₖ₌₀ⁿ 𝑎ₖ𝑏ₙ₋ₖ     (∗)

However, it is important to note that this equation is not always true.

Theorem: Suppose ∑𝑎ₙ and ∑𝑏ₙ converge absolutely. Then (∗) holds; moreover, ∑𝑐ₙ will also be absolutely convergent.

Corollary: exp(𝑥 + 𝑦) = exp(𝑥) exp(𝑦)
Proof:
We essentially have:
exp(𝑥) exp(𝑦) = ∑ₙ₌₀^∞ 𝑥ⁿ/𝑛! · ∑ₙ₌₀^∞ 𝑦ⁿ/𝑛!
On the other hand, using the Binomial Theorem:
exp(𝑥 + 𝑦) = ∑ₙ₌₀^∞ (𝑥 + 𝑦)ⁿ/𝑛!
           = ∑ₙ₌₀^∞ (1/𝑛!) ∑ₖ₌₀ⁿ (𝑛 choose 𝑘) 𝑥^𝑘 𝑦^{𝑛−𝑘}
           = ∑ₙ₌₀^∞ ∑ₖ₌₀ⁿ (1/𝑛!) · (𝑛!/(𝑘!(𝑛 − 𝑘)!)) 𝑥^𝑘 𝑦^{𝑛−𝑘}
           = ∑ₙ₌₀^∞ ∑ₖ₌₀ⁿ (𝑥^𝑘/𝑘!)(𝑦^{𝑛−𝑘}/(𝑛 − 𝑘)!)
Now we want to morph this equation to be the same as equation (∗), where 𝑎ₖ = 𝑥^𝑘/𝑘! and 𝑏ₙ₋ₖ = 𝑦^{𝑛−𝑘}/(𝑛 − 𝑘)!
...
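The Cauchy-product identity can be sanity-checked numerically; this sketch is mine, not part of the notes. With 𝑎ₖ = 𝑥^𝑘/𝑘! and 𝑏ₖ = 𝑦^𝑘/𝑘!, summing the diagonal terms 𝑐ₙ reproduces exp(𝑥 + 𝑦):

```python
# c_n = sum_k a_k b_{n-k} with a_k = x^k/k!, b_k = y^k/k!; summing c_n over n
# should give exp(x + y), matching the binomial computation above.
import math

def c(n, x, y):
    return sum((x ** k / math.factorial(k)) * (y ** (n - k) / math.factorial(n - k))
               for k in range(n + 1))

x, y = 1.3, -0.7
total = sum(c(n, x, y) for n in range(40))
print(total, math.exp(x + y))   # both close to e^0.6
```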
We say that lim 𝑓(𝑥) = 2
...
We say that lim 𝑓(𝑥) = 3
...
So what do we mean
when we use all these terms?

Limits of Functions
Definition:
i.   Suppose 𝑓 is defined on (𝑎, 𝑏). lim_{𝑥→𝑎⁺} 𝑓(𝑥) = 𝑙 if ∀𝜖 > 0, ∃𝛿 > 0 s.t. 𝑥 ∈ (𝑎, 𝑎 + 𝛿) ⇒ |𝑓(𝑥) − 𝑙| < 𝜖
ii.  lim_{𝑥→𝑏⁻} 𝑓(𝑥) = 𝑙 if ∀𝜖 > 0, ∃𝛿 > 0 s.t. 𝑥 ∈ (𝑏 − 𝛿, 𝑏) ⇒ |𝑓(𝑥) − 𝑙| < 𝜖
iii. Suppose 𝑓 is defined on (𝑎, 𝑏), except possibly at 𝑐 ∈ (𝑎, 𝑏) (this is the same as saying 𝑓 is defined on (𝑎, 𝑐) ∪ (𝑐, 𝑏)). lim_{𝑥→𝑐} 𝑓(𝑥) = 𝑙 if ∀𝜖 > 0, ∃𝛿 > 0 s.t. 0 < |𝑥 − 𝑐| < 𝛿 ⇒ |𝑓(𝑥) − 𝑙| < 𝜖

[Figure: graph of 𝑦 = 𝑓(𝑥) near 𝑥 = 𝑐, showing 𝑓(𝑥) within 𝜖 of 𝑙 for 𝑐 − 𝛿 < 𝑥 < 𝑐 + 𝛿]


Example: 𝑓(𝑥) = 1 + 𝑥² cos(1/𝑥)
Claim: lim_{𝑥→0} 𝑓(𝑥) = 1

Proof:
Given 𝜖 > 0, we need to find 𝛿 > 0 s.t. 0 < |𝑥 − 0| < 𝛿 ⇒ |𝑓(𝑥) − 1| < 𝜖. Note that |𝑓(𝑥) − 1| = |𝑥² cos(1/𝑥)| ≤ 𝑥². Therefore, take 𝛿 = √𝜖, so we have:
|𝑥| < 𝛿 ⇒ |𝑥²| < 𝛿² = 𝜖
|𝑓(𝑥) − 1| = |𝑥² cos(1/𝑥)| ≤ 𝑥² < 𝜖
And therefore, when 𝑥 → 0, 𝑓(𝑥) → 1.




Notice that as we tackle such proofs, we need to pick 𝛿 skillfully, such that the original bounding condition implies |𝑓(𝑥) − 𝑙| < 𝜖. Let us take a look at other examples and their respective methods of proof.

Example: Prove that lim_{𝑥→1} 2𝑥 = 2
Proof:
Given 𝜖 > 0, we need 𝛿 > 0 s.t. 0 < |𝑥 − 1| < 𝛿 ⇒ |2𝑥 − 2| < 𝜖. We have:
|2𝑥 − 2| = 2|𝑥 − 1| < 𝜖
So we have to obtain a restriction on |𝑥 − 1|, and we immediately see that |𝑥 − 1| < 𝜖/2 suffices; take 𝛿 = 𝜖/2.

Example: Prove that lim_{𝑥→3} 𝑥² = 9

Proof:
Given 𝜖 > 0, we need to find 𝛿 > 0 s.t. 0 < |𝑥 − 3| < 𝛿 ⇒ |𝑥² − 9| < 𝜖. Note that |𝑥² − 9| = |𝑥 + 3||𝑥 − 3|. In order to obtain a suitable restriction on |𝑥 − 3|, it is necessary to consider the effect such a restriction has on |𝑥 + 3|. Therefore we can assume 𝛿 ≤ 1, and we get:
0 < |𝑥 − 3| < 1
Which can be manipulated to:
−1 < 𝑥 − 3 < 1
2 < 𝑥 < 4
5 < 𝑥 + 3 < 7
In other words,
|𝑥 + 3||𝑥 − 3| < 7|𝑥 − 3|
Therefore, if we want the inequality |𝑥² − 9| < 𝜖 to hold, then it is enough to ensure that 7|𝑥 − 3| < 𝜖. So take 𝛿 = min{1, 𝜖/7}.
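That choice of 𝛿 can be spot-checked on a grid of sample points; the check below is my own illustration, not part of the notes:

```python
# For lim_{x->3} x^2 = 9 with delta = min(1, eps/7), every sampled x with
# 0 < |x - 3| < delta should satisfy |x^2 - 9| < eps.
def check(eps, samples=10001):
    delta = min(1.0, eps / 7)
    for i in range(samples):
        x = 3 - delta + 2 * delta * i / (samples - 1)   # grid over (3-delta, 3+delta)
        if 0 < abs(x - 3) < delta and not abs(x * x - 9) < eps:
            return False
    return True

print(all(check(eps) for eps in (10.0, 1.0, 0.01)))
```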

Example: Prove that lim_{𝑥→1}(𝑥² − 5𝑥 + 7) = 3

Proof:
Given 𝜖 > 0, we need 𝛿 > 0 s.t. 0 < |𝑥 − 1| < 𝛿 ⇒ |(𝑥² − 5𝑥 + 7) − 3| < 𝜖. Note that |(𝑥² − 5𝑥 + 7) − 3| = |𝑥² − 5𝑥 + 4|. Now we want to find a suitable restriction on |𝑥 − 1|, so:
|𝑥² − 5𝑥 + 4| = |(𝑥 − 1)² − 3(𝑥 − 1)|
             ≤ |𝑥 − 1|² + 3|𝑥 − 1|     (using the triangle inequality)
Therefore, if 𝛿 ≤ 1, then we have:
|𝑥 − 1| < 1 ⇒ |𝑥 − 1|² + 3|𝑥 − 1| < 4|𝑥 − 1|
Observe now that if we want the inequality |(𝑥² − 5𝑥 + 7) − 3| < 𝜖 to hold, it is only necessary that 4|𝑥 − 1| < 𝜖. So take 𝛿 = min{1, 𝜖/4}.




Example: 𝑓: (0,1) → ℚ, given by
𝑓(𝑥) = { 0, 𝑥 ∉ ℚ ; 1/𝑞, 𝑥 ∈ ℚ, 𝑥 = 𝑝/𝑞 and 𝑝, 𝑞 are coprime }

Claim: ∀𝑐 ∈ (0,1), we have lim_{𝑥→𝑐} 𝑓(𝑥) = 0

Proof:
Given 𝜖 > 0, we need to find 𝛿 > 0 s.t. 0 < |𝑥 − 𝑐| < 𝛿 ⇒ |𝑓(𝑥)| < 𝜖. Choose 𝑁 so that 𝑁 > 1/𝜖, or 1/𝑁 < 𝜖. There are only finitely many rationals in (0,1) with denominator 𝑞 ≤ 𝑁; call them 𝑧₁, …, 𝑧ₙ. We choose 𝛿 so small that (𝑐 − 𝛿, 𝑐) ∪ (𝑐, 𝑐 + 𝛿) contains no points of {𝑧₁, …, 𝑧ₙ}; then 0 < |𝑥 − 𝑐| < 𝛿 ⇒ 𝑥 ≠ 𝑧ⱼ ∀𝑗, so therefore |𝑓(𝑥)| < 𝜖.
Definition: 𝑓: 𝑆 → ℝ is continuous at 𝑐 ∈ 𝑆 if ∀𝜖 > 0, ∃𝛿 > 0 s.t. |𝑥 − 𝑐| < 𝛿, 𝑥 ∈ 𝑆 ⇒ |𝑓(𝑥) − 𝑓(𝑐)| < 𝜖

Remark: "𝑓 is continuous on 𝑆" means that 𝑓 is continuous at each point 𝑐 ∈ 𝑆.

A more concise definition is that 𝑓 is continuous at 𝑐 if and only if:
lim_{𝑥→𝑐⁺} 𝑓(𝑥) = lim_{𝑥→𝑐⁻} 𝑓(𝑥) = lim_{𝑥→𝑐} 𝑓(𝑥) = 𝑓(𝑐)

Example: 𝑓(𝑥) = { 0, 𝑥 ∉ ℚ ; 1/𝑞, 𝑥 ∈ ℚ, 𝑥 = 𝑝/𝑞 }
Then lim_{𝑥→𝑐} 𝑓(𝑥) = 0, and so 𝑓 is continuous at irrational points, but discontinuous at rationals.

Proof:
For every 𝜖 > 0 we need to determine those 𝑥 for which |𝑓(𝑥) − 𝑓(0)| < 𝜖
...
So if we let 𝛿 = √ 𝜖, then when |𝑥 − 0| < 𝛿 ⇒
|𝑓(𝑥) − 𝑓(0)| < 𝜖
...

𝑥→0

𝑥→0

Hence, 𝑓 is continuous at 0
...


Theorem:
i.   Suppose 𝑓 is defined on (𝑎, 𝑏), except possibly at 𝑐 ∈ (𝑎, 𝑏). Then lim_{𝑥→𝑐} 𝑓(𝑥) = 𝑙 ⇔ for any sequence ⟨𝑥ₙ⟩ with 𝑥ₙ ≠ 𝑐, we have 𝑥ₙ → 𝑐 ⇒ 𝑓(𝑥ₙ) → 𝑙
ii.  lim_{𝑥→𝑎⁺} 𝑓(𝑥) = 𝑙 ⇔ for any sequence ⟨𝑥ₙ⟩ with 𝑥ₙ ∈ (𝑎, 𝑏), so 𝑥ₙ > 𝑎, we have 𝑥ₙ → 𝑎 ⇒ 𝑓(𝑥ₙ) → 𝑙
iii. 𝑓: 𝑆 → ℝ is continuous at 𝑐 ∈ 𝑆 ⇔ for each sequence ⟨𝑥ₙ⟩ with 𝑥ₙ ∈ 𝑆 and 𝑥ₙ → 𝑐 ⇒ 𝑓(𝑥ₙ) → 𝑓(𝑐)

And so their limits are the same, meaning that if 𝑥 → 𝑐 ⇒ 𝑓(𝑥) → 𝑙, then 𝑥ₙ → 𝑐 ⇒ 𝑓(𝑥ₙ) → 𝑙.


Proof of (iii):
⇒ Suppose 𝑓 is continuous at 𝑐 and ⟨𝑥ₙ⟩ is a sequence with 𝑥ₙ ∈ 𝑆 and 𝑥ₙ → 𝑐.

Since 𝑓 is continuous, then given 𝜖 > 0, ∃𝛿 > 0 s.t. |𝑥 − 𝑐| < 𝛿, 𝑥 ∈ 𝑆 ⇒ |𝑓(𝑥) − 𝑓(𝑐)| < 𝜖.

However, we know that 𝑥ₙ → 𝑐, so given 𝛿 > 0, ∃𝑁 ∈ ℝ s.t. 𝑛 > 𝑁 ⇒ |𝑥ₙ − 𝑐| < 𝛿, by definition of a limit. Therefore, for 𝑛 > 𝑁, 𝑥ₙ falls within the 𝛿-neighbourhood of 𝑐, so |𝑓(𝑥ₙ) − 𝑓(𝑐)| < 𝜖, i.e. 𝑓(𝑥ₙ) → 𝑓(𝑐) as 𝑛 → ∞.

⇐ Now suppose that whenever 𝑥ₙ → 𝑐, this implies that 𝑓(𝑥ₙ) → 𝑓(𝑐).

We will prove this by use of a contradiction. Suppose 𝑓 is not continuous at 𝑐: ∃𝜖 > 0 s.t. ∀𝛿 > 0, ∃𝑥 ∈ 𝑆 with |𝑥 − 𝑐| < 𝛿 but |𝑓(𝑥) − 𝑓(𝑐)| ≥ 𝜖. Take 𝛿 = 1/𝑛 and pick such a point 𝑥ₙ for each 𝑛.

Then,
𝑐 − 1/𝑛 ≤ 𝑥ₙ ≤ 𝑐 + 1/𝑛
lim(𝑐 − 1/𝑛) = 𝑐 and lim(𝑐 + 1/𝑛) = 𝑐, so by the sandwich theorem 𝑥ₙ → 𝑐. But |𝑓(𝑥ₙ) − 𝑓(𝑐)| ≥ 𝜖, so 𝑓(𝑥ₙ) does not tend to 𝑓(𝑐). In other words, 𝑥ₙ → 𝑐 ⇏ 𝑓(𝑥ₙ) → 𝑓(𝑐), which contradicts our first statement. 𝑓 must therefore be continuous.
Theorem (Algebra of limits for functions): Suppose lim_{𝑥→𝑐} 𝑓(𝑥) = 𝐴 and lim_{𝑥→𝑐} 𝑔(𝑥) = 𝐵. Then:
i.   lim_{𝑥→𝑐} [𝑓(𝑥) + 𝑔(𝑥)] = 𝐴 + 𝐵
ii.  lim_{𝑥→𝑐} [𝑓(𝑥)𝑔(𝑥)] = 𝐴𝐵
iii. lim_{𝑥→𝑐} [𝑓(𝑥)/𝑔(𝑥)] = 𝐴/𝐵, provided 𝐵 ≠ 0

Theorem: Suppose 𝑓(𝑥) ≤ 𝑔(𝑥) ≤ ℎ(𝑥) ∀𝑥 s.t. 0 < |𝑥 − 𝑐| < 𝛿, and lim_{𝑥→𝑐} 𝑓(𝑥) = lim_{𝑥→𝑐} ℎ(𝑥) = 𝐿; then lim_{𝑥→𝑐} 𝑔(𝑥) = 𝐿 also.

Proof:
Suppose 𝑥ₙ → 𝑐, 𝑥ₙ ≠ 𝑐; then lim 𝑓(𝑥ₙ) = lim ℎ(𝑥ₙ) = 𝐿, and by the sandwich theorem for sequences, 𝑔(𝑥ₙ) → 𝐿. Therefore, lim_{𝑥→𝑐} 𝑔(𝑥) = 𝐿.



Example: 𝑓(𝑥) = √𝑥 cos(1/𝑥)
Claim: lim_{𝑥→0⁺} √𝑥 cos(1/𝑥) = 0

Proof:
We have that −√𝑥 ≤ √𝑥 cos(1/𝑥) ≤ √𝑥, and both bounds tend to 0 as 𝑥 → 0⁺, so the claim follows by the sandwich theorem.

Theorem: Suppose 𝑓, 𝑔: 𝑆 → ℝ are continuous at 𝑐 ∈ 𝑆; then 𝑓 + 𝑔 and 𝑓𝑔 are continuous at 𝑐.

Proof of product rule:
Suppose 𝑥ₙ → 𝑐; then 𝑓(𝑥ₙ) → 𝑓(𝑐) and 𝑔(𝑥ₙ) → 𝑔(𝑐) because 𝑓 and 𝑔 are continuous. By the algebra of limits for sequences, 𝑓(𝑥ₙ)𝑔(𝑥ₙ) → 𝑓(𝑐)𝑔(𝑐), so 𝑓·𝑔 is continuous at 𝑐. It follows that all polynomials are continuous. Also, all rational functions (i.e. functions of the form 𝑓(𝑥) = 𝑃(𝑥)/𝑄(𝑥), where 𝑃 and 𝑄 are polynomials) are continuous on {𝑥 ∈ ℝ : 𝑄(𝑥) ≠ 0}.
In particular, because exp(𝑥) exp(−𝑥) = exp(0) = 1, we have exp(−𝑥) = 1/exp(𝑥)
...

Theorem: exp is continuous on ℝ. We will prove this theorem using 2 steps.

Step 1 – exp is continuous at 0:
Suppose −1 < 𝑥 < 1; then:
1 + 𝑥 ≤ exp(𝑥) = 1/exp(−𝑥)
But we also know that:
exp(−𝑥) ≥ 1 − 𝑥
⇒ 1/exp(−𝑥) ≤ 1/(1 − 𝑥)
⇒ 1 + 𝑥 ≤ exp(𝑥) ≤ 1/(1 − 𝑥)
We have that lim_{𝑥→0} (1 + 𝑥) = lim_{𝑥→0} 1/(1 − 𝑥) = 1, so by the sandwich theorem we have that lim_{𝑥→0} exp(𝑥) = 1 = exp(0).

Step 2 – exp(𝑥) is continuous at any 𝑐 ∈ ℝ:
Suppose 𝑥ₙ → 𝑐. Therefore,
exp(𝑥ₙ) = exp(𝑥ₙ − 𝑐 + 𝑐) = exp(𝑥ₙ − 𝑐) exp(𝑐) → exp(𝑐)
This is because, as 𝑥ₙ − 𝑐 → 0, we have exp(𝑥ₙ − 𝑐) → 1, since we proved that exp(𝑥) is continuous at 𝑥 = 0 in step 1.




Theorem: Suppose 𝑔 is continuous at 𝑐 ∈ ℝ, and 𝑓 is continuous at 𝑔(𝑐); then 𝑓∘𝑔 is continuous at 𝑐.
Proof:
Suppose 𝑥ₙ → 𝑐; then 𝑔(𝑥ₙ) → 𝑔(𝑐) by continuity of 𝑔. Now since:
𝑓∘𝑔(𝑥ₙ) = 𝑓(𝑔(𝑥ₙ)) → 𝑓(𝑔(𝑐)) = 𝑓∘𝑔(𝑐)
⇒ 𝑓∘𝑔 is continuous at 𝑐.


Properties of Continuous Functions
Suppose 𝑎, 𝑏 ∈ ℝ and 𝑓: [𝑎, 𝑏] → ℝ is continuous. Denote the image of [𝑎, 𝑏]:
𝑓([𝑎, 𝑏]) = {𝑓(𝑥) | 𝑥 ∈ [𝑎, 𝑏]}

Theorem 1: If 𝑓: [𝑎, 𝑏] → ℝ is continuous, then 𝑓 is bounded, i.e.
∃𝑀 = sup 𝑓([𝑎, 𝑏]) and ∃𝑚 = inf 𝑓([𝑎, 𝑏])

Theorem 2: A continuous 𝑓: [𝑎, 𝑏] → ℝ attains its maximum and minimum: ∃𝑥_max, 𝑥_min ∈ [𝑎, 𝑏] s.t.
𝑀 = 𝑓(𝑥_max) and 𝑚 = 𝑓(𝑥_min)
And 𝑓(𝑥_min) = 𝑚 ≤ 𝑓(𝑥) ≤ 𝑀 = 𝑓(𝑥_max) ∀𝑥 ∈ [𝑎, 𝑏]
Remark: 𝑓: [𝑎, 𝑏] → ℝ is continuous if
i.  At any point 𝑐 ∈ (𝑎, 𝑏) we have lim_{𝑥→𝑐} 𝑓(𝑥) = 𝑓(𝑐)
ii. lim_{𝑥→𝑎⁺} 𝑓(𝑥) = 𝑓(𝑎) and lim_{𝑥→𝑏⁻} 𝑓(𝑥) = 𝑓(𝑏)

Proof of Theorem 1:
Assume 𝑓 is not bounded, say 𝑓 is unbounded above, i.e. 𝑓([𝑎, 𝑏]) has no upper bound. In particular, ∀𝑛 ∈ ℕ, ∃𝑥_𝑛 ∈ [𝑎, 𝑏] s.t. 𝑓(𝑥_𝑛) > 𝑛.
Therefore, based on our assumption that 𝑓 is unbounded, as 𝑛 → +∞, then 𝑓(𝑥_𝑛) → +∞.
The sequence ⟨𝑥_𝑛⟩ lies in [𝑎, 𝑏], so it is bounded, and by the Bolzano–Weierstrass theorem it has a convergent subsequence. So let this subsequence be ⟨𝑥_{𝑗_𝑛}⟩:
𝑥_{𝑗_𝑛} → 𝑥_∞, 𝑥_∞ = lim(𝑥_{𝑗_𝑛})
Claim: 𝑥_∞ ∈ [𝑎, 𝑏]
Proof of claim:
Suppose that this is not true. That means 𝑥_∞ lies outside the interval [𝑎, 𝑏], say 𝑥_∞ < 𝑎. For all large 𝑛,
𝑥_{𝑗_𝑛} ∈ (𝑥_∞ − 𝜖, 𝑥_∞ + 𝜖)
If we choose a small 𝜖 > 0 s.t. 𝑥_∞ + 𝜖 < 𝑎, this contradicts the statement that 𝑥_{𝑗_𝑛} ∈ [𝑎, 𝑏]. (The case 𝑥_∞ > 𝑏 is similar.)
And so, we know that 𝑥_∞ ∈ [𝑎, 𝑏]. Since 𝑓 is continuous at 𝑥_∞, we have 𝑓(𝑥_{𝑗_𝑛}) → 𝑓(𝑥_∞). But 𝑓(𝑥_∞) is supposed to be a real number, and therefore this contradicts that 𝑓(𝑥_{𝑗_𝑛}) → +∞. Hence 𝑓 is bounded.


Definition:

A set 𝑆 ⊂ ℝ is said to be compact (or sequentially compact) if for every sequence 𝑥_𝑛 ∈ 𝑆 there is a convergent subsequence 𝑥_{𝑗_𝑛} s.t. lim(𝑥_{𝑗_𝑛}) ∈ 𝑆.

Remark: It is not always true that 𝑥_𝑛 > 𝑎 ⇒ lim(𝑥_𝑛) > 𝑎; strict inequalities need not survive the limit, though 𝑥_𝑛 ≥ 𝑎 ⇒ lim(𝑥_𝑛) ≥ 𝑎 does hold.

Because we know the definitions of supremum and infimum, we can infer that Theorem 2 ⇒ Theorem 1.

Proof of Theorem 2:
Suppose 𝑓: [𝑎, 𝑏] → ℝ is continuous. By Theorem 1, 𝑓([𝑎, 𝑏]) is bounded. This means:
∃𝑀 = sup 𝑓([𝑎, 𝑏])
Claim: There will always be a sequence 𝑦_𝑛 ∈ 𝑓([𝑎, 𝑏]) s.t. 𝑦_𝑛 → 𝑀.
By definition of a supremum, for each 𝑛 ∈ ℕ, ∃𝑦_𝑛 ∈ 𝑓([𝑎, 𝑏]) with 𝑦_𝑛 > 𝑀 − 1/𝑛. Therefore,
𝑀 − 1/𝑛 < 𝑦_𝑛 ≤ 𝑀
And by the sandwich theorem, as 𝑛 → ∞, then 𝑦_𝑛 → 𝑀.
Each 𝑦_𝑛 = 𝑓(𝑥_𝑛) for some 𝑥_𝑛 ∈ [𝑎, 𝑏]. By Bolzano–Weierstrass, ∃ a convergent subsequence 𝑥_{𝑗_𝑛} → 𝑥_max ∈ [𝑎, 𝑏].
But 𝑓 is continuous at 𝑥_max, so we have:
𝑀 = lim 𝑦_𝑛 = lim 𝑓(𝑥_{𝑗_𝑛}) = 𝑓(lim 𝑥_{𝑗_𝑛}) = 𝑓(𝑥_max)
Similarly, we can find 𝑥_min s.t. 𝑓(𝑥_min) = 𝑚 = inf 𝑓([𝑎, 𝑏]).
Theorem 3: Suppose 𝑓: [𝑎, 𝑏] → ℝ is continuous and 𝜆 lies between 𝑓(𝑎) and 𝑓(𝑏). Then ∃𝑐 ∈ [𝑎, 𝑏] s.t. 𝑓(𝑐) = 𝜆.

This is known as the Intermediate Value Theorem. Along with this intermediate value theorem, it is implied that 𝑓([𝑎, 𝑏]) = [𝑓(𝑥_min), 𝑓(𝑥_max)], which states that the image of a closed interval is also a closed interval. This is because:


i.   𝑓([𝑎, 𝑏]) is bounded, by Theorem 1
ii.  𝑓([𝑎, 𝑏]) contains 𝑓(𝑥_min) and 𝑓(𝑥_max), by Theorem 2
iii. 𝑓([𝑎, 𝑏]) contains everything in between, by the intermediate value theorem

Remark: It is not always true that 𝑎 = 𝑥_min and 𝑏 = 𝑥_max.
Proof of Theorem 3:
If 𝜆 = 𝑓(𝑎), we will take 𝑐 = 𝑎. If 𝜆 = 𝑓(𝑏), we will take 𝑐 = 𝑏. If not, then we shall assume 𝑓(𝑎) < 𝜆 < 𝑓(𝑏).

Define the set 𝑆 = {𝑥 ∈ [𝑎, 𝑏] : 𝑓(𝑥) < 𝜆}.

[Figure: graph of 𝑦 = 𝑓(𝑥) on [𝑎, 𝑏], with the level 𝜆 drawn as a horizontal line and the set 𝑆 marked on the 𝑥-axis.]

Because 𝑓(𝑎) ≤ 𝜆 ≤ 𝑓(𝑏) and 𝜆 ≠ 𝑓(𝑎), then 𝑆 ≠ ∅ (since 𝑎 ∈ 𝑆) and 𝑆 ⊂ [𝑎, 𝑏], so 𝑆 is bounded and therefore it has a supremum. Let 𝑐 = sup 𝑆. Let us use the definition of continuity at 𝑐 to rule out two cases:

1) Suppose 𝑓(𝑐) < 𝜆. By continuity, ∃𝛿 > 0 s.t. 𝑓(𝑥) < 𝜆 ∀𝑥 ∈ (𝑐 − 𝛿, 𝑐 + 𝛿). So we take 𝑥 = 𝑐 + 𝛿/2, for which we have 𝑓(𝑐 + 𝛿/2) < 𝜆, so 𝑐 + 𝛿/2 ∈ 𝑆. This contradicts 𝑐 = sup 𝑆. Therefore 𝑓(𝑐) is not smaller than 𝜆.

2) Suppose 𝑓(𝑐) > 𝜆. By continuity, ∃𝛿 > 0 s.t. 𝑓(𝑥) > 𝜆 ∀𝑥 ∈ (𝑐 − 𝛿, 𝑐 + 𝛿), so no point of this interval lies in 𝑆. Since 𝑐 is an upper bound of 𝑆, then this implies that 𝑐 − 𝛿 is also an upper bound of 𝑆, smaller than sup 𝑆. We have therefore arrived at a contradiction.

3) Therefore, 𝑓(𝑐) = 𝜆.
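The proof locates 𝑐 = sup 𝑆 non-constructively, but the same idea of trapping 𝜆 between function values gives the bisection method. A minimal Python sketch (the test function, interval, and tolerance are illustrative assumptions, not from the notes):

```python
def bisect(f, a, b, lam, tol=1e-12):
    """Assuming f is continuous with f(a) <= lam <= f(b), return c with
    f(c) close to lam. Each step keeps lam between f(lo) and f(hi),
    mirroring the set S = {x : f(x) < lam} from the proof."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < lam:
            lo = mid   # mid is in S, so sup S lies to the right
        else:
            hi = mid   # mid is an upper bound for S
    return (lo + hi) / 2.0

# Solve x**3 = 5 on [0, 2]: the intermediate value theorem guarantees a solution.
c = bisect(lambda x: x ** 3, 0.0, 2.0, 5.0)
```

Halving the interval at every step is exactly why continuity on a closed interval is needed: the endpoints always straddle 𝜆.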
Suppose 𝑓: 𝐴 → 𝐵 is a bijection. This means that:
∀𝑦 ∈ 𝐵, ∃! 𝑥 ∈ 𝐴 s.t. 𝑓(𝑥) = 𝑦

Remark: The exclamation mark ‘!’ means there is a unique value of 𝑥.

Define an inverse function:
𝑓⁻¹: 𝐵 → 𝐴 by 𝑓⁻¹(𝑦) = 𝑥 ⇔ 𝑓(𝑥) = 𝑦
In particular,
𝑓⁻¹ ∘ 𝑓(𝑥) = 𝑥, ∀𝑥 ∈ 𝐴
𝑓 ∘ 𝑓⁻¹(𝑦) = 𝑦, ∀𝑦 ∈ 𝐵

Example: 𝑓(𝑥) = 𝑥^𝑛, 𝑛 ∈ ℕ

𝑓: [0, +∞) → [0, +∞) is a bijection, because it is surjective and injective.
Claim: exp is strictly increasing. Take 𝑥′ = 𝑥 + ℎ with ℎ > 0. Therefore,
exp(𝑥′)/exp(𝑥) = exp(𝑥) exp(ℎ)/exp(𝑥) = exp(ℎ) > 1 because ℎ > 0 (since exp(ℎ) ≥ 1 + ℎ > 1)
Thus, exp is strictly increasing and therefore, exp is injective, meaning that the equation exp(𝑥) = 𝑦 cannot have more than one solution 𝑥.

Claim: exp is surjective onto the interval (0, +∞). (We use the intermediate value theorem.)
Proof of Claim:
Since exp(𝑥) ≥ 1 + 𝑥, we have for 𝑥 > 0:
exp(𝑥) > 1
exp(0) = 1
And for 𝑥 < 0 we have:
exp(𝑥) = 1/exp(−𝑥) ∈ (0, 1)
Take any 𝑦 ∈ (0, +∞). Because we know lim_{𝑥→+∞} exp(𝑥) = +∞, ∃𝑏 ∈ ℝ s.t. exp(𝑏) ≥ 𝑦.
We also know that exp(𝑥) = 1/exp(−𝑥) → 0 as 𝑥 → −∞, so ∃𝑎 ∈ ℝ s.t. exp(𝑎) < 𝑦.
By the intermediate value theorem applied on [𝑎, 𝑏], ∃𝑥 s.t. exp(𝑥) = 𝑦.




Definition:

ln: (0, +∞) → ℝ is the function inverse to exp.

Definition:

e ≔ exp(1) = 2.7182818284590…

Proposition: For 𝑥 ∈ ℚ, we have exp(𝑥) = 𝑒^𝑥.
e.g. exp(2) = exp(1 + 1) = exp(1) exp(1) = 𝑒²

For 𝑎 > 0, we define
𝑎^𝑥 = exp(𝑥 ln 𝑎)
For 𝑎 ≠ 1 this is a continuous bijection ℝ → (0, +∞).
Theorem: Suppose 𝑓 is a continuous bijection from an interval onto an interval. Then 𝑓⁻¹ is also continuous.

-

Attempt Homework 8


Chapter 5: Differentiation
Differentiation is a technique used to determine the slope of a curve.

Step 1: The slope of a straight line 𝑦 = 𝑎𝑥 + 𝑏. The slope is 𝑎.

Step 2: The slope of 𝑦 = 𝑓(𝑥) at 𝑥 = 𝑐.
Let 𝐿′ be a straight line passing through (𝑐, 𝑓(𝑐)) and (𝑐 + ℎ, 𝑓(𝑐 + ℎ)). Its slope is (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ, and as ℎ → 0 the line 𝐿′ approaches the tangent line at (𝑐, 𝑓(𝑐)).

Differentiable Functions
Definition:

Suppose 𝑆 ⊂ ℝ, 𝑓: 𝑆 → ℝ, and 𝑐 is an interior point of 𝑆. We say that 𝑓 is differentiable at 𝑐 if the following limit exists:
lim_{ℎ→0} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ = lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐)
This limit is then called the derivative of 𝑓 at 𝑐, denoted 𝑓′(𝑐).


Examples:
1) 𝑓(𝑥) = 𝑘 (a constant):
𝑓′(𝑐) = lim_{ℎ→0} (𝑘 − 𝑘)/ℎ = 0

2) 𝑓(𝑥) = 𝑥:
𝑓′(𝑐) = lim_{ℎ→0} (𝑐 + ℎ − 𝑐)/ℎ = lim_{ℎ→0} 1 = 1
3) 𝑓(𝑥) = 𝑥²:
𝑓′(𝑐) = lim_{ℎ→0} ((𝑐 + ℎ)² − 𝑐²)/ℎ
= lim_{ℎ→0} (𝑐² + 2𝑐ℎ + ℎ² − 𝑐²)/ℎ
= lim_{ℎ→0} ℎ(2𝑐 + ℎ)/ℎ
= lim_{ℎ→0} (2𝑐 + ℎ) = 2𝑐
Therefore 𝑓′(𝑥) = 2𝑥.
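The limit can also be watched numerically: for 𝑓(𝑥) = 𝑥² the difference quotient equals exactly 2𝑐 + ℎ, so its error is ℎ itself. A small Python sketch (the helper name and test point are illustrative):

```python
def diff_quotient(f, c, h):
    # (f(c+h) - f(c)) / h, the slope of the chord through c and c + h
    return (f(c + h) - f(c)) / h

c = 3.0
# For f(x) = x^2 the quotient is exactly 2c + h, so the error shrinks with h.
for h in (1e-1, 1e-3, 1e-5):
    q = diff_quotient(lambda x: x * x, c, h)
    assert abs(q - 2.0 * c) <= h + 1e-9
```

This mirrors the algebra above: the cancellation of 𝑐² leaves 2𝑐 + ℎ before the limit is taken.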
...

6) 𝑓(𝑥) = |𝑥| = { 𝑥, 𝑥 ≥ 0; −𝑥, 𝑥 < 0 }
Let 𝑐 = 0. Then (𝑓(0 + ℎ) − 𝑓(0))/ℎ = |ℎ|/ℎ, which equals 1 for ℎ > 0 and −1 for ℎ < 0. The two one-sided limits differ, so 𝑓 is not differentiable at 0.


We therefore now get the general idea that 𝑓 is differentiable at point 𝑐 if it is well approximated by a linear function near 𝑐. More precisely: 𝑓 is differentiable at 𝑐 if and only if ∃𝑚 ∈ ℝ and a function 𝑅(ℎ) s.t.
𝑓(𝑐 + ℎ) = 𝑓(𝑐) + 𝑚ℎ + 𝑅(ℎ)ℎ, where lim_{ℎ→0} 𝑅(ℎ) = 0   (∗)
Then, 𝑚 = 𝑓′(𝑐).

𝑓(𝑐) + 𝑚ℎ is the 𝑦-coordinate defined by the linear function 𝐿 at 𝑥 = 𝑐 + ℎ, and 𝑅(ℎ)ℎ is the remainder (the discrepancy between 𝑓(𝑐) + 𝑚ℎ and 𝑓(𝑐 + ℎ)).

Proof:
Suppose ∃𝑓′(𝑐) = lim_{ℎ→0} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ because 𝑓 is differentiable at 𝑐. Define 𝑅(ℎ) = (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ − 𝑓′(𝑐); then lim_{ℎ→0} 𝑅(ℎ) = 0 and (∗) holds with 𝑚 = 𝑓′(𝑐).

Conversely, suppose equation (∗) holds. Then (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ = 𝑚 + 𝑅(ℎ) → 𝑚 as ℎ → 0, so 𝑓 is differentiable at 𝑐 with 𝑓′(𝑐) = 𝑚.

Theorem: Suppose 𝑓 and 𝑔 are differentiable at 𝑐. Then 𝑓 + 𝑔 and 𝑓𝑔 are differentiable at 𝑐, and:
(𝑓 + 𝑔)′(𝑐) = 𝑓′(𝑐) + 𝑔′(𝑐)
(𝑓𝑔)′(𝑐) = 𝑓′(𝑐)𝑔(𝑐) + 𝑔′(𝑐)𝑓(𝑐)
The second formula is known as the Product Rule.

Proof for the product: using (∗) for 𝑓 and 𝑔,
(𝑓𝑔)(𝑐 + ℎ) = 𝑓(𝑐 + ℎ)𝑔(𝑐 + ℎ)
= [𝑓(𝑐) + 𝑓′(𝑐)ℎ + 𝑅_𝑓(ℎ)ℎ][𝑔(𝑐) + 𝑔′(𝑐)ℎ + 𝑅_𝑔(ℎ)ℎ]
= 𝑓(𝑐)𝑔(𝑐) + ℎ[𝑓(𝑐)𝑔′(𝑐) + 𝑔(𝑐)𝑓′(𝑐)]
  + ℎ[𝑓(𝑐)𝑅_𝑔(ℎ) + 𝑔(𝑐)𝑅_𝑓(ℎ) + 𝑓′(𝑐)𝑔′(𝑐)ℎ + 𝑓′(𝑐)𝑅_𝑔(ℎ)ℎ + 𝑔′(𝑐)𝑅_𝑓(ℎ)ℎ + 𝑅_𝑓(ℎ)𝑅_𝑔(ℎ)ℎ]
We have the linear function 𝑓(𝑐)𝑔(𝑐) + ℎ[𝑓(𝑐)𝑔′(𝑐) + 𝑔(𝑐)𝑓′(𝑐)]. Since 𝑅_𝑔(ℎ), 𝑅_𝑓(ℎ), ℎ → 0 as ℎ → 0, the last bracket does indeed tend to zero, so it plays the role of 𝑅(ℎ) in (∗). Thus 𝑓𝑔 is differentiable at 𝑐 and
(𝑓𝑔)′(𝑐) = 𝑓′(𝑐)𝑔(𝑐) + 𝑔′(𝑐)𝑓(𝑐)




Theorem: If 𝑓 and 𝑔 are differentiable at 𝑐, and 𝑔(𝑐) ≠ 0, then 𝑓/𝑔 is also differentiable at 𝑐 and:
(𝑓/𝑔)′(𝑐) = (𝑓′(𝑐)𝑔(𝑐) − 𝑔′(𝑐)𝑓(𝑐))/𝑔(𝑐)²

This is known as the Quotient Rule.


Theorem: Suppose 𝑔 is differentiable at 𝑐 and 𝑓 is differentiable at 𝑔(𝑐). Then 𝑓 ∘ 𝑔 is differentiable at 𝑐 and
(𝑓 ∘ 𝑔)′(𝑐) = 𝑓′(𝑔(𝑐))𝑔′(𝑐)
This is known as the Chain Rule.

Proof of Chain Rule:
𝑔 is differentiable at 𝑐, so 𝑔(𝑐 + ℎ) = 𝑔(𝑐) + 𝑔′(𝑐)ℎ + 𝑅_𝑔(ℎ)ℎ, where lim_{ℎ→0} 𝑅_𝑔(ℎ) = 0. Put 𝑘 = 𝑔′(𝑐)ℎ + 𝑅_𝑔(ℎ)ℎ, so that 𝑔(𝑐 + ℎ) = 𝑔(𝑐) + 𝑘.
𝑓 is differentiable at 𝑔(𝑐). Therefore, 𝑓(𝑔(𝑐) + 𝑘) = 𝑓(𝑔(𝑐)) + 𝑓′(𝑔(𝑐))𝑘 + 𝑅_𝑓(𝑘)𝑘, where lim_{𝑘→0} 𝑅_𝑓(𝑘) = 0. So as ℎ → 0, note that 𝑔′(𝑐)ℎ and 𝑅_𝑔(ℎ)ℎ both tend to zero as well, so 𝑘 → 0 as ℎ → 0. Substituting 𝑘 gives
𝑓 ∘ 𝑔(𝑐 + ℎ) = 𝑓(𝑔(𝑐)) + 𝑓′(𝑔(𝑐))𝑔′(𝑐)ℎ + [𝑓′(𝑔(𝑐))𝑅_𝑔(ℎ) + 𝑅_𝑓(𝑘)(𝑔′(𝑐) + 𝑅_𝑔(ℎ))]ℎ
and the bracket tends to zero as ℎ → 0. Therefore, we have that 𝑓 ∘ 𝑔 is differentiable at 𝑐 and (𝑓 ∘ 𝑔)′(𝑐) = 𝑓′(𝑔(𝑐))𝑔′(𝑐).
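A quick numerical sanity check of the chain rule formula, comparing a symmetric difference quotient of 𝑓 ∘ 𝑔 against 𝑓′(𝑔(𝑐))𝑔′(𝑐); the particular 𝑓, 𝑔, test point, and step size are illustrative choices, not from the notes:

```python
import math

def num_deriv(f, c, h=1e-6):
    # symmetric difference quotient, error O(h^2) for smooth f
    return (f(c + h) - f(c - h)) / (2.0 * h)

g = lambda x: x * x + 1.0   # g'(x) = 2x
f = math.exp                # f'(y) = exp(y)
c = 0.5

lhs = num_deriv(lambda x: f(g(x)), c)   # numerical (f o g)'(c)
rhs = math.exp(g(c)) * 2.0 * c          # f'(g(c)) * g'(c), the chain rule
assert abs(lhs - rhs) < 1e-4
```

The two numbers agree to several digits, as the theorem predicts for smooth 𝑓 and 𝑔.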
By induction from the product rule, if 𝑓₁, 𝑓₂, …, 𝑓_𝑛 are differentiable at 𝑐 then
(𝑓₁𝑓₂ … 𝑓_𝑛)′ = 𝑓₁′𝑓₂ … 𝑓_𝑛 + 𝑓₁𝑓₂′ … 𝑓_𝑛 + ⋯ + 𝑓₁𝑓₂ … 𝑓_𝑛′

Examples:
1) 𝑥^𝑛, 𝑛 ∈ ℕ. Therefore,
(𝑥^𝑛)′ = (𝑥 ∙ 𝑥 ⋯ 𝑥)′ = 𝑥′𝑥 ⋯ 𝑥 + ⋯ + 𝑥 ⋯ 𝑥 𝑥′
There are 𝑛 terms, each equal to 𝑥^{𝑛−1}, so
(𝑥^𝑛)′ = 𝑛𝑥^{𝑛−1}
2) 𝑥^𝑛, 𝑛 = 0: 𝑥⁰ = 1, so (𝑥⁰)′ = 0.

3) 𝑥^𝑛, 𝑛 < 0 an integer, 𝑥 ≠ 0: write 𝑥^𝑛 = 1/𝑥^{−𝑛}. By the quotient rule,
(𝑥^𝑛)′ = (1/𝑥^{−𝑛})′ = −(−𝑛)𝑥^{−𝑛−1}/(𝑥^{−𝑛})² = 𝑛𝑥^{−𝑛−1+2𝑛} = 𝑛𝑥^{𝑛−1}
4) √𝑥, 𝑥 > 0: assuming √𝑥 is differentiable, apply the product rule to √𝑥 ∙ √𝑥 = 𝑥:
(√𝑥 ∙ √𝑥)′ = (𝑥)′ = 1
(√𝑥 ∙ √𝑥)′ = (√𝑥)′√𝑥 + √𝑥(√𝑥)′ = 2√𝑥 (√𝑥)′
Therefore, equate the two right-hand sides to get:
(√𝑥)′ = 1/(2√𝑥) = ½ 𝑥^{−1/2}

5) 𝑓(𝑥) = 𝑒^𝑥 = exp(𝑥):
(d/d𝑥) 𝑒^𝑥 |_{𝑥=𝑐} = lim_{ℎ→0} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ = lim_{ℎ→0} (𝑒^{𝑐+ℎ} − 𝑒^𝑐)/ℎ = lim_{ℎ→0} 𝑒^𝑐(𝑒^ℎ − 1)/ℎ = 𝑒^𝑐 lim_{ℎ→0} (𝑒^ℎ − 1)/ℎ
Note that lim_{ℎ→0} (𝑒^ℎ − 1)/ℎ = 1, so the derivative equals 𝑒^𝑐. To justify this limit we use the following theorem.
Theorem: Suppose 𝑝(𝑥) ≤ 𝑓(𝑥) ≤ 𝑞(𝑥) for 𝑥 near 𝑐, with 𝑝(𝑐) = 𝑞(𝑐) = 𝐿, and 𝑝, 𝑞 differentiable at 𝑐 with 𝑝′(𝑐) = 𝑞′(𝑐) = 𝑚. Then 𝑓 is differentiable at 𝑐 and 𝑓′(𝑐) = 𝑚.

Proof:
We have 𝐿 = 𝑝(𝑐) ≤ 𝑓(𝑐) ≤ 𝑞(𝑐) = 𝐿, so 𝑓(𝑐) = 𝐿. For ℎ > 0:
(𝑝(𝑐 + ℎ) − 𝑝(𝑐))/ℎ ≤ (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ ≤ (𝑞(𝑐 + ℎ) − 𝑞(𝑐))/ℎ
So lim_{ℎ→0⁺} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ = 𝑚 via the sandwich theorem for limits. For ℎ < 0 the inequalities are reversed, and again lim_{ℎ→0⁻} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ = 𝑚 via the sandwich theorem for limits. Hence 𝑓′(𝑐) = 𝑚.

Applying this with 𝑝(𝑥) = 1 + 𝑥 ≤ 𝑒^𝑥 ≤ 1/(1 − 𝑥) = 𝑞(𝑥) near 0, where 𝑝′(0) = 𝑞′(0) = 1, gives lim_{ℎ→0} (𝑒^ℎ − 1)/ℎ = 1.
⇒ (𝑒^𝑥)′ = 𝑒^𝑥



Theorem: Suppose 𝑓: 𝐴 → 𝐵 is a bijection, so ∃ an inverse function 𝑓⁻¹: 𝐵 → 𝐴 s.t.
𝑓⁻¹ ∘ 𝑓(𝑥) = 𝑥 and 𝑓 ∘ 𝑓⁻¹(𝑦) = 𝑦
Suppose 𝑓 is differentiable at 𝑎 with 𝑓′(𝑎) ≠ 0, let 𝑏 = 𝑓(𝑎), and assume 𝑓⁻¹ is differentiable at 𝑏. Then:
(𝑓⁻¹)′(𝑏) = 1/𝑓′(𝑎)
This is known as the Inverse Rule.

Derivation: differentiate both sides of 𝑓⁻¹ ∘ 𝑓(𝑥) = 𝑥:
(d/d𝑥)[𝑓⁻¹ ∘ 𝑓(𝑥)] = (d/d𝑥)(𝑥) = 1
Let us differentiate the LHS at 𝑥 = 𝑎 using the chain rule:
(d/d𝑥)[𝑓⁻¹ ∘ 𝑓(𝑥)]|_{𝑥=𝑎} = (𝑓⁻¹)′(𝑓(𝑎)) ∙ 𝑓′(𝑎) = (𝑓⁻¹)′(𝑏) ∙ 𝑓′(𝑎)
Thus the derivative of the inverse function at the point 𝑏:
(𝑓⁻¹)′(𝑏) = 1/𝑓′(𝑎), 𝑓′(𝑎) ≠ 0

-

Attempt Homework 9


Example: 𝑓(𝑥) = ln 𝑥
𝑦 = ln 𝑥 ⇔ 𝑥 = 𝑒^𝑦
Take 𝑎 ∈ ℝ and 𝑏 = 𝑒^𝑎 > 0. By the inverse rule,
(ln)′(𝑏) = 1/(𝑒^𝑥)′|_{𝑥=𝑎} = 1/𝑒^𝑎 = 1/𝑏
Therefore (ln 𝑥)′ = 1/𝑥 for 𝑥 > 0.
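The inverse-rule prediction (ln)′(𝑏) = 1/𝑏 can be checked numerically with a difference quotient; the chosen point and step size are illustrative assumptions:

```python
import math

a = 0.7
b = math.exp(a)   # b = f(a) with f = exp, so f'(a) = e^a = b
h = 1e-6
# symmetric difference quotient approximating (ln)'(b)
num = (math.log(b + h) - math.log(b - h)) / (2.0 * h)
# the inverse rule predicts (ln)'(b) = 1 / f'(a) = 1 / b
assert abs(num - 1.0 / b) < 1e-8
```

The agreement holds at any 𝑏 > 0, matching (ln 𝑥)′ = 1/𝑥.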

Example: Take 𝑓(𝑥) = 𝑥^𝑛, 𝑛 ∈ ℝ, 𝑥 > 0. Then 𝑥^𝑛 = exp(𝑛 ln 𝑥), so by the chain rule
(𝑥^𝑛)′ = exp(𝑛 ln 𝑥) ∙ (𝑛/𝑥) = 𝑥^𝑛 ∙ 𝑛/𝑥 = 𝑛𝑥^{𝑛−1}

Definition:

Suppose 𝑆 ⊂ ℝ and 𝑓: 𝑆 → ℝ. 𝑓 has a global {maximum / minimum} at 𝑐 ∈ 𝑆 if
∀𝑥 ∈ 𝑆: {𝑓(𝑥) ≤ 𝑓(𝑐) / 𝑓(𝑥) ≥ 𝑓(𝑐)}

Definition:

𝑓: 𝑆 → ℝ has a local {maximum / minimum} at 𝑐 ∈ 𝑆 if ∃𝛿 > 0 s.t.
∀𝑥 ∈ 𝑆 ∩ (𝑐 − 𝛿, 𝑐 + 𝛿): {𝑓(𝑥) ≤ 𝑓(𝑐) / 𝑓(𝑥) ≥ 𝑓(𝑐)}

A local extremum is a local maximum or minimum.

If 𝑓: [𝑎, 𝑏] → ℝ, then in order to find a global maximum (or minimum) of 𝑓, we need to find all points at a local maximum (or minimum) and compare values at these points with 𝑓(𝑎) and 𝑓(𝑏).

Theorem: If 𝑐 ∈ (𝑎, 𝑏) is a local extremum of 𝑓 and 𝑓 is differentiable at 𝑐, then 𝑓′(𝑐) = 0.


Proof:
Suppose 𝑐 is a local minimum (the maximum case is similar), i.e. ∃𝛿 > 0 s.t. 𝑓(𝑐 + ℎ) ≥ 𝑓(𝑐) whenever |ℎ| < 𝛿.
We have:
𝑓′(𝑐) = lim_{ℎ→0⁺} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ = lim_{ℎ→0⁻} (𝑓(𝑐 + ℎ) − 𝑓(𝑐))/ℎ
Note that if |ℎ| < 𝛿, we have that 𝑓(𝑐 + ℎ) − 𝑓(𝑐) ≥ 0 because 𝑐 is a local minimum. Hence the quotient is ≥ 0 for ℎ > 0 and ≤ 0 for ℎ < 0, so the right limit is ≥ 0 and the left limit is ≤ 0. Since both equal 𝑓′(𝑐), we conclude 𝑓′(𝑐) = 0.

Remark: 𝑓(𝑥) = |𝑥| has a local minimum at 𝑥 = 0, but 𝑓′(0) does not exist because 𝑓 is not differentiable at 0; the theorem needs differentiability at 𝑐.
Theorem: Suppose 𝑓: [𝑎, 𝑏] → ℝ is continuous on [𝑎, 𝑏] and differentiable on (𝑎, 𝑏). Assume also that 𝑓(𝑏) = 𝑓(𝑎). Then:
∃𝑐 ∈ (𝑎, 𝑏) s.t. 𝑓′(𝑐) = 0

This is known as Rolle’s Theorem.

Proof:
By Theorem 2, 𝑓 attains its maximum and minimum at points 𝑥_max, 𝑥_min ∈ [𝑎, 𝑏].

Case 1:
𝑓(𝑥_max) = 𝑓(𝑥_min)
Then 𝑓(𝑥) is constant and 𝑓′(𝑥) = 0 ∀𝑥 ∈ (𝑎, 𝑏).

Case 2:
𝑓(𝑥_min) < 𝑓(𝑥_max)
Then at least one of the numbers 𝑓(𝑥_max) or 𝑓(𝑥_min) is different from 𝑓(𝑎) = 𝑓(𝑏), so the corresponding point lies in the open interval (𝑎, 𝑏). It is a local extremum, so by the previous theorem the derivative vanishes there.

Theorem: Suppose 𝑓: [𝑎, 𝑏] → ℝ is continuous on [𝑎, 𝑏] and differentiable on (𝑎, 𝑏). Then
∃𝑐 ∈ (𝑎, 𝑏) s.t. 𝑓′(𝑐) = (𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)

This is known as the Mean Value Theorem.

Proof:
Let 𝑚 = (𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎) and define 𝑔(𝑥) = 𝑓(𝑥) − 𝑚𝑥. We want to find 𝑐 with 𝑔′(𝑐) = 0, i.e. 𝑓′(𝑐) = 𝑚.

Also, we have
𝑔(𝑎) = 𝑓(𝑎) − 𝑚𝑎
= 𝑓(𝑎) − ((𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)) 𝑎
= (𝑏𝑓(𝑎) − 𝑎𝑓(𝑎) − 𝑎𝑓(𝑏) + 𝑎𝑓(𝑎))/(𝑏 − 𝑎)
= (𝑏𝑓(𝑎) − 𝑎𝑓(𝑏))/(𝑏 − 𝑎)

We also have:
𝑔(𝑏) = 𝑓(𝑏) − ((𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)) 𝑏
= (𝑏𝑓(𝑏) − 𝑎𝑓(𝑏) − 𝑏𝑓(𝑏) + 𝑏𝑓(𝑎))/(𝑏 − 𝑎)
= (𝑏𝑓(𝑎) − 𝑎𝑓(𝑏))/(𝑏 − 𝑎)

Thus, 𝑔(𝑎) = 𝑔(𝑏), so by Rolle’s theorem, ∃𝑐 ∈ (𝑎, 𝑏) s.t. 𝑔′(𝑐) = 0, i.e. 𝑓′(𝑐) − 𝑚 = 0, so 𝑓′(𝑐) = 𝑚.
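For a concrete instance of the theorem: with 𝑓(𝑥) = 𝑥² on [𝑎, 𝑏], the chord slope is (𝑏² − 𝑎²)/(𝑏 − 𝑎) = 𝑎 + 𝑏 and 𝑓′(𝑐) = 2𝑐, so the MVT point is 𝑐 = (𝑎 + 𝑏)/2. A short Python check (the interval is an arbitrary illustrative choice):

```python
# f(x) = x^2 on [a, b]: chord slope (b^2 - a^2)/(b - a) = a + b,
# and f'(c) = 2c, so the MVT point is the midpoint c = (a + b)/2.
a, b = 1.0, 3.0
chord_slope = (b * b - a * a) / (b - a)   # = a + b
c = (a + b) / 2.0                         # lies strictly inside (a, b)
assert a < c < b
assert abs(2.0 * c - chord_slope) < 1e-12
```

For non-quadratic 𝑓 the point 𝑐 is usually not the midpoint; the theorem only guarantees existence.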





Corollary: 𝑓: [𝑎, 𝑏] → ℝ is continuous on [𝑎, 𝑏] and differentiable on (𝑎, 𝑏). Then:
i.   𝑓′(𝑥) > 0 ∀𝑥 ∈ (𝑎, 𝑏) ⇒ 𝑓 is strictly increasing
ii.  𝑓′(𝑥) < 0 ∀𝑥 ∈ (𝑎, 𝑏) ⇒ 𝑓 is strictly decreasing
iii. 𝑓′(𝑥) = 0 ∀𝑥 ∈ (𝑎, 𝑏) ⇒ 𝑓 is constant

Proof:
Suppose 𝑎 ≤ 𝑥₁ < 𝑥₂ ≤ 𝑏. Then, by the mean value theorem, ∃𝑐 ∈ (𝑥₁, 𝑥₂) s.t.
𝑓′(𝑐) = (𝑓(𝑥₂) − 𝑓(𝑥₁))/(𝑥₂ − 𝑥₁) = 𝑚
Therefore, by considering the signs of the numerator and denominator of 𝑚, we have:
i.   𝑚 > 0 ⇒ 𝑓(𝑥₂) > 𝑓(𝑥₁)
ii.  𝑚 < 0 ⇒ 𝑓(𝑥₂) < 𝑓(𝑥₁)
iii. 𝑚 = 0 ⇒ 𝑓(𝑥₂) = 𝑓(𝑥₁), i.e. 𝑓 is constant
Suppose 𝑓′(𝑐) = 0, i.e. 𝑐 is a critical point. How do we determine whether 𝑐 is a local maximum or local minimum or neither?

Theorem: Suppose 𝑐 ∈ [𝑎, 𝑏] and 𝑓 is continuous on [𝑎, 𝑏].
i.  Suppose ∃𝛿 > 0 s.t. 𝑓′(𝑥) > 0 ∀𝑥 ∈ (𝑐 − 𝛿, 𝑐) and 𝑓′(𝑥) < 0 ∀𝑥 ∈ (𝑐, 𝑐 + 𝛿). Then 𝑐 is a local maximum.
ii. Suppose ∃𝛿 > 0 s.t. 𝑓′(𝑥) < 0 ∀𝑥 ∈ (𝑐 − 𝛿, 𝑐) and 𝑓′(𝑥) > 0 ∀𝑥 ∈ (𝑐, 𝑐 + 𝛿). Then 𝑐 is a local minimum.

Proof of (i):
On (𝑐 − 𝛿, 𝑐), 𝑓′ > 0, so 𝑓 is strictly increasing there; thus for 𝑥 ∈ (𝑐 − 𝛿, 𝑐) we have 𝑓(𝑥) < 𝑓(𝑐). On (𝑐, 𝑐 + 𝛿), 𝑓′ < 0, so 𝑓 is strictly decreasing there; thus for 𝑥 ∈ (𝑐, 𝑐 + 𝛿) we have 𝑓(𝑥) < 𝑓(𝑐). Hence 𝑐 is a local maximum.

The proof of (ii) is the same.


Theorem: Suppose 𝑓: [𝑎, 𝑏] → ℝ is twice continuously differentiable (i.e. 𝑓″ exists and is continuous). Suppose 𝑐 ∈ (𝑎, 𝑏) is a critical point (𝑓′(𝑐) = 0). Then:
i.  𝑓″(𝑐) > 0 ⇒ 𝑐 is a local minimum
ii. 𝑓″(𝑐) < 0 ⇒ 𝑐 is a local maximum

Proof of (i):
We have 𝑓″(𝑐) > 0. Since 𝑓″ is continuous, ∃𝛿 > 0 s.t. 𝑓″(𝑥) > 0 ∀𝑥 ∈ (𝑐 − 𝛿, 𝑐 + 𝛿).
But 𝑓″(𝑥) = (𝑓′)′(𝑥), thus 𝑓′(𝑥) is strictly increasing on (𝑐 − 𝛿, 𝑐 + 𝛿). Since 𝑓′(𝑐) = 0, we get 𝑓′(𝑥) < 0 on (𝑐 − 𝛿, 𝑐) and 𝑓′(𝑥) > 0 on (𝑐, 𝑐 + 𝛿).
Thus, 𝑐 is a local minimum by the previous theorem.

Suppose 𝑐 is a critical point and 𝑓′(𝑐) = 𝑓″(𝑐) = ⋯ = 𝑓^{(𝑛−1)}(𝑐) = 0, and 𝑓^{(𝑛)}(𝑐) ≠ 0, then:
i.   If 𝑛 is odd, then 𝑐 is not a local extremum
ii.  If 𝑛 is even and 𝑓^{(𝑛)}(𝑐) > 0, then 𝑐 is a local minimum
iii. If 𝑛 is even and 𝑓^{(𝑛)}(𝑐) < 0, then 𝑐 is a local maximum

Consider 𝑦 = 𝑥^𝑛 at 𝑐 = 0:
(i)  𝑛 is odd: 0 is not a local extremum
(ii) 𝑛 is even: 0 is a local minimum, and 𝑓^{(𝑛)}(0) = 𝑛! > 0

Idea of Proof:
Consider the case 𝑛 = 3: 𝑓′(0) = 𝑓″(0) = 0 and 𝑓‴(0) ≠ 0, say 𝑓‴(0) > 0. Then 𝑓″ is strictly increasing near 0, so 𝑓″ < 0 to the left of 0 and 𝑓″ > 0 to the right. Hence 𝑓′ has a local minimum at 0 with 𝑓′(0) = 0, so 𝑓′ ≥ 0 on both sides, 𝑓 is increasing through 0, and 0 is not a local extremum. We should now get a general idea of how to prove (i), (ii) & (iii).

Theorem (Inverse Function Theorem): Suppose 𝑓 is continuously differentiable near 𝑥₀ with 𝑓′(𝑥₀) ≠ 0, and let 𝑦₀ = 𝑓(𝑥₀). Then ∃𝛿 > 0 s.t. 𝑓 maps the interval (𝑥₀ − 𝛿, 𝑥₀ + 𝛿) bijectively onto an open interval 𝐼 ⊂ ℝ.

The inverse function 𝑓⁻¹: 𝐼 → (𝑥₀ − 𝛿, 𝑥₀ + 𝛿) is differentiable at 𝑦₀ and (𝑓⁻¹)′(𝑦₀) = 1/𝑓′(𝑥₀), where 𝑓′(𝑥₀) ≠ 0.