Infinite Sequences

These notes cover infinite sequences of real numbers and their limits. They assume familiarity with the basic definition of a sequence.

  1. Properties of Sequences
    1. Convergence
      1. Historical Background
      2. Formal Definition
    2. Limit Definition
    3. Uniqueness of Limits
    4. Bounded Sequences
  2. Monotonicity
  3. Subsequences
  4. Sandwich Theorem
  5. The Bolzano-Weierstrass Theorem

Properties of Sequences

A few examples of sequences:

example. Nn=1∞ 1n2=(1,14,19,116,…){\Sq{n=1}{\infty}\dfrac{1}{n^2}=\ar{1,\frac{1}{4},\frac{1}{9},\frac{1}{16},\ldots}}

example. Nn=0∞ cos(nπ2)=(1,0,−1,0,1,0,−1,0,1,0,−1,…){\Sq{n=0}{\infty}{\cos \ar{\dfrac{n\pi}{2}}}=\ar{1,0,-1,0,1,0,-1,0,1,0,-1,\ldots}}
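Viewing a sequence as a rule that assigns a real number to each index, the two examples above can be generated directly in code. This is a minimal sketch; the function names a and b are mine, purely for illustration.

```python
import math

def a(n: int) -> float:
    """n-th term of (1/n^2), for n = 1, 2, 3, ..."""
    return 1 / n**2

def b(n: int) -> float:
    """n-th term of cos(n*pi/2), for n = 0, 1, 2, ..."""
    return math.cos(n * math.pi / 2)

print([a(n) for n in range(1, 6)])             # [1.0, 0.25, 0.111..., 0.0625, 0.04]
print([round(b(n), 12) for n in range(0, 6)])  # 1, 0, -1, 0, 1, 0 (up to floating-point noise)
```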

Convergence

Historical Background

To better understand convergence — and ultimately, limits — it's best to put these ideas in context. The notion of convergence is all about control. What we want to be able to say is: Given a term an{a_n} of a sequence and a real number L,{L,} after some point N,{N,} the distance between an{a_n} and L{L} is as small as we like, even if an≠L.{a_n \neq L.} Why do we need this control? Think about the sequence (1/n).{(1/n).} As n{n} gets extremely large, (1/n){(1/n)} will get smaller and smaller. At some point, we get some number 0.0…01,{0.0 \ldots 01,} comprising so many zeros that we'd sooner die writing them before ever seeing the 1. We want to be able to say that (1/n){(1/n)} converges to 0.{0.} That is, as n{n} gets very large (formally, "approaches infinity"), (1/n){(1/n)} converges to 0.

It seems intuitive to say that (1/n){(1/n)} converges to 0,{0,} since we can see that (1/n){(1/n)} gets smaller and smaller as n{n} gets larger and larger. But in mathematics, a thousand experiments do not make a theorem, nor do a million. The only way to firmly establish that (1/n){(1/n)} converges to 0 is to prove it.

Now, why do we need sequences to converge? One reason is that without the notion of convergence, we'd never know with certainty if our approximations are even remotely correct. We can never completely write down √2,{\sqrt{2},} but we can approximate it with a sequence, provided the sequence converges to √2.{\sqrt{2}.} But there are other reasons. Some mathematical statements are completely false if we don't have the notion of convergence. The sum ∑n=0∞xn{\sum_{n=0}^{\infty} x^n} has the closed-form formula 11−x{\frac{1}{1-x}} only when the series converges, that is, when ∣x∣<1.{\abs{x} \lt 1.} Without the notion of convergence, we could set x=2{x=2} and conclude 1+2+4+8+…=−1,{1+2+4+8+\ldots=-1,} which is most definitely not what we want. Another reason is Zeno's Paradox: Suppose a{a} and b{b} are separated by 10 meters. To get from a{a} to b,{b,} we must walk 5 meters. Then we must walk 2.5 meters. Then we must walk another 1.25, and so on. We never get to b!{b!} We can solve these problems by defining the notion of convergence.1

When Newton and Leibniz began working on calculus, there was no notion of convergence. Instead, there were infinitesimals — quantities greater than zero yet smaller than every positive real number, which aren't real numbers themselves. For example, an infinitesimal lies between 0.99999…{0.99999\ldots} and 1, but is not a real number. Almost immediately, we can see how this is difficult to think about. Mathematicians like Newton and Leibniz just "knew" when an infinitesimal would appear and disappear in their derivations. It was intuition that guided them, rather than some rigorous notion. The definition of convergence is aimed at making this idea more rigorous.

Formal Definition

If we think about the definition of a limit, the key requirement is "small." But the word "small" varies in meaning, depending on the listener and context. A thousand U.S. dollars might be small to a billionaire, but it certainly isn't small to those in abject poverty. Eight miles might seem small to marathon runners, but not to those who only run once every few years. Accordingly, we need a definition that appeals to everyone's idea of "small." Let's begin by defining some notation.

distance. Given x,y∈R,{x,y \in \reals,} we define the notation d(y,x):=∣y−x∣.{\d(y,x):=\abs{y-x}.}

closeness. Given x,y,ε∈R{x,y,\varepsilon \in \reals} and ε>0,{\varepsilon \gt 0,} we say that x{x} and y{y} are ε{\varepsilon}-close if and only if d(x,y)≤ε.{\d(x,y) \le \varepsilon.} If x{x} and y{y} are ε{\varepsilon}-close, we write x≈εy.{x \ec y.} Otherwise, we write x≉εy.{x \not\ec y.}
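These two definitions translate directly into code. The sketch below is only an illustration; the names d and is_close are mine, not standard library functions.

```python
def d(x: float, y: float) -> float:
    """Distance between x and y: d(y, x) := |y - x|."""
    return abs(y - x)

def is_close(x: float, y: float, eps: float) -> bool:
    """x ≈_ε y  iff  d(x, y) <= ε, for a fixed ε > 0."""
    return d(x, y) <= eps

print(is_close(0.5, 0.4, 0.2))   # True:  the distance (about 0.1) is at most 0.2
print(is_close(0.5, 0.4, 0.05))  # False: the distance (about 0.1) exceeds 0.05
```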

convergence. Let ε>0{\varepsilon \gt 0} be a real number, let L{L} be a real number, and let ( an )n=N∞{\seq{a_n}{n=N}{\infty}} be a sequence of real numbers. We write ( an )n=N∞≈εL{\seq{a_n}{n=N}{\infty} \ec L} iff for all n≥N,{n \ge N,} it is true that an≈εL.{a_n \ec L.} We say that a sequence ( an )n=m∞{\seq{a_n}{n=m}{\infty}} is eventually ε{\varepsilon}-close to L{L} iff there exists an N≥m{N \ge m} such that ( an )n=N∞≈εL.{\seq{a_n}{n=N}{\infty} \ec L.} We say that ( an )n=m∞{\seq{a_n}{n=m}{\infty}} converges to L{L} iff, for every real ε>0,{\varepsilon \gt 0,} ( an )n=m∞{\seq{a_n}{n=m}{\infty}} is eventually ε{\varepsilon}-close to L.{L.}

It's helpful to view different implementations of this definition. Below is the standard definition found in most textbooks:

convergence. We say that a sequence (an){(a_n)} converges to a number L∈R{L \in \reals} if, and only if, for every ε>0,{\varepsilon \gt 0,} there exists an index N{N} such that, given any index n≥N,{n \ge N,} the relation ∣an−L∣≤ε{\abs{a_n - L} \le \varepsilon} is true.

Let's be very clear about what each of these variables are. What is (an)?{(a_n)?} (an){(a_n)} is a sequence. It takes natural number indices, and pairs them up with real numbers — it establishes a specific ordering. What is L?{L?} L{L} is a real number. It is not necessarily a term of the sequence (otherwise we wouldn't need to look for a limit in the first place). What is n?{n?} n{n} is an index. What is N?{N?} N{N} is also an index. What is ε?{\varepsilon?} ε{\varepsilon} is a real number. What is an?{a_n?} an{a_n} is a term of the sequence. What does it mean when (an){(a_n)} converges? That for every ε,{\varepsilon,} there's a threshold index N{N} such that all of the sequence's terms from N{N} onward are ε{\varepsilon}-close to L.{L.} So, what does this tell us? If we want to prove that a sequence is divergent, we want to find, for every candidate L,{L,} an ε{\varepsilon} so small that no threshold N{N} works: beyond any N,{N,} there is always some term with ∣an−L∣>ε.{\abs{a_n - L} \gt \varepsilon.}
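Before turning to an example of divergence, here is how the definition can be explored numerically. The sketch below is a finite check only; as noted earlier, no number of experiments makes a theorem, so this suggests convergence but proves nothing. The helper name find_threshold is my own, purely illustrative.

```python
def a(n: int) -> float:
    return 1 / n

L = 0.0  # candidate limit for (1/n)

def find_threshold(eps, search_up_to=10**6):
    """Return the first index N (within the search range) from which every
    sampled term satisfies |a_n - L| <= eps. Only finitely many terms are
    sampled, so this suggests convergence; it cannot prove it."""
    for N in range(1, search_up_to):
        if all(abs(a(n) - L) <= eps for n in range(N, N + 1000)):
            return N
    return None

for eps in (0.5, 0.01, 0.0001):
    print(eps, find_threshold(eps))  # roughly: 0.5 -> 2, 0.01 -> 100, 0.0001 -> 10000
```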

example. The sequence S=( (−1)n )n=0∞{S=\seq{(-1)^n}{n=0}{\infty}} is divergent. Assume S{S} is convergent. If S{S} is convergent, then S{S} converges to some L∈R.{L \in \reals.} That is, the terms of S{S} get arbitrarily close to L.{L.} Fix ε=12.{\varepsilon=\frac{1}{2}.} Then we can choose an N∈N{N \in \nat} such that:

∀n∈N,  n≥N  ⟹  ∣(−1)n−L∣≤12<1. \forall n \in \nat,~~n \ge N \implies \abs{(-1)^n - L} \le \frac{1}{2} \lt 1.

Apply this with n=N{n = N} and n=N+1.{n = N+1.} Then ∣(−1)N−L∣<1{\abs{(-1)^N - L} \lt 1} and ∣(−1)N+1−L∣<1.{\abs{(-1)^{N+1}-L}\lt 1.} Since (−1)N{(-1)^N} and (−1)N+1{(-1)^{N+1}} are 1 and -1 in some order (the only two values S{S} takes), it follows that ∣1−L∣<1{\abs{1-L} \lt 1} and ∣−1−L∣<1.{\abs{-1-L} \lt 1.} From the definition of the absolute value, we have

(a)   L−1<1<L+1(b)   L−1<−1<L+1 \eqs{ &(a)~~~L-1 \lt 1 \lt L + 1 \\ &(b)~~~L-1 \lt -1 \lt L+1 }

Subtract 1{1} throughout (a),{(a),} and we get 0<L.{0 \lt L.} Add 1{1} throughout (b),{(b),} and we get L<0.{L \lt 0.} We have a contradiction. L{L} cannot be both less than 0 and greater than 0. Our initial assumption is false, so S{S} must be divergent.
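To see the obstruction concretely, the following sketch (purely illustrative; the variable names are mine) scans candidate limits L between −2 and 2 and confirms that, with ε = 1/2, no candidate is simultaneously within ε of both 1 and −1.

```python
eps = 0.5

# The tail of ((-1)^n) keeps hitting both 1 and -1, so a limit L would have
# to be within eps of both values at the same time.
candidates = [k / 100 for k in range(-200, 201)]   # L in [-2, 2], step 0.01
viable = [L for L in candidates
          if abs(1 - L) <= eps and abs(-1 - L) <= eps]
print(viable)  # []  (no candidate limit survives, matching the proof above)
```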

Limit Definition

definition. Let L{L} be a real number, and let ( an )n=N∞{\seq{a_n}{n=N}{\infty}} be a real sequence starting at index N.{N.} If an{a_n} converges to L,{L,} we call L{L} the limit of an,{a_n,} write lim⁡n→∞an=L,{\ll{n}{\infty}a_n = L,} and say that an{a_n} is convergent. If an{a_n} does not converge to any real number, we write ∄L[lim⁡n→∞an=L],{\lnex L[\ll{n}{\infty}a_n=L],} and say that an{a_n} is divergent.

Uniqueness of Limits

limit uniqueness theorem. Let ( an )n=m∞{\seq{a_n}{n=m}{\infty}} be a convergent real sequence and let A,B∈R.{A,B \in \reals.} It follows that lim⁡n→∞an=A{\ll{n}{\infty}a_n = A} and lim⁡n→∞an=B{\ll{n}{\infty} a_n = B} if, and only if, A=B.{A = B.} That is,

[(lim⁡n→∞an=A)∧(lim⁡n→∞an=B)]⇔(A=B). [(\ll{n}{\infty}a_n = A) \land (\ll{n}{\infty}a_n = B)] \iff (A = B).

proof. Suppose limn→∞( an )n=m∞=A{\ll{n}{\infty}\seq{a_n}{n=m}{\infty}=A} and limn→∞( an )n=m∞=B{\ll{n}{\infty}\seq{a_n}{n=m}{\infty}=B} and A≠B.{A \neq B.} Since A≠B,{A \neq B,} let 0<ε=∣A−B∣3.{0 \lt \varepsilon = \frac{\abs{A-B}}{3}.} Because limn→∞( an )n=m∞=A,{\ll{n}{\infty}\seq{a_n}{n=m}{\infty}=A,} the sequence is eventually ε{\varepsilon}-close to A{A} by definition. This implies that there exists an N≥m{N \ge m} such that d(an,A)≤ε{\d(a_n,A) \le \varepsilon} for all n≥N.{n \ge N.} Likewise, since limn→∞( an )n=m∞=B,{\ll{n}{\infty}\seq{a_n}{n=m}{\infty}=B,} there exists an M≥m{M \ge m} such that d(an,B)≤ε{\d(a_n,B) \le \varepsilon} for all n≥M.{n \ge M.} If n=max{ N,M },{n = \max\set{N,M},} then d(an,A)≤ε{\d(a_n,A) \le \varepsilon} and d(an,B)≤ε,{\d(a_n,B) \le \varepsilon,} so by the triangle inequality, ∣A−B∣≤d(an,A)+d(an,B)≤2ε.{\abs{A-B} \le \d(a_n,A) + \d(a_n,B) \le 2\varepsilon.} Given that ε=∣A−B∣3,{\varepsilon = \frac{\abs{A-B}}{3},} we have ∣A−B∣≤2∣A−B∣3.{\abs{A-B} \le 2 \frac{\abs{A-B}}{3}.} This contradicts the fact that ∣A−B∣>0,{\abs{A-B} \gt 0,} which follows from A≠B.{A \neq B.} Hence, it cannot be true that a sequence of reals an{a_n} converges to both A{A} and B{B} with A≠B.{A \neq B.} Therefore, limn→∞an=A{\ll{n}{\infty}a_n = A} and limn→∞an=B{\ll{n}{\infty}a_n = B} if, and only if, A=B.{A = B.} ■{\bs}

The limit uniqueness theorem has a particularly useful implication.

corollary. All real sequences have, at most, one limit L∈R.{L \in \reals.}

This stems from the fact that a real sequence will either diverge (no limit exists) or converge (a limit exists). And by the limit uniqueness theorem, if a real sequence does converge, it has exactly one limit L.{L.}

theorem. If a sequence (an)n=m∞{(a_n)_{n=m}^{\infty}} is convergent, then (an)n=m∞{(a_n)_{n=m}^{\infty}} is a Cauchy sequence.

proof. Suppose an{a_n} converges to L∈R.{L \in \reals.} Let ε>0.{\varepsilon \gt 0.} Then there exists an N∈N{N \in \nat} such that, for all n≥N,{n \ge N,} we have ∣an−L∣≤ε2.{\abs{a_n - L} \le \frac{\varepsilon}{2}.} Now take any indices n,m≥N.{n,m \ge N.} By the triangle inequality, ∣an−am∣≤∣an−L∣+∣L−am∣≤ε2+ε2=ε.{\abs{a_n - a_m} \le \abs{a_n - L} + \abs{L - a_m} \le \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.} Since ε{\varepsilon} was arbitrary, the terms of an{a_n} are eventually ε{\varepsilon}-close to one another for every ε>0,{\varepsilon \gt 0,} which is precisely what it means for an{a_n} to be a Cauchy sequence. ■{\bs}
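A finite snapshot of this behavior can be inspected in code. The sketch below is illustrative only, since checking finitely many pairs of terms proves nothing about the full tail; it measures the largest gap between sampled terms in a tail window of (1/n).

```python
def a(n: int) -> float:
    return 1 / n

def max_gap(N: int, window: int = 500) -> float:
    """Largest |a_n - a_m| over the sampled indices N <= n, m < N + window."""
    terms = [a(n) for n in range(N, N + window)]
    return max(terms) - min(terms)   # equals the largest pairwise gap in the window

for N in (10, 100, 1000, 10000):
    print(N, max_gap(N))  # the gaps shrink as N grows, as a Cauchy tail should
```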

Bounded Sequences

The sequence ( (−1)n )n=1∞{\seq{(-1)^n}{n=1}{\infty}} isn't convergent, but it is bounded.

sequence bounded above. A sequence (an)n=m∞{(a_n)_{n=m}^{\infty}} is bounded above if, and only if, there exists a constant C∈R{C \in \reals} such that, for all indices n,{n,} the relation an≤C{a_n \le C} is true. To denote the fact that (an)n=m∞{(a_n)_{n=m}^{\infty}} is bounded above, we write 01(an)n=m∞{^1_0{(a_n)_{n=m}^{\infty}}}

sequence bounded below. A sequence (an)n=m∞{(a_n)_{n=m}^{\infty}} is bounded below if, and only if, there exists a constant C∈R{C \in \reals} such that, for all indices n,{n,} the relation C≤an{C \le a_n} is true. To denote the fact that (an)n=m∞{(a_n)_{n=m}^{\infty}} is bounded below, we write 10(an)n=m∞{^0_1{(a_n)_{n=m}^{\infty}}}

bounded sequence. A sequence (an)n=m∞{(a_n)_{n=m}^{\infty}} is bounded if, and only if, it is bounded above and bounded below. To denote the fact that (an)n=m∞{(a_n)_{n=m}^{\infty}} is bounded, we write 11(an)n=m∞{^1_1{(a_n)_{n=m}^{\infty}}}
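These conditions can be probed on a finite prefix. The sketch below is illustrative only, since a finite sample can never certify that a bound holds for every n; it reports the smallest and largest of the first few hundred terms of ((−1)^n).

```python
def a(n: int) -> int:
    return (-1) ** n

prefix = [a(n) for n in range(1, 501)]
lower, upper = min(prefix), max(prefix)
# For this sequence the bounds -1 <= a_n <= 1 really do hold for every n,
# but in general a sampled prefix only suggests candidate constants C.
print(lower, upper)  # -1 1
```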

Monotonicity

monotonicity. Let an{a_n} be a sequence. For all n∈N:{n \in \nat:} We say that an{a_n} is strictly increasing iff an<an+1;{a_{n} \lt a_{n+1};} that an{a_n} is increasing iff an≤an+1;{a_n \le a_{n+1};} that an{a_n} is strictly decreasing iff an>an+1;{a_n \gt a_{n+1};} that an{a_n} is decreasing iff anâ‰Ĩan+1.{a_n \ge a_{n+1}.} If an{a_n} is increasing or decreasing or both, then an{a_n} is monotonic. If an{a_n} is neither increasing nor decreasing, we say that an{a_n} is non-monotonic.
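The four conditions translate directly into comparisons of consecutive terms. A minimal sketch follows; it inspects only a finite prefix, so it reports what the prefix looks like rather than establishing monotonicity for all n.

```python
def classify(terms) -> str:
    """Classify a finite prefix by comparing consecutive terms."""
    pairs = list(zip(terms, terms[1:]))
    if all(x < y for x, y in pairs):
        return "strictly increasing (on this prefix)"
    if all(x <= y for x, y in pairs):
        return "increasing (on this prefix)"
    if all(x > y for x, y in pairs):
        return "strictly decreasing (on this prefix)"
    if all(x >= y for x, y in pairs):
        return "decreasing (on this prefix)"
    return "non-monotonic (on this prefix)"

print(classify([1 / n for n in range(1, 50)]))      # strictly decreasing (on this prefix)
print(classify([(-1) ** n for n in range(1, 50)]))  # non-monotonic (on this prefix)
```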

Subsequences

subsequence. Let (an)n=1∞{(a_n)_{n=1}^{\infty}} and (bn)n=1∞{(b_n)_{n=1}^{\infty}} be sequences in R.{\reals.} We say that (bn){(b_n)} is a subsequence of (an){(a_n)} if there exists a strictly increasing function f:N→N{f: \nat \to \nat} such that, for every n∈N,{n \in \nat,} it follows that bn=af(n).{b_n = a_{f(n)}.}
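In code, a subsequence is just the parent sequence composed with a strictly increasing index function f. A minimal sketch, where the choice f(n) = 2n is only an example:

```python
def a(n: int) -> float:
    return (-1) ** n / n          # the parent sequence (a_n)

def f(n: int) -> int:
    return 2 * n                  # strictly increasing: f(1) < f(2) < ...

def b(n: int) -> float:
    return a(f(n))                # b_n = a_{f(n)}, the subsequence

print([a(n) for n in range(1, 7)])  # [-1.0, 0.5, -0.333..., 0.25, -0.2, 0.166...]
print([b(n) for n in range(1, 7)])  # [0.5, 0.25, 0.166..., 0.125, 0.1, 0.083...]
```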

Sandwich Theorem

Also known as the Squeeze Theorem, the Sandwich Theorem allows us to determine whether a given sequence converges, using sequences that we already know to be convergent.

sandwich theorem. Let (an),{(a_n),} (bn),{(b_n),} and (cn){(c_n)} be sequences, with limn→∞(an)=L{\ll{n}{\infty}(a_n)=L} and limn→∞(bn)=L.{\ll{n}{\infty}(b_n)=L.} If an≤cn≤bn{a_n \le c_n \le b_n} for all n,{n,} then limn→∞(cn)=L.{\ll{n}{\infty}(c_n)=L.}
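For instance, take c_n = sin(n)/n. Since −1 ≤ sin(n) ≤ 1, it is squeezed between a_n = −1/n and b_n = 1/n, both of which converge to 0, so the theorem gives lim c_n = 0. The sketch below is a numerical illustration of the sandwich condition, not a proof.

```python
import math

for n in (1, 10, 100, 1000, 10000):
    a_n = -1 / n                 # lower sequence, converges to 0
    c_n = math.sin(n) / n        # squeezed sequence
    b_n = 1 / n                  # upper sequence, converges to 0
    assert a_n <= c_n <= b_n     # the sandwich condition holds at each sampled n
    print(f"{n:>6}  {a_n:+.6f}  {c_n:+.6f}  {b_n:+.6f}")
```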

The Bolzano-Weierstrass Theorem

A key theorem in real analysis is the Bolzano-Weierstrass Theorem. In short: Every bounded sequence of real numbers contains a convergent subsequence. For instance, the bounded sequence ( (−1)n )n=1∞{\seq{(-1)^n}{n=1}{\infty}} from earlier diverges, yet its even-indexed subsequence (1,1,1,…){\ar{1,1,1,\ldots}} converges to 1.

bolzano-weierstrass theorem. Let (an){(a_n)} be a sequence in R.{\reals.} If (an){(a_n)} is bounded, then (an){(a_n)} has a convergent subsequence.
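One standard proof repeatedly halves a bounding interval, always keeping a half that still contains infinitely many terms, and selects one term (with an ever-larger index) from each stage. The sketch below mimics that construction on a finite sample of the bounded sequence (cos(n)); it is illustrative only, since with finitely many sampled terms, "contains infinitely many" has to be replaced by "contains at least half of what remains."

```python
import math

# A bounded sequence: -1 <= cos(n) <= 1 for every index n.
sample = [(n, math.cos(n)) for n in range(1, 20001)]   # (index, term) pairs
lo, hi = -1.0, 1.0

chosen = []        # terms of the subsequence under construction
last_index = 0
for _ in range(12):
    mid = (lo + hi) / 2
    left = [(n, x) for n, x in sample if lo <= x <= mid]
    right = [(n, x) for n, x in sample if mid < x <= hi]
    # Keep a half that still holds "many" terms (infinitely many in the real
    # proof; the larger half in this finite mimicry), and shrink the interval.
    lo, hi, sample = (lo, mid, left) if len(left) >= len(right) else (mid, hi, right)
    # Pick the first remaining term whose index exceeds the last chosen index,
    # so that the chosen indices are strictly increasing.
    later = [(n, x) for n, x in sample if n > last_index]
    if not later:
        break      # the finite sample ran out; the genuine proof never does
    n, x = later[0]
    chosen.append((n, x))
    last_index = n

print(chosen)      # the chosen terms settle into a shrinking nested interval
```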

Footnotes

  1. Whether arguments built on the definition of convergence solve Zeno's paradox is not a settled matter, although most mathematicians and philosophers have settled on a theory based on convergence, called the standard solution. Doubts remain because of the solution's implications: Finite distances can contain an infinite number of points; a whole may be smaller than one of its parts; the sum of an infinite series can be finite; for each place along a line, there may not be a next place; and several others. Note further that the standard solution to Zeno's paradox is based on axioms that we decided on. The solution can be broken down by attacking the consistency of those very axioms. This is a topic left to the notes on logic. ↩