Originally Posted by **Grandad**

Hi Kalashnikov

Talking about convergence of sequences is a bit like doing differentiation from first principles. (Did you understand that when you first did it?) It's all about what we mean by 'limiting values'.

When we start to learn about differentiation we say things like:

$\displaystyle \frac{dy}{dx}=\lim_{\delta x \to 0}\frac{\delta y}{\delta x}$

So what do all these symbols mean?

Well, they mean that you can make the value of $\displaystyle \frac{\delta y}{\delta x}$ (which represents the gradient of a chord) as close to $\displaystyle \frac{dy}{dx}$ (the gradient of the tangent) as you like by taking $\displaystyle \delta x$ close enough to zero.

It's important that you understand that last sentence, so **read it again!**

Let me explain it a bit more.

You can't actually put $\displaystyle \delta x$ equal to zero in an expression like $\displaystyle \frac{\delta y}{\delta x}$ to find the gradient of a tangent, because that would violate the First Commandment of Arithmetic: Thou shalt not divide by zero. So you have to sneak up on the value by letting $\displaystyle \delta x$ get ever closer to zero without actually ever getting there.
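To make that "sneaking up" concrete, here is a small numerical sketch (my own example, not part of Grandad's post) using $\displaystyle y = x^2$ at $\displaystyle x = 3$, where the true gradient of the tangent is $\displaystyle \frac{dy}{dx} = 2x = 6$:

```python
# Chord gradients (delta y / delta x) for y = x^2 at x = 3.
# The gradient of the tangent there is dy/dx = 2x = 6.
def f(x):
    return x * x

x = 3.0
for dx in [1.0, 0.1, 0.01, 0.001]:
    chord_gradient = (f(x + dx) - f(x)) / dx   # gradient of a chord
    print(dx, chord_gradient)

# The chord gradient gets as close to 6 as we like as dx shrinks,
# but we never actually set dx = 0 -- that would divide by zero.
```

Run it and you will see the chord gradients 7, 6.1, 6.01, 6.001, … sneaking up on 6 without ever reaching it.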

And it's like that with convergence of sequences, except that we're now talking about 'infinity' (whatever that is) rather than zero. So, if we want to say that the sequence $\displaystyle a_1, a_2,...$ converges to the limit $\displaystyle a$, we would write:

$\displaystyle \lim_{n \to \infty}a_n=a$

and this means something very similar to the differentiation case, which is: *You can make the value of $\displaystyle a_n$ as close to $\displaystyle a$ as you like, by taking $\displaystyle n$ large enough.*

Again, that sentence is really important, so **read it again!**
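For what it's worth, the formal version of that sentence (which you'll meet when you study analysis properly) looks like this:

$\displaystyle \lim_{n \to \infty}a_n=a$ means: for every $\displaystyle \epsilon > 0$ there is an $\displaystyle N$ such that $\displaystyle |a_n - a| < \epsilon$ whenever $\displaystyle n \ge N$.

Here $\displaystyle \epsilon$ is "how close you would like to get", and $\displaystyle N$ is "how large is large enough".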

Let me give you a simple example to show you what I mean. Look at the successive sums of the series:

$\displaystyle \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...$

The successive sums are:

$\displaystyle \frac{1}{2}, \frac{3}{4}, \frac{7}{8}, \frac{15}{16},...$

and it's pretty obvious that this sequence of numbers *converges* to the number 1. If we want to write this in formal notation, we should say:

Let $\displaystyle S_n$ be the sum of the first $\displaystyle n$ terms of the series $\displaystyle \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...$. Then $\displaystyle S_n = 1-\frac{1}{2^n}$, and so $\displaystyle \lim_{n \to \infty}S_n=1$

So what does this mean, again? Right: *you can make $\displaystyle S_n$ as close to 1 as you like by taking $\displaystyle n$ large enough.* Can you actually make $\displaystyle S_n$ equal to 1? No. But you can get as close as you like. In other words, if someone challenges you to get within 0.000...(a million 0's)...01 of the answer 1, you could still do it by taking a value of $\displaystyle n$ big enough.
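Here's a quick numerical version of that challenge (my own sketch, not part of the original argument). The partial sums satisfy $\displaystyle S_n = 1 - \frac{1}{2^n}$, so for any tolerance we can find the first $\displaystyle n$ that gets within it of the limit 1:

```python
# Partial sums S_n of the series 1/2 + 1/4 + 1/8 + ...
# Closed form: S_n = 1 - 1/2**n, so the sums creep up towards 1.
def partial_sum(n):
    return sum(1 / 2**k for k in range(1, n + 1))

eps = 1e-6                      # the challenger's tolerance
n = 1
while abs(1 - partial_sum(n)) >= eps:
    n += 1
# n is now the first index that gets S_n within eps of the limit 1:
# n = 20, since 1/2**20 < 10**-6 <= 1/2**19.
```

Make the tolerance a million times smaller and the loop still terminates; it just needs a bigger $\displaystyle n$. That is exactly what the limit statement promises.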

That's all it means: no more, no less.

So, how do you do your first question?

You need to show that you can get as close to $\displaystyle a+b$ as you like by taking $\displaystyle n$ large enough. So how close would you like to get? Within a small amount $\displaystyle \epsilon $, say? OK, then you can argue something like this:

By definition, I know that if I take $\displaystyle n$ large enough I can make the difference between $\displaystyle a_n$ and $\displaystyle a$ less than $\displaystyle \frac{1}{2}\epsilon$. Suppose this happens whenever $\displaystyle n \ge r$. And I can make the difference between $\displaystyle b_n$ and $\displaystyle b$ less than $\displaystyle \frac{1}{2}\epsilon$ too, by taking $\displaystyle n$ large enough, say whenever $\displaystyle n \ge s$. Then if I choose $\displaystyle n$ to be at least the larger of $\displaystyle r$ and $\displaystyle s$, the difference between $\displaystyle a_n +b_n$ and $\displaystyle a+b$ is at most the sum of the two separate differences, which is less than $\displaystyle \frac{1}{2}\epsilon + \frac{1}{2}\epsilon = \epsilon$. And that completes the proof.
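You can watch this argument work on a concrete pair of sequences. The sequences below are my own choice of example (they aren't from the question): $\displaystyle a_n = 2 + \frac{1}{n} \to 2$ and $\displaystyle b_n = 3 + \frac{1}{n^2} \to 3$, so $\displaystyle a_n + b_n$ should converge to 5.

```python
import math

# a_n = 2 + 1/n -> 2  and  b_n = 3 + 1/n**2 -> 3,
# so a_n + b_n -> 5.  (Sequences chosen just for illustration.)
def a(n): return 2 + 1 / n
def b(n): return 3 + 1 / n**2

eps = 0.001
r = math.ceil(2 / eps) + 1              # |a_n - 2| < eps/2 for all n >= r
s = math.ceil(math.sqrt(2 / eps)) + 1   # |b_n - 3| < eps/2 for all n >= s
n = max(r, s)                           # the larger of the two thresholds

assert abs(a(n) - 2) < eps / 2
assert abs(b(n) - 3) < eps / 2
assert abs((a(n) + b(n)) - 5) < eps     # the two halves add up to less than eps
```

Notice that the two sequences need different thresholds ($\displaystyle r$ and $\displaystyle s$), which is exactly why the proof takes the larger of the two.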

Have a go at the second one in a similar way.

Grandad