1. ## Convergence

I'm actually completely lost with convergence of sequences. Let's just say our lecturer isn't the best lol.

Here's one of the easier questions I need to do.

Given that the sequences $\displaystyle a_1, a_2,...$ and $\displaystyle b_1, b_2,...$ converge to limits $\displaystyle a$ and $\displaystyle b$ respectively, show that:

(a) The sequence $\displaystyle c_n = a_n + b_n$ converges to $\displaystyle a + b$;

(b) If $\displaystyle a > 0$, the sequence $\displaystyle d_n = \frac{1}{a_n}$ converges to $\displaystyle \frac{1}{a}$.

How do I go about showing something converges? This is all completely new to me.

2. Hi there

Originally Posted by Kalashnikov
I'm actually completely lost with convergence of sequences.. Let's just say our lecturer isn't the best lol.

Here's one of the easier questions I need to do.

Given that the sequences $\displaystyle a_1, a_2,...$ and $\displaystyle b_1, b_2,...$ converge to limits $\displaystyle a$ and $\displaystyle b$ respectively, show that:

(a) The sequence $\displaystyle c_n = a_n + b_n$ converges to $\displaystyle a + b$;
This is pretty easy,
you want to show that

$\displaystyle \lim_{n \to \infty} (a_n + b_n) = a + b$

Let $\displaystyle \epsilon > 0$; then $\displaystyle \frac{\epsilon}{2}>0$.

Since $\displaystyle a_n \mbox{ and }b_n$ converge,

$\displaystyle \Rightarrow \exists N_1 , N_2 \in \mathbb{N} :$

$\displaystyle |a_n - a| < \frac{\epsilon}{2} \quad \forall n \ge N_1$

$\displaystyle |b_n - b| < \frac{\epsilon}{2} \quad \forall n \ge N_2$

Then $\displaystyle \forall n \ge N := \max(N_1, N_2)$ we have

$\displaystyle |(a_n+b_n) - (a+b)| \le |a_n - a| + |b_n - b| < \frac{\epsilon}{2} +\frac{\epsilon}{2} = \epsilon$
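If it helps to see the $\displaystyle \frac{\epsilon}{2}$ trick in action, here is a quick numerical sketch in Python. The concrete sequences ($\displaystyle a_n = 2 + 1/n$ and $\displaystyle b_n = 5 - 1/n^2$) are my own illustrative choices, not part of the question:

```python
# Numerical illustration of the epsilon/2 argument for
# lim (a_n + b_n) = a + b.  The concrete sequences are arbitrary
# illustrative choices: a_n = 2 + 1/n -> 2 and b_n = 5 - 1/n^2 -> 5.

def a_seq(n):
    return 2 + 1 / n        # converges to a = 2

def b_seq(n):
    return 5 - 1 / n**2     # converges to b = 5

a, b = 2, 5
eps = 1e-3

# N1: first index with |a_n - a| < eps/2 (and, since 1/n is decreasing,
# every later index works too); N2 plays the same role for b_n.
N1 = next(n for n in range(1, 10**7) if abs(a_seq(n) - a) < eps / 2)
N2 = next(n for n in range(1, 10**7) if abs(b_seq(n) - b) < eps / 2)
N = max(N1, N2)

# The triangle inequality then bounds the combined error by eps.
assert all(abs((a_seq(n) + b_seq(n)) - (a + b)) < eps
           for n in range(N, N + 1000))
print(N1, N2, N)
```

Note that $\displaystyle N$ is the larger of the two indices, exactly as in the proof: past that point both error terms are below $\displaystyle \frac{\epsilon}{2}$ at once.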

Do you understand?

Rapha

3. Hello again.

Originally Posted by Kalashnikov
(b) If $\displaystyle a > 0$, the sequence $\displaystyle d_n = \frac{1}{a_n}$ converges to $\displaystyle \frac{1}{a}$.

$\displaystyle a \not= 0 \Rightarrow |a| / 2 > 0$

$\displaystyle \Rightarrow \exists n_0 \in \mathbb{N} : |a_n - a| < \frac{|a|}{2} \quad \forall n \ge n_0$

$\displaystyle \Rightarrow |a_n| > \frac{|a|}{2}, \; a_n \not=0 \mbox{ if } n \ge n_0$

Let $\displaystyle \epsilon > 0$. Then $\displaystyle \exists N_1 \in \mathbb{N} :$

$\displaystyle |a_n - a| < \frac{\epsilon |a|^2}{2} \quad \forall n \ge N_1$

Then for all $\displaystyle n \ge N := \max(n_0,N_1)$ we have

$\displaystyle \left|\frac{1}{a_n}-\frac{1}{a}\right| = \frac{1}{|a_n||a|} \cdot |a-a_n| < \frac{2}{|a|^2}\cdot\frac{\epsilon |a|^2}{2} = \epsilon$

$\displaystyle \Rightarrow \lim_{n \to \infty} \frac{1}{a_n} = \frac{1}{a}$
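The two ingredients of this proof can also be checked numerically. This Python sketch uses $\displaystyle a_n = 3 + 1/n$ with $\displaystyle a = 3$, an arbitrary example of my own:

```python
# Numerical check of the two ingredients in the 1/a_n proof:
# (1) past some n0, |a_n| stays above |a|/2 (so 1/a_n is defined and bounded);
# (2) past some N1, |a_n - a| < eps * |a|^2 / 2, which forces
#     |1/a_n - 1/a| < eps.
# The sequence a_n = 3 + 1/n (so a = 3) is an arbitrary illustrative choice.

def a_seq(n):
    return 3 + 1 / n        # converges to a = 3

a = 3.0
eps = 1e-4

n0 = next(n for n in range(1, 10**7) if abs(a_seq(n) - a) < abs(a) / 2)
N1 = next(n for n in range(1, 10**7) if abs(a_seq(n) - a) < eps * a**2 / 2)
N = max(n0, N1)

for n in range(N, N + 1000):
    assert abs(a_seq(n)) > abs(a) / 2          # ingredient (1)
    assert abs(1 / a_seq(n) - 1 / a) < eps     # the conclusion
print(n0, N1, N)
```

The point of $\displaystyle n_0$ is purely defensive: it guarantees $\displaystyle a_n$ is bounded away from zero, so dividing by it is safe and the factor $\displaystyle \frac{1}{|a_n|}$ stays below $\displaystyle \frac{2}{|a|}$.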

Regards,
Rapha

4. ## Convergence and Limits

Hi Kalashnikov

Originally Posted by Kalashnikov
I'm actually completely lost with convergence of sequences.. Let's just say our lecturer isn't the best lol.

Here's one of the easier questions I need to do.

Given that the sequences $\displaystyle a_1, a_2,...$ and $\displaystyle b_1, b_2,...$ converge to limits $\displaystyle a$ and $\displaystyle b$ respectively, show that:

(a) The sequence $\displaystyle c_n = a_n + b_n$ converges to $\displaystyle a + b$;

(b) If $\displaystyle a > 0$, the sequence $\displaystyle d_n = \frac{1}{a_n}$ converges to $\displaystyle \frac{1}{a}$.

How do I go about showing something converges? This is all completely new to me..
Talking about convergence of sequences is a bit like doing differentiation from first principles. (Did you understand that when you first did it?) It's all about what we mean by 'limiting values'.

When we start to learn about differentiation we say things like:

$\displaystyle \frac{dy}{dx}=\lim_{\delta x \to 0}\frac{\delta y}{\delta x}$

So what do all these symbols mean?

Well, they mean that you can make value of $\displaystyle \frac{\delta y}{\delta x}$ (which represents the gradient of a chord) as close to $\displaystyle \frac{dy}{dx}$ (the gradient of the tangent) as you like by taking $\displaystyle \delta x$ close enough to zero.

It's important that you understand that last sentence, so read it again!

Let me explain it a bit more.

You can't actually put $\displaystyle \delta x$ equal to zero in an expression like $\displaystyle \frac{\delta y}{\delta x}$ to find the gradient of a tangent, because that would violate the First Commandment of Arithmetic: Thou shalt not divide by zero. So you have to sneak up on the value by letting $\displaystyle \delta x$ get ever closer to zero without actually ever getting there.

And it's like that with convergence of sequences, except that we're now talking about 'infinity' (whatever that is) rather than zero. So, if we want to say that the sequence $\displaystyle a_1, a_2,...$ converges to the limit $\displaystyle a$, we would write:

$\displaystyle \lim_{n \to \infty}a_n=a$

and this means something very similar to the differentiation thing, which is: You can make the value of $\displaystyle {a_n}$ as close to $\displaystyle a$ as you like, by taking $\displaystyle n$ large enough.

Again, that sentence is really important, so read it again!
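To make "as close as you like" concrete, here is a small Python sketch. The sequence $\displaystyle a_n = \frac{n}{n+1}$, which converges to 1, is my own example, not from the question; for each tolerance it computes an index beyond which every term is within that tolerance of the limit:

```python
import math

# For a_n = n/(n+1) -> 1 we have |a_n - 1| = 1/(n+1), so any
# N >= 1/eps guarantees |a_n - 1| < eps for every n >= N.

def first_good_index(eps):
    return math.ceil(1 / eps)

for eps in (0.1, 0.01, 0.001):
    N = first_good_index(eps)
    # spot-check a stretch of indices from N onwards
    assert all(abs(n / (n + 1) - 1) < eps for n in range(N, N + 1000))
    print(eps, N)
```

However small a tolerance your challenger names, the function hands back an $\displaystyle N$ that wins the challenge; that is all the limit statement claims.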

Let me give you a simple example to show you what I mean. Look at the successive sums of the series:

$\displaystyle \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...$

The successive sums are:

$\displaystyle \frac{1}{2}, \frac{3}{4}, \frac{7}{8}, \frac{15}{16},...$

and it's pretty obvious that this sequence of numbers converges to the number 2. If we want to write this in formal notation, we should say:

Let $\displaystyle S_n$ be the sum of the first $\displaystyle n$ terms of the series $\displaystyle \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...$. Then $\displaystyle \lim_{n \to \infty}S_n=2$.

So what does this mean, again? Right: you can make $\displaystyle S_n$ as close to 2 as you like by taking $\displaystyle n$ large enough. Can you actually make $\displaystyle S_n$ equal to 2? No. But you can get as close as you like. In other words, if someone challenges you to get within 0.000...(a million 0's)...01 of the answer 2, you could still do it by taking a value of $\displaystyle n$ big enough.

That's all it means: no more, no less.

So, how do you do your first question?

You need to show that you can get as close to $\displaystyle a+b$ as you like by taking $\displaystyle n$ large enough. So how close would you like to get? Within a small amount $\displaystyle \epsilon$, say? OK, then you can argue something like this:

By definition, I know that if I take $\displaystyle n$ large enough I can make the difference between $\displaystyle a_n$ and $\displaystyle a$ less than $\displaystyle \frac{1}{2}\epsilon$. Suppose this value is $\displaystyle n = r$. And I can make the difference between $\displaystyle b_n$ and $\displaystyle b$ less than $\displaystyle \frac{1}{2}\epsilon$ too, by taking $\displaystyle n$ large enough, say $\displaystyle n = s$. Then if I choose the value of $\displaystyle n$ to be the larger of $\displaystyle r$ and $\displaystyle s$, then the difference between $\displaystyle a_n +b_n$ and $\displaystyle a+b$ will be less than $\displaystyle \epsilon$. And that completes the proof.

Have a go at the second one in a similar way.

5. Originally Posted by Grandad

Have a go at the second one in a similar way.

Okay, this actually makes some sense to me. How would I go about writing it out with the correct notation? Like the other poster did above.

6. Originally Posted by Kalashnikov
Okay, this actually makes some sense to me. How would I go about writing it out with the correct notation? Like the other poster did above.
I'll assume you are just talking about $\displaystyle \mathbb{R}$; if not, just say so.

When $\displaystyle \left\{a_n\right\}$ converges to $\displaystyle a$ we write $\displaystyle \lim_{n\to\infty}a_n=a$. For this to be true, $\displaystyle a\in\mathbb{R}$ must have the following property: for every $\displaystyle \varepsilon>0$ there exists an $\displaystyle N$ such that if $\displaystyle n\geqslant N$ then $\displaystyle \left|a_n-a\right|<\varepsilon$. Or, as Grandad said, you can make $\displaystyle a_n$ as close to $\displaystyle a$ as you want by making $\displaystyle n$ large enough.

7. Originally Posted by Grandad

$\displaystyle \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...$

The successive sums are:

$\displaystyle \frac{1}{2}, \frac{3}{4}, \frac{7}{8}, \frac{15}{16},...$

and it's pretty obvious that this sequence of numbers converges to the number 2.
Good explanation Grandad, but note that $\displaystyle \sum_{n=1}^{\infty}\frac{1}{2^n}=1$, not 2. I believe you meant the sum $\displaystyle 1+\frac{1}{2}+\frac{1}{4}+\cdots=\sum_{n=0}^{\infty}\frac{1}{2^n}=2$, just in case any others were confused.
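This is easy to confirm numerically; a quick Python sketch of both partial sums (the cut-off of 60 terms is an arbitrary choice, well beyond double precision):

```python
# Partial sums of the geometric series 1/2^n: starting the index at n = 1
# gives a limit of 1, while starting at n = 0 gives 2.
s_from_1 = sum(1 / 2**n for n in range(1, 60))   # 1/2 + 1/4 + 1/8 + ...
s_from_0 = sum(1 / 2**n for n in range(0, 60))   # 1 + 1/2 + 1/4 + ...
print(s_from_1, s_from_0)
```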

8. Originally Posted by Ziaris
Good explanation Grandad, but note that $\displaystyle \sum_{n=1}^{\infty}\frac{1}{2^n}=1$, not 2. I believe you meant the sum $\displaystyle 1+\frac{1}{2}+\frac{1}{4}+\cdots=\sum_{n=0}^{\infty}\frac{1}{2^n}=2$, just in case any others were confused.
Yes, of course. Sorry!