
Math Help - convergence

  1. #1
    Newbie
    Joined
    Dec 2008
    Posts
    2

    convergence

    I'm actually completely lost with convergence of sequences.. Let's just say our lecturer isn't the best lol.

    Here's one of the easier questions I need to do.

    Given that the sequences a_1, a_2,... and b_1, b_2,... converge to limits a and b respectively, show that:

    (a) The sequence c_n = a_n + b_n converges to a + b;

    (b) If a > 0, the sequence d_n = \frac{1}{a_n} converges to \frac{1}{a}.


    How do I go about showing something converges? This is all completely new to me..

  2. #2
    Senior Member
    Joined
    Nov 2008
    Posts
    461
    Hi there


    Quote Originally Posted by Kalashnikov View Post
    I'm actually completely lost with convergence of sequences.. Let's just say our lecturer isn't the best lol.

    Here's one of the easier questions I need to do.

    Given that the sequences a_1, a_2,... and b_1, b_2,... converge to limits a and b respectively, show that:

    (a) The sequence c_n = a_n + b_n converges to a + b;
    This is pretty easy,
    you want to show that

 \lim_{n \to \infty} (a_n + b_n) = a + b

  Let \epsilon > 0; then \frac{\epsilon}{2} > 0.

      a_n  \mbox{ and }b_n converge

     \Rightarrow \exists N_1 , N_2 \in \mathbb{N} :

     |a_n - a| < \frac{\epsilon}{2}, n \ge N_1

     |b_n - b| < \frac{\epsilon}{2}, n \ge N_2

 \forall n \ge N := \max(N_1, N_2) we have

     |(a_n+b_n) - (a+b)| \le |a_n - a| + |b_n - b| < \frac{\epsilon}{2} +\frac{\epsilon}{2}  = \epsilon
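
    To see the estimate at work numerically, here is a quick Python sanity check (illustration only, not part of the proof; the example sequences a_n = 3 + 1/n and b_n = 5 - 1/n^2 are made-up choices):

    # Numerical illustration only (not a proof). Example sequences:
    # a_n = 3 + 1/n  -> a = 3,   b_n = 5 - 1/n**2 -> b = 5.
    eps = 1e-3

    def a_n(n): return 3 + 1/n
    def b_n(n): return 5 - 1/n**2

    # Choose N1, N2 so each term is within eps/2 of its limit:
    # 1/n < eps/2 needs n > 2/eps;  1/n**2 < eps/2 needs n > sqrt(2/eps).
    N1 = int(2/eps) + 1
    N2 = int((2/eps) ** 0.5) + 1
    N = max(N1, N2)

    for n in (N, 2*N, 10*N):
        diff = abs((a_n(n) + b_n(n)) - (3 + 5))
        print(n, diff, diff < eps)   # expect True for every n >= N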

    Do you understand?

    Rapha

  3. #3
    Senior Member
    Joined
    Nov 2008
    Posts
    461
    Hello again.

    I thought about this:
    Quote Originally Posted by Kalashnikov View Post
    (b) If a > 0, the sequence d_n = \frac{1}{a_n} converges to \frac{1}{a}.


     a \not= 0 \Rightarrow |a| / 2 > 0

 \Rightarrow \exists n_0 \in \mathbb{N} : |a_n - a| < \frac{|a|}{2} \quad \forall n \ge n_0

      \Rightarrow |a_n| \ge |a| /2 , a_n \not=0 \mbox{ if } n \ge n_0

    Let  \epsilon > 0. Then \exists N_1 \in \mathbb{N} :

     |a_n - a| < \frac{\epsilon |a|^2}{2} \forall n \ge N_1

    then

 \forall n \ge N := \max(n_0, N_1):

 |\frac{1}{a_n}-\frac{1}{a}| = \frac{1}{|a_n||a|} \cdot |a-a_n| < \frac{2}{|a|^2} \cdot \frac{\epsilon |a|^2}{2} = \epsilon

  \Rightarrow \lim_{n \to \infty} \frac{1}{a_n} = \frac{1}{a}
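
    If it helps to see the constants in action, here is a small numerical check in Python (illustration only; the example a_n = 2 + 1/n with a = 2 is an arbitrary choice):

    # Check the bound numerically for a_n = 2 + 1/n, a = 2 (illustration only).
    eps = 1e-4
    a = 2.0
    def a_n(n): return 2 + 1/n

    # |a_n - a| = 1/n < eps*|a|**2/2 needs n > 2/(eps*|a|**2)
    N1 = int(2/(eps * a**2)) + 1
    for n in (N1, 10*N1):
        print(n, abs(1/a_n(n) - 1/a) < eps)   # expect True for every n >= N1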

    Regards,
    Rapha
    Last edited by Rapha; December 13th 2008 at 07:57 AM.

  4. #4
    MHF Contributor
    Grandad
    Joined
    Dec 2008
    From
    South Coast of England
    Posts
    2,570

    Convergence and Limits

    Hi Kalashnikov

    Quote Originally Posted by Kalashnikov View Post
    I'm actually completely lost with convergence of sequences.. Let's just say our lecturer isn't the best lol.

    Here's one of the easier questions I need to do.

    Given that the sequences a_1, a_2,... and b_1, b_2,... converge to limits a and b respectively, show that:

    (a) The sequence c_n = a_n + b_n converges to a + b;

    (b) If a > 0, the sequence d_n = \frac{1}{a_n} converges to \frac{1}{a}.


    How do I go about showing something converges? This is all completely new to me..
    Talking about convergence of sequences is a bit like doing differentiation from first principles. (Did you understand that when you first did it?) It's all about what we mean by 'limiting values'.

    When we start to learn about differentiation we say things like:

     \frac{dy}{dx}=\lim_{\delta x \to 0}\frac{\delta y}{\delta x}

    So what do all these symbols mean?

    Well, they mean that you can make the value of \frac{\delta y}{\delta x} (which represents the gradient of a chord) as close to \frac{dy}{dx} (the gradient of the tangent) as you like by taking \delta x close enough to zero.

    It's important that you understand that last sentence, so read it again!

    Let me explain it a bit more.

    You can't actually put \delta x equal to zero in an expression like \frac{\delta y}{\delta x} to find the gradient of a tangent, because that would violate the First Commandment of Arithmetic: Thou shalt not divide by zero. So you have to sneak up on the value by letting \delta x get ever closer to zero without actually ever getting there.

    And it's like that with convergence of sequences, except that we're now talking about 'infinity' (whatever that is) rather than zero. So, if we want to say that the sequence a_1, a_2,... converges to the limit a, we would write:

    \lim_{n \to \infty}a_n=a

    and this means something very similar to the differentiation thing, which is: You can make the value of a_n as close to a as you like by taking n large enough.

    Again, that sentence is really important, so read it again!

    Let me give you a simple example to show you what I mean. Look at the successive sums of the series:

    \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...

    The successive sums are:

    \frac{1}{2}, \frac{3}{4}, \frac{7}{8}, \frac{15}{16},...

    and it's pretty obvious that this sequence of numbers converges to the number 2. If we want to write this in formal notation, we should say:

    Let S_n be the sum of the first n terms of the series \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+.... Then \lim_{n \to \infty}S_n=2

    So what does this mean, again? Right: you can make S_n as close to 2 as you like by taking n large enough. Can you actually make S_n equal to 2? No. But you can get as close as you like. In other words, if someone challenges you to get within 0.000...(a million 0's)...01 of the answer 2, you could still do it by taking a value of n big enough.

    That's all it means: no more, no less.

    So, how do you do your first question?

    You need to show that you can get as close to a+b as you like by taking n large enough. So how close would you like to get? Within a small amount \epsilon , say? OK, then you can argue something like this:

    By definition, I know that if I take n large enough I can make the difference between a_n and a less than \frac{1}{2}\epsilon; suppose this happens whenever n \ge r. And I can make the difference between b_n and b less than \frac{1}{2}\epsilon too, by taking n large enough, say whenever n \ge s. Then whenever n is at least as large as both r and s, the difference between a_n +b_n and a+b will be less than \epsilon (by the triangle inequality, the total difference is at most the sum of the two individual differences). And that completes the proof.

    Have a go at the second one in a similar way.

    Grandad

  5. #5
    Newbie
    Joined
    Dec 2008
    Posts
    2
    Quote Originally Posted by Grandad View Post
    Talking about convergence of sequences is a bit like doing differentiation from first principles... You need to show that you can get as close to a+b as you like by taking n large enough.
    Okay, this actually makes some sense to me. How would I go about writing it out with the correct notation, like the other poster did above?

  6. #6
    MHF Contributor Mathstud28
    Joined
    Mar 2008
    From
    Pennsylvania
    Posts
    3,641
    Quote Originally Posted by Kalashnikov View Post
    Okay, this actually make's some sense to me. How would I go about writing it out with the correct notation? Like the other poster did above.
    I'll assume you are just talking about \mathbb{R}; if not, then just say so.

    If \left\{a_n\right\} converges to a, we write \lim_{n\to\infty}a_n=a. For this to be true, a\in\mathbb{R} must have the following property: for every \varepsilon>0 there exists an N such that if N\leqslant n then \left|a_n-a\right|<\varepsilon. Or, as Grandad said, you can make a_n as close to a as you want by making n large enough.
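
    To get a concrete feel for that definition, here is a tiny Python sketch (just an illustration; the sequence a_n = 1/n with limit a = 0 is an arbitrary example). For each \varepsilon it exhibits an N that works and spot-checks some n \geqslant N:

    import math

    # Illustrating the epsilon-N definition with a_n = 1/n, a = 0.
    def find_N(eps):
        # |1/n - 0| < eps as soon as n > 1/eps
        return math.floor(1/eps) + 1

    for eps in (0.1, 0.01, 0.0001):
        N = find_N(eps)
        ok = all(abs(1/n - 0) < eps for n in range(N, N + 1000))
        print(eps, N, ok)   # ok should print True for every eps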

  7. #7
    Junior Member Ziaris
    Joined
    Dec 2008
    Posts
    26
    Quote Originally Posted by Grandad View Post
    \frac{1}{2}+ \frac{1}{4}+\frac{1}{8}+ \frac{1}{16}+...

    The successive sums are:

    \frac{1}{2}, \frac{3}{4}, \frac{7}{8}, \frac{15}{16},...

    and it's pretty obvious that this sequence of numbers converges to the number 2.
    Good explanation Grandad, but note that \sum_{n=1}^{\infty}\frac{1}{2^n}=1, not 2. I believe you meant the sum 1+\frac{1}{2}+\frac{1}{4}+\cdots=\sum_{n=0}^{\infty}\frac{1}{2^n}=2, just in case any others were confused.
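
    A quick numerical check of both sums, in case it helps anyone (just a sketch summing the first 60 terms in Python):

    s1 = sum(1/2**n for n in range(1, 60))   # sum over n >= 1 of 1/2^n -> 1
    s2 = sum(1/2**n for n in range(0, 60))   # sum over n >= 0 of 1/2^n -> 2
    print(round(s1, 12), round(s2, 12))      # prints 1.0 2.0 (to double precision)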

  8. #8
    MHF Contributor
    Grandad
    Joined
    Dec 2008
    From
    South Coast of England
    Posts
    2,570
    Quote Originally Posted by Ziaris View Post
    Good explanation Grandad, but note that \sum_{n=1}^{\infty}\frac{1}{2^n}=1, not 2. I believe you meant the sum 1+\frac{1}{2}+\frac{1}{4}+\cdots=\sum_{n=0}^{\infty}\frac{1}{2^n}=2, just in case any others were confused.
    Yes, of course. Sorry!

    Grandad
