Convergence in probability

  1. #1 (Newbie)

    I guess it's a tough one. I think I should use the strong law of large numbers, but I don't know how.
    The random variables X_{1},X_{2},\dots are independent and X_{k} has Gaussian distribution N(k^{1/2},k) (mean \sqrt{k}, variance k), k=1,2,\dots
    Prove that the following sequence

    \frac{1}{n^{2}}(X_{1}X_{2}+X_{3}X_{4}+...+X_{2n-1}X_{2n})

    is convergent in probability and find the limit.

    Thx for any help.
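    A quick Monte Carlo sketch (assuming NumPy, and reading N(k^{1/2},k) as mean \sqrt{k} and variance k, as above) to guess the limit before proving anything:

    Code:
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_Z(n, reps=2000):
        """Draw `reps` copies of Z_n = (1/n^2) * sum_k X_{2k-1} X_{2k}."""
        idx = np.arange(1, 2 * n + 1)                 # indices 1..2n
        X = rng.normal(np.sqrt(idx), np.sqrt(idx), size=(reps, 2 * n))
        return (X[:, 0::2] * X[:, 1::2]).sum(axis=1) / n**2

    for n in (10, 100, 1000):
        Z = sample_Z(n)
        print(n, round(Z.mean(), 3), round(Z.std(), 3))
    # The sample mean settles near 1 and the spread shrinks like 2/sqrt(n),
    # which suggests Z_n converges to 1 in probability.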

  2. #2
    Moo
    Moo is offline
    A Cute Angle Moo's Avatar
    Joined
    Mar 2008
    From
    P(I'm here)=1/3, P(I'm there)=t+1/3
    Posts
    5,618
    Thanks
    6
    Hello,

    The LLN doesn't suit. Yes, intuitively the 1/n² is similar to the 1/n in the LLN, and convergence in probability is what the weak law of large numbers gives. But here the summands X_{2k-1}X_{2k} are not identically distributed, and 1/n² is not the usual normalization.
    That's why I think that it isn't the LLN we need here.

    After some wandering, I'm thinking of using Chebyshev's inequality together with elementary bounds on the means and variances. The problem is that nowhere will I use the fact that it's a normal distribution, beyond its mean and variance... If you added the normal distribution by some mistake, please tell!

    -----------------------------------------

    Let \displaystyle Z=\frac{1}{n^2}\sum_{k=1}^n X_{2k-1}X_{2k}

    Chebyshev's inequality can be used because the X_k are independent with finite second moments, so each product X_{2k-1}X_{2k} (and hence Z) has finite variance.

    This gives \displaystyle \forall \epsilon>0,~ P(|Z-E[Z]|\geq \epsilon)\leq \frac{Var[Z]}{\epsilon^2}


    \displaystyle n^2 E[Z]=\sum_{k=1}^n E[X_{2k-1}]E[X_{2k}]=\sum_{k=1}^n \sqrt{(2k-1)(2k)}

    No need for anything fancy here: just squeeze the square root. Since 2k-1\leq \sqrt{(2k-1)(2k)}\leq 2k, summing over k gives

    \displaystyle \sum_{k=1}^n (2k-1)=n^2\leq n^2 E[Z]\leq \sum_{k=1}^n 2k=n(n+1)

    so that 1\leq E[Z]\leq 1+\frac 1n. If you want to prove the two sum formulas properly you can do the calculations, but I don't think it's necessary.

    Hence E[Z] tends to 1 as n tends to infinity.
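    A deterministic numeric check of that squeeze (a sketch assuming NumPy; no sampling involved):

    Code:
    import numpy as np

    # E[Z] = (1/n^2) * sum_{k=1}^n sqrt((2k-1) * 2k), squeezed between 1 and 1 + 1/n
    for n in (10, 100, 10000):
        k = np.arange(1, n + 1)
        EZ = np.sqrt((2 * k - 1) * (2 * k)).sum() / n**2
        print(n, EZ, 1 + 1 / n)
    # EZ always sits inside [1, 1 + 1/n] and tends to 1.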

    ~~~~~~~~~~~~~~~~~~~~~~

    Now let's compute Var[Z]

    \displaystyle n^4 Var[Z]=\sum_{k=1}^n Var[X_{2k-1}X_{2k}], since the products X_{2k-1}X_{2k} are independent of one another (each uses a different pair of the X's).

    For a given k,
    \displaystyle \begin{aligned} Var[X_{2k-1}X_{2k}]&=E\left[X_{2k-1}^2 X_{2k}^2\right]-\left(E[X_{2k-1}X_{2k}]\right)^2 \\ &=(2k-1+2k-1)(2k+2k)-(2k-1)(2k) \\ &=3(2k-1)(2k) \end{aligned}

    (I'll let you write the missing steps: split the expectations by independence and use E[X_m^2]=Var[X_m]+E[X_m]^2=m+m=2m. I'm not here to do the details )

    So the sum \sum_{k=1}^n 3(2k-1)(2k) is again equivalent to 4n^3.

    Hence Var[Z] \sim \frac{4}{n}\to 0 as n goes to infinity.
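    Another quick numeric check (a sketch assuming NumPy), comparing the exact variance with the equivalent 4/n:

    Code:
    import numpy as np

    # Var[Z] = (1/n^4) * sum_{k=1}^n 3(2k-1)(2k); compare with 4/n
    for n in (10, 100, 10000):
        k = np.arange(1, n + 1)
        varZ = (3 * (2 * k - 1) * (2 * k)).sum() / n**4
        print(n, varZ, 4 / n)
    # The ratio varZ / (4/n) tends to 1 as n grows.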

    ~~~~~~~~~~~~~~~~~~~~~~

    Finally, \forall \epsilon >0,~ P(|Z-E[Z]|\geq \epsilon) \to 0 when n\to\infty.


    We're almost there. Now we have to prove that Z converges to 1 in probability, by noting that Z-1=(Z-E[Z])+(E[Z]-1) (and having in mind that E[Z]\to 1 deterministically, with 0\leq E[Z]-1\leq \frac 1n).


    \displaystyle \forall \epsilon>0,~P(|Z-1|\geq \epsilon)\leq P(|Z-E[Z]|+(E[Z]-1)\geq \epsilon).

    - Why ? Because by the triangle inequality, if |Z-1|\geq \epsilon, then |Z-E[Z]|+(E[Z]-1)\geq \epsilon. Thus \{\omega~:~ |Z(\omega)-1|\geq \epsilon\}\subseteq \{\omega~:~|Z(\omega)-E[Z]|+(E[Z]-1)\geq \epsilon\} and hence the above inequality. -

    Since |Z-E[Z]| and E[Z]-1 are both nonnegative, \{|Z-E[Z]|+(E[Z]-1)\geq \epsilon\}\subseteq \{|Z-E[Z]|\geq \epsilon/2\}\cup \{E[Z]-1\geq \epsilon/2\}, and the second event is empty as soon as \frac 1n<\epsilon/2.

    Hence \displaystyle \begin{aligned} \forall \epsilon >0,~P(|Z-1|\geq \epsilon) &\leq P(|Z-E[Z]|+(E[Z]-1)\geq \epsilon) \\ &\leq P(|Z-E[Z]|\geq \epsilon/2)+P(E[Z]-1\geq \epsilon/2) \\ &\to 0 \end{aligned}

    which proves that Z converges to 1 in probability.........
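    One can watch this happen in a simulation (a sketch assuming NumPy; epsilon = 0.1 is an arbitrary choice):

    Code:
    import numpy as np

    rng = np.random.default_rng(1)
    eps = 0.1

    def prob_far(n, reps=1000):
        """Monte Carlo estimate of P(|Z_n - 1| >= eps)."""
        idx = np.arange(1, 2 * n + 1)
        X = rng.normal(np.sqrt(idx), np.sqrt(idx), size=(reps, 2 * n))
        Z = (X[:, 0::2] * X[:, 1::2]).sum(axis=1) / n**2
        return np.mean(np.abs(Z - 1) >= eps)

    for n in (10, 100, 1000, 3000):
        print(n, prob_far(n))
    # The estimated probability decreases toward 0, as Chebyshev predicts.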

  3. #3
    MHF Contributor matheagle's Avatar
    Joined
    Feb 2009
    Posts
    2,763
    Thanks
    5
    You can probably even get almost sure convergence.
    I was looking at

    \sum_{k=1}^n{Y_k-E[Y_k]\over k^2}

    where Y_k=X_{2k-1}X_{2k} is the product of the k-th pair of adjacent normal rvs (centered, since E[Y_k]\sim 2k is too big to ignore).
    If you can get that sum to converge a.s. by the three series theorem,
    then by Kronecker's lemma you have

    {\sum_{k=1}^n (Y_k-E[Y_k])\over n^2}\to 0 \text{ a.s.}

    But it's messy.... e.g. showing that

    \sum_{k=1}^{\infty} P\left( |Y_k-E[Y_k]|>k^2\right)<\infty
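    The summability ultimately comes from Gaussian tails. A numeric sketch (assuming SciPy): since |E[Y_k]|\leq 2k\leq k^2/2 for k\geq 4, the event \{|Y_k-E[Y_k]|>k^2\} forces |Y_k|>k^2/2, hence |X_{2k-1}|>k/\sqrt{2} or |X_{2k}|>k/\sqrt{2}, and the union bound gives a dominating series we can sum:

    Code:
    import numpy as np
    from scipy.stats import norm

    def abs_tail(m, t):
        """P(|W| > t) for W ~ N(sqrt(m), m)  (mean sqrt(m), variance m)."""
        mu, sd = np.sqrt(m), np.sqrt(m)
        return norm.sf(t, loc=mu, scale=sd) + norm.cdf(-t, loc=mu, scale=sd)

    total = 0.0
    for k in range(4, 300):
        t = k / np.sqrt(2)   # |X_{2k-1}| <= t and |X_{2k}| <= t force |Y_k| <= k^2/2
        total += abs_tail(2 * k - 1, t) + abs_tail(2 * k, t)
        if k % 50 == 0:
            print(k, total)
    # The partial sums stabilize quickly, so the bounding series converges.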

  4. #4
    Moo
    Moo is offline
    A Cute Angle Moo's Avatar
    Joined
    Mar 2008
    From
    P(I'm here)=1/3, P(I'm there)=t+1/3
    Posts
    5,618
    Thanks
    6
    It's impossible to show that explicitly for a normal distribution... The result of the problem holds for any distribution (now that I'm sure) with a finite second moment.
    For some particular distributions one may get the a.s. convergence, but that would be a problem of its own!

  5. #5
    MHF Contributor matheagle's Avatar
    Joined
    Feb 2009
    Posts
    2,763
    Thanks
    5
    It's impossible to show that

    \sum_{k=1}^{\infty} P\left( |Y_k-E[Y_k]|>k^2\right)<\infty~?

    That just involves a double integral; the joint density is known.

  6. #6
    Moo
    Moo is offline
    A Cute Angle Moo's Avatar
    Joined
    Mar 2008
    From
    P(I'm here)=1/3, P(I'm there)=t+1/3
    Posts
    5,618
    Thanks
    6
    Have you tried the calculations?

  7. #7
    MHF Contributor matheagle's Avatar
    Joined
    Feb 2009
    Posts
    2,763
    Thanks
    5
    I don't want to; as I said, they are messy.
    One needs to bound the double integral

    \displaystyle \iint_{|xy|<k^2} f(x,y)\, dA for that pair of normals,

    using means and standard deviations of order \sqrt{2k}.
    But again we don't need to get a precise answer, we only need to obtain bounds
    so that the series converges.
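    For what it's worth, the mass outside that region can at least be estimated by Monte Carlo for small k (a sketch assuming NumPy; it only illustrates how fast P(|Y_k|>k^2) falls off, it is not the analytic bound):

    Code:
    import numpy as np

    rng = np.random.default_rng(2)
    N = 1_000_000
    for k in (1, 2, 5, 10, 20):
        X = rng.normal(np.sqrt(2 * k - 1), np.sqrt(2 * k - 1), size=N)
        Y = rng.normal(np.sqrt(2 * k), np.sqrt(2 * k), size=N)
        print(k, np.mean(np.abs(X * Y) > k**2))
    # The empirical probabilities decay rapidly in k, consistent with a
    # summable series once the tails are bounded analytically.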

