# Thread: Convergence in probability

1. ## Convergence in probability

I guess it's a tough one. I think I should use the strong law of large numbers, but I don't know how.
Random variables $X_{1},X_{2},\dots$ are independent and $X_{k}$ has Gaussian distribution $N(k^{1/2},k)$, $k=1,2,\dots$
Prove that the following sequence

$\frac{1}{n^{2}}(X_{1}X_{2}+X_{3}X_{4}+...+X_{2n-1}X_{2n})$

is convergent in probability and find the limit.

Thx for any help.
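Before any proof, a quick Monte Carlo experiment can hint at what the limit should be (a sketch only; the function name, seed and sample sizes are my arbitrary choices):

```python
import numpy as np

def simulate_Z(n, trials, seed=0):
    """Monte Carlo draws of Z_n = (1/n^2) * sum_{k=1}^n X_{2k-1} X_{2k},
    where the X_j ~ N(sqrt(j), j) are independent."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, 2 * n + 1)                  # indices 1..2n
    X = rng.normal(np.sqrt(j), np.sqrt(j), size=(trials, 2 * n))
    pairs = X[:, 0::2] * X[:, 1::2]              # X1*X2, X3*X4, ...
    return pairs.sum(axis=1) / n**2

# The sample mean settles near 1 as n grows.
print(simulate_Z(2000, 200).mean())
```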

2. Hello,

The LLN doesn't suit. Yes, intuitively the $1/n^2$ is similar to the $1/n$ in the LLN, and convergence in probability is exactly what the weak law of large numbers gives. But the summands $X_{2k-1}X_{2k}$ here are not identically distributed, and the normalization is $1/n^2$, not $1/n$.
That's why I think that it isn't the LLN we need here.

After some wandering, I'm thinking of using Chebyshev's and Jensen's inequalities. The remarkable thing is that nowhere will I use the fact that it's a normal distribution, only its mean and variance. If you added the normal distribution by some mistake, please tell us.

-----------------------------------------

Let $\displaystyle Z=\frac{1}{n^2}\sum_{k=1}^n X_{2k-1}X_{2k}$

Chebyshev's inequality can be used because $(X_k)$ is an independent sequence and each $X_k$ has a finite second moment, so $Z$ has a finite variance.

This gives $\displaystyle \forall \epsilon>0,~ P(|Z-E[Z]|\geq \epsilon)\leq \frac{Var[Z]}{\epsilon^2}$

$\displaystyle n^2 E[Z]=\sum_{k=1}^n E[X_{2k-1}]E[X_{2k}]=\sum_{k=1}^n \sqrt{(2k-1)(2k)}$

Jensen's inequality is usually stated for convex functions, but for concave functions you just have to reverse the inequality! Careful though: for the concave square root it gives $\displaystyle \sum_{k=1}^n \sqrt{a_k}\leq\sqrt{n\sum_{k=1}^n a_k}$ (don't forget the factor $\sqrt{n}$), which here only yields the upper bound $E[Z]\leq \tfrac{2}{\sqrt 3}+o(1)$, not the limit.

A direct expansion is easier and exact: $\displaystyle \sqrt{(2k-1)(2k)}=2k\sqrt{1-\tfrac{1}{2k}}=2k-\tfrac{1}{2}+O\!\left(\tfrac{1}{k}\right)$,

so $\displaystyle n^2 E[Z]=\sum_{k=1}^n \sqrt{(2k-1)(2k)}=n(n+1)-\frac{n}{2}+O(\log n)=n^2+O(n)$.

Hence $E[Z]$ tends to 1 as n tends to infinity.
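A numeric check of $E[Z]$, using the exact sum (no randomness involved; the loop sizes are arbitrary):

```python
import math

def EZ(n):
    """E[Z] = (1/n^2) * sum_{k=1}^n sqrt((2k-1)(2k)), computed exactly."""
    return sum(math.sqrt((2 * k - 1) * (2 * k)) for k in range(1, n + 1)) / n**2

# The values decrease toward 1, roughly like 1 + 1/(2n).
for n in (10, 100, 1000):
    print(n, EZ(n))
```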

~~~~~~~~~~~~~~~~~~~~~~

Now let's compute $Var[Z]$

$\displaystyle n^4 Var[Z]=\sum_{k=1}^n Var[X_{2k-1}X_{2k}]$ by independence of the random variables.

For a given k,
$\displaystyle \begin{aligned} Var[X_{2k-1}X_{2k}]&=E\left[X_{2k-1}^2\right]E\left[X_{2k}^2\right]-\left(E[X_{2k-1}]E[X_{2k}]\right)^2 \quad\text{(by independence)} \\
&=\big((2k-1)+(2k-1)\big)\big(2k+2k\big)-(2k-1)(2k) \\
&=8k(2k-1)-2k(2k-1)=3(2k-1)(2k) \end{aligned}$

(I'll let you write the missing steps, I'm not here to do the details )
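The identity can be sanity-checked numerically; note it only uses the first two moments and independence, nothing specific to normals (the helper name is mine):

```python
import math

def var_product(m1, v1, m2, v2):
    """Var[XY] for independent X, Y with means m1, m2 and variances v1, v2:
    E[X^2 Y^2] - (E[X]E[Y])^2 = (v1 + m1^2)(v2 + m2^2) - m1^2 * m2^2."""
    return (v1 + m1**2) * (v2 + m2**2) - m1**2 * m2**2

# For X_{2k-1} ~ N(sqrt(2k-1), 2k-1) and X_{2k} ~ N(sqrt(2k), 2k) this
# reduces to 2(2k-1) * 2(2k) - (2k-1)(2k) = 3(2k-1)(2k).
for k in (1, 2, 3, 50):
    a, b = 2 * k - 1, 2 * k
    assert math.isclose(var_product(math.sqrt(a), a, math.sqrt(b), b), 3 * a * b)
```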

So here again the sum is of order $n^3$: $\displaystyle \sum_{k=1}^n 3(2k-1)(2k)=\sum_{k=1}^n (12k^2-6k)=4n^3+O(n^2)$.

Hence $Var[Z] \sim \frac 4n$ as n goes to infinity, which tends to 0.
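And a check that $Var[Z]$ really decays like $4/n$, using exact sums (in fact $\sum_{k=1}^n 3(2k-1)(2k)=n(n+1)(4n-1)$, a closed form I use below as a cross-check):

```python
def var_Z(n):
    """Var[Z] = (1/n^4) * sum_{k=1}^n 3(2k-1)(2k), computed exactly."""
    return sum(3 * (2 * k - 1) * (2 * k) for k in range(1, n + 1)) / n**4

# n * Var[Z] = (n+1)(4n-1)/n^2, which tends to 4.
for n in (10, 100, 1000):
    print(n, n * var_Z(n))
```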

~~~~~~~~~~~~~~~~~~~~~~

Finally, $\forall \epsilon >0,~ P(|Z-E[Z]|\geq \epsilon) \to 0$ when $n\to\infty$.

We're almost there. Now we have to prove that Z converges to 1 in probability, by noting that $Z-1=(Z-E[Z])+(E[Z]-1)$ (and having in mind that $E[Z]\to 1$, a deterministic convergence).

$\displaystyle \forall \epsilon>0,~P(|Z-1|\geq \epsilon)\leq P(|Z-E[Z]|+|E[Z]-1|\geq \epsilon)$.

- Why? Because by the triangle inequality, if $|Z-1|\geq \epsilon$, then $|Z-E[Z]|+|E[Z]-1|\geq \epsilon$. Thus $\{\omega~:~ |Z(\omega)-1|\geq \epsilon\}\subseteq \{\omega~:~|Z(\omega)-E[Z]|+|E[Z]-1|\geq \epsilon\}$ and hence the above inequality. -

Since $E[Z]\to 1$, for n large enough we have $|E[Z]-1|<\epsilon/2$, and then $\{|Z-E[Z]|+|E[Z]-1|\geq \epsilon\}\subseteq \{|Z-E[Z]|\geq \epsilon/2\}$.

Hence $\displaystyle \begin{aligned} \forall \epsilon >0,~P(|Z-1|\geq \epsilon) &\leq P(|Z-E[Z]|+|E[Z]-1|\geq \epsilon) \\
&\leq P(|Z-E[Z]|\geq \epsilon/2) \\
&\to 0 \end{aligned}$

which proves that Z converges to 1 in probability.
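To see the convergence happen, here is a Monte Carlo estimate of $P(|Z-1|\geq\epsilon)$ for growing n (a sketch; the seed, $\epsilon$ and trial counts are arbitrary choices):

```python
import numpy as np

def prob_far(n, eps, trials, seed=1):
    """Monte Carlo estimate of P(|Z_n - 1| >= eps) for
    Z_n = (1/n^2) * sum_{k=1}^n X_{2k-1} X_{2k}, X_j ~ N(sqrt(j), j)."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, 2 * n + 1)
    X = rng.normal(np.sqrt(j), np.sqrt(j), size=(trials, 2 * n))
    Z = (X[:, 0::2] * X[:, 1::2]).sum(axis=1) / n**2
    return float(np.mean(np.abs(Z - 1) >= eps))

# The estimated probability shrinks toward 0 as n grows.
for n in (10, 100, 1000):
    print(n, prob_far(n, 0.2, 500))
```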

3. You can probably get almost sure convergence.
I was looking at

$\displaystyle \sum_{k=1}^n{Y_k-E[Y_k]\over k^2}$

where $Y_k=X_{2k-1}X_{2k}$ is the product of each adjacent pair of normal rvs. (The centering matters: $E[Y_k]\sim 2k$, so the uncentered series diverges.)
If you can get that sum to converge a.s. by the three series theorem,
then by Kronecker you have

$\displaystyle {\sum_{k=1}^n (Y_k-E[Y_k])\over n^2}\to 0$ a.s., and hence $Z\to 1$ a.s.

But it's messy.... showing that

$\sum_{k=1}^{\infty} P\left( |Y_k|>k^2\right)<\infty$

It's impossible to show that for a normal distribution... The result of the problem holds for any distribution (now I'm sure of it) with a finite 2nd moment.
Then for some distributions one may get the a.s. convergence, but that would be a problem of its own!

5. It's impossible to show that

$\sum_{k=1}^{\infty} P\left( |Y_k|>k^2\right)<\infty$?

That just involves a double integral, the joint density is known.

6. Have you tried the calculations ?

7. I don't want to, as I said they are messy.
One needs to bound the double integral

$\displaystyle \iint_{|xy|>k^2} f_{X_{2k-1}}(x)\,f_{X_{2k}}(y)\,dx\,dy$

for that pair of normals, whose means and standard deviations are both of order $\sqrt{2k}$.
But again we don't need to get a precise answer, we only need to obtain bounds good enough
that the series converges.
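For what it's worth, the convergence of that series can be checked without any double integral: if $|x|\leq k$ and $|y|\leq k$ then $|xy|\leq k^2$, so $P(|Y_k|>k^2)\leq P(|X_{2k-1}|>k)+P(|X_{2k}|>k)$, and these are plain Gaussian tails. A sketch (the function names are mine):

```python
import math

def tail(mean, sd, t):
    """P(|X| > t) for X ~ N(mean, sd^2), via the Gaussian CDF (erfc)."""
    upper = 0.5 * math.erfc((t - mean) / (sd * math.sqrt(2)))
    lower = 0.5 * math.erfc((t + mean) / (sd * math.sqrt(2)))
    return upper + lower

def bound_term(k):
    """Union bound: P(|X_{2k-1} X_{2k}| > k^2) <= P(|X_{2k-1}| > k) + P(|X_{2k}| > k)."""
    a, b = 2 * k - 1, 2 * k
    return tail(math.sqrt(a), math.sqrt(a), k) + tail(math.sqrt(b), math.sqrt(b), k)

# The partial sums of the bound stabilize: the level k sits roughly
# sqrt(k/2) standard deviations above the mean, so the terms decay
# roughly like exp(-k/4), and the series converges.
partial = sum(bound_term(k) for k in range(1, 2001))
print(partial)
```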