# Math Help - Central Limit Theorem

1. ## Central Limit Theorem

Let $X_1, X_2, ...$ be a sequence of independent random variables with $E(X_i) = \mu_i$ and $Var(X_i) = (\sigma_i)^2$. Show that if $n^{-1} \sum_{i=1}^{n} \mu_i \rightarrow \mu$ and $n^{-2}\sum_{i=1}^{n}{\sigma_i}^2\rightarrow 0$, then $\bar{X} \rightarrow \mu$ in probability.

2. Originally Posted by h2osprey
Let $X_1, X_2, ...$ be a sequence of independent random variables with $E(X_i) = \mu_i$ and $Var(X_i) = (\sigma_i)^2$. Show that if $n^{-1} \sum_{i=1}^{n} \mu_i \rightarrow \mu$, then $\bar{X} \rightarrow \mu$ in probability.
you want $P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)\to 0$ for all $\epsilon>0$

Try adding and subtracting ${\sum_{i=1}^n \mu_i\over n}$

$P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)$

$\le P\left(\left| {\sum_{i=1}^n X_i\over n} -{\sum_{i=1}^n \mu_i\over n}\right|>\epsilon/2\right) +P\left(\left| {\sum_{i=1}^n \mu_i\over n} -\mu\right|>\epsilon/2\right)$

3. Originally Posted by matheagle
you want $P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)\to 0$ for all $\epsilon>0$

Try adding and subtracting ${\sum_{i=1}^n \mu_i\over n}$

$P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)$

$\le P\left(\left| {\sum_{i=1}^n X_i\over n} -{\sum_{i=1}^n \mu_i\over n}\right|>\epsilon/2\right) +P\left(\left| {\sum_{i=1}^n \mu_i\over n} -\mu\right|>\epsilon/2\right)$
Ah I suppose after that you just apply Chebyshev's inequality and the proof is done.

Thing is, I'm not quite sure why the inequality you mentioned holds. I can see intuitively how it works, but is there a theorem or proof for it?

EDIT: Nvm, figured it out. Thanks for everything!

Also, I edited the question: I'd left out the condition on the variances (I suppose that's needed for applying Chebyshev's inequality later).

4. I KNEW you left something out!
The second term goes to zero since those are constants and the difference $\left|{\sum_{i=1}^n \mu_i\over n}-\mu\right|$ goes to zero, so for $n$ large enough that probability is exactly zero.
And for the first term you square the absolute value and use Markov/Chebyshev.
I didn't think the statement was complete.
Either you needed the sum of the variances divided by $n^2$ to go to zero,
OR another way was that all the variances were equal OR the sup of the variances was bounded....
But something was off.
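Written out, the Chebyshev step on the first term (using independence, so the variances add) is

$P\left(\left| {\sum_{i=1}^n X_i\over n} -{\sum_{i=1}^n \mu_i\over n}\right|>\epsilon/2\right)\le {Var\left(n^{-1}\sum_{i=1}^n X_i\right)\over (\epsilon/2)^2}={4\over \epsilon^2 n^2}\sum_{i=1}^n {\sigma_i}^2\rightarrow 0$

which is exactly where the condition $n^{-2}\sum_{i=1}^{n}{\sigma_i}^2\rightarrow 0$ gets used.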

5. Originally Posted by h2osprey
Thing is, I'm not quite sure why the inequality you mentioned holds. I can see intuitively how it works, but is there a theorem or proof for it?
If $|A+B|>2c$ then $|A|>c$ or $|B|>c$, so $\{|A+B|>2c\} \subseteq \{|A|>c\} \cup \{|B|>c\}$ and hence $P(|A+B|>2c) \le P(\{|A|>c\} \cup \{|B|>c\})$

So $P(\{|A|>c\} \cup \{|B|>c\}) \le P(|A|>c)+P(|B|>c)$

Yup, that's what I got. Thanks a lot for the help!

Sorry for the missing variances thing... it was part of the previous question, so I left it out by accident
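As a quick sanity check on the result (not from the thread), here's a small simulation under illustrative choices $\mu_i = 1 + 1/i$ (so $n^{-1}\sum \mu_i \rightarrow 1$) and ${\sigma_i}^2 = \log(i+1)$ (so $n^{-2}\sum {\sigma_i}^2 \sim \log(n)/n \rightarrow 0$); the empirical estimate of $P(|\bar{X} - 1| > \epsilon)$ should shrink as $n$ grows:

```python
import numpy as np

# Illustrative (hypothetical) choices satisfying the hypotheses:
#   mu_i = 1 + 1/i        => n^{-1} sum mu_i -> 1
#   sigma_i^2 = log(i+1)  => n^{-2} sum sigma_i^2 ~ log(n)/n -> 0
rng = np.random.default_rng(0)

def tail_prob(n, eps=0.1, reps=1000):
    """Monte Carlo estimate of P(|X_bar_n - 1| > eps)."""
    i = np.arange(1, n + 1)
    mu = 1.0 + 1.0 / i
    sigma = np.sqrt(np.log(i + 1.0))
    x = rng.normal(mu, sigma, size=(reps, n))  # independent X_i, one row per replicate
    xbar = x.mean(axis=1)
    return float(np.mean(np.abs(xbar - 1.0) > eps))

probs = {n: tail_prob(n) for n in (100, 1000, 10000)}
for n, p in probs.items():
    print(n, p)
```

The normal distribution here is only a convenience; the theorem needs no distributional assumption beyond the means and variances.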