# Central Limit Theorem

• Feb 4th 2010, 03:25 PM
h2osprey
Central Limit Theorem
Let $X_1, X_2, ...$ be a sequence of independent random variables with $E(X_i) = \mu_i$ and $Var(X_i) = (\sigma_i)^2$. Show that if $n^{-1} \sum_{i=1}^{n} \mu_i \rightarrow \mu$ and $n^{-2}\sum_{i=1}^{n}{\sigma_i}^2\rightarrow 0$, then $\bar{X} \rightarrow \mu$ in probability.
• Feb 4th 2010, 09:47 PM
matheagle
Quote:

Originally Posted by h2osprey
Let $X_1, X_2, ...$ be a sequence of independent random variables with $E(X_i) = \mu_i$ and $Var(X_i) = (\sigma_i)^2$. Show that if $n^{-1} \sum_{i=1}^{n} \mu_i \rightarrow \mu$, then $\bar{X} \rightarrow \mu$ in probability.

you want $P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)\to 0$ for all $\epsilon>0$

Try adding and subtracting ${\sum_{i=1}^n \mu_i\over n}$

$P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)$

$\le P\left(\left| {\sum_{i=1}^n X_i\over n} -{\sum_{i=1}^n \mu_i\over n}\right|>\epsilon/2\right) +P\left(\left| {\sum_{i=1}^n \mu_i\over n} -\mu\right|>\epsilon/2\right)$
• Feb 6th 2010, 10:10 PM
h2osprey
Quote:

Originally Posted by matheagle
you want $P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)\to 0$ for all $\epsilon>0$

Try adding and subtracting ${\sum_{i=1}^n \mu_i\over n}$

$P\left(\left|{\sum_{i=1}^n X_i\over n}-\mu\right|>\epsilon\right)$

$\le P\left(\left| {\sum_{i=1}^n X_i\over n} -{\sum_{i=1}^n \mu_i\over n}\right|>\epsilon/2\right) +P\left(\left| {\sum_{i=1}^n \mu_i\over n} -\mu\right|>\epsilon/2\right)$

Ah I suppose after that you just apply Chebyshev's inequality and the proof is done.

The thing is, I'm not quite sure why the inequality you mentioned holds. I can see intuitively how it works, but is there a theorem or proof for it?

EDIT: Nvm, figured it out. Thanks for everything!

Also, I edited the question: I had left out the condition that the variances satisfy $n^{-2}\sum_{i=1}^{n}{\sigma_i}^2 \to 0$ (I suppose this is for applying Chebyshev's inequality later).
• Feb 6th 2010, 10:15 PM
matheagle
I KNEW you left something out!
The second term goes to zero since ${\sum_{i=1}^n \mu_i\over n}-\mu$ is a deterministic quantity that tends to zero, so for any fixed $\epsilon$ that probability is eventually $0$.
For the first term, square inside the absolute value and use Markov's/Chebyshev's inequality.
I didn't think the problem was complete as stated.
Either you needed that the sum of the variances divided by $n^2$ goes to zero,
OR that all the variances were equal, OR that the variances were uniformly bounded....
But something was off.
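Explicitly, by independence and Chebyshev's inequality, the first term satisfies

$P\left(\left| {\sum_{i=1}^n X_i\over n} -{\sum_{i=1}^n \mu_i\over n}\right|>\epsilon/2\right) \le {\mathrm{Var}\left({\sum_{i=1}^n X_i\over n}\right)\over (\epsilon/2)^2} = {4\over \epsilon^2}\cdot{\sum_{i=1}^n {\sigma_i}^2\over n^2} \to 0$

which is exactly where the condition $n^{-2}\sum_{i=1}^{n}{\sigma_i}^2 \to 0$ gets used.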
• Feb 6th 2010, 10:25 PM
matheagle
Quote:

Originally Posted by h2osprey
The thing is, I'm not quite sure why the inequality you mentioned holds. I can see intuitively how it works, but is there a theorem or proof for it?

By the triangle inequality, if $|A+B|>2c$ then $|A|>c$ or $|B|>c$, so $\{|A+B|>2c\} \subseteq \{|A|>c\} \cup \{|B|>c\}$.

Hence $P(|A+B|>2c) \le P(\{|A|>c\} \cup \{|B|>c\}) \le P(|A|>c)+P(|B|>c)$.
• Feb 7th 2010, 09:14 AM
h2osprey
Yup, that's what I got. Thanks a lot for the help!

Sorry about the missing variance condition... it was part of the previous question, so I left it out by accident. (Itwasntme)
• Feb 7th 2010, 01:50 PM
matheagle
I couldn't finish your problem since it didn't seem correct without some more information.
That's why I stopped after using the triangle inequality.
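As a numerical sanity check of the result, here's a minimal simulation sketch. The particular choices $\mu_i = 1 + 1/i$ (so $n^{-1}\sum \mu_i \to 1$) and ${\sigma_i}^2 = \sqrt{i}$ (so $n^{-2}\sum {\sigma_i}^2 \to 0$) are illustrative, not from the thread:

```python
import numpy as np

# Illustrative choices (not from the thread):
#   mu_i = 1 + 1/i       => n^{-1} sum mu_i = 1 + H_n/n -> mu = 1
#   sigma_i^2 = sqrt(i)  => n^{-2} sum sigma_i^2 ~ (2/3) n^{-1/2} -> 0
rng = np.random.default_rng(0)
n = 100_000
i = np.arange(1, n + 1)
mu_i = 1.0 + 1.0 / i
sigma_i = i ** 0.25  # so sigma_i^2 = sqrt(i): variances grow, but slowly enough

# Independent, non-identically distributed draws (normal here for convenience)
X = rng.normal(mu_i, sigma_i)

print(X.mean())  # should be close to mu = 1
```

Note the variances are allowed to grow without bound; the sample mean still concentrates because $n^{-2}\sum {\sigma_i}^2 \to 0$ is all Chebyshev needs.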