# Central Limit Theorem

• Feb 18th 2008, 11:10 AM
heathrowjohnny
Central Limit Theorem
Let $X_{1}, X_{2}, \ldots, X_{n}$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^{2}$, and let $T_0 = X_1 + X_2 + \cdots + X_n$. Then, in the limit as $n \to \infty$, the standardized versions of $\bar{X}$ and $T_0$ have the standard normal distribution. That is, $\lim_{n \to \infty} P \left (\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \leq z \right) = P(Z \leq z) = \Phi(z)$ and $\lim_{n \to \infty} P \left (\frac{T_0 - n\mu}{\sqrt{n}\sigma} \leq z \right) = P(Z \leq z) = \Phi(z)$.

My question is, why should this be the case? Why shouldn't $\bar{X}$ and $T_{0}$ have a lognormal distribution, an exponential distribution, etc.?
• Feb 18th 2008, 09:49 PM
CaptainBlack
Quote:

Originally Posted by heathrowjohnny
Let $X_{1}, X_{2}, \ldots, X_{n}$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^{2}$, and let $T_0 = X_1 + X_2 + \cdots + X_n$. Then, in the limit as $n \to \infty$, the standardized versions of $\bar{X}$ and $T_0$ have the standard normal distribution. That is, $\lim_{n \to \infty} P \left (\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \leq z \right) = P(Z \leq z) = \Phi(z)$ and $\lim_{n \to \infty} P \left (\frac{T_0 - n\mu}{\sqrt{n}\sigma} \leq z \right) = P(Z \leq z) = \Phi(z)$.

My question is, why should this be the case? Why shouldn't $\bar{X}$ and $T_{0}$ have a lognormal distribution, an exponential distribution, etc.?

Well, for one thing, these other distributions lack the property that the mean of two
independent identically distributed RVs has the same distribution as either of them
(but with half the variance). Any limiting distribution for the mean must have this
property: if the standardized mean converges to some law, then splitting a sample of
$2n$ in half shows that the limit must equal the distribution of the average of two
independent copies of itself, suitably rescaled.
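A quick simulation makes this concrete (a sketch, not part of the original post; the skewness check is just one convenient way to see the shape change). The mean of two iid normals is again normal with half the variance, whereas the mean of two iid exponentials follows a gamma distribution with a visibly different shape:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def skew(x):
    # sample skewness: E[(X - mu)^3] / sigma^3
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Normal case: the mean of two iid N(0,1) draws is N(0, 1/2) --
# same shape (skewness ~0), half the variance. This is the "stable" property.
z = rng.normal(0.0, 1.0, size=(n, 2))
z_mean = z.mean(axis=1)
print(z_mean.var())      # close to 0.5
print(skew(z_mean))      # close to 0

# Exponential case: the mean of two iid Exp(1) draws is Gamma(2)-distributed
# (rescaled), NOT exponential. Exp(1) has skewness 2; the mean of two
# has skewness 2/sqrt(2), so the shape itself has changed.
e = rng.exponential(1.0, size=(n, 2))
e_mean = e.mean(axis=1)
print(skew(e))           # close to 2
print(skew(e_mean))      # close to 2/sqrt(2) ~ 1.414
```

Repeating the averaging pushes the exponential's shape further toward symmetry, which is exactly the CLT at work; the normal, by contrast, is a fixed point of this operation.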

Essentially you are looking for stable distributions; once you have these, you will
find that there is a generalised central limit theorem in which other stable
distributions may appear as limits (for instance, when the underlying distribution
has infinite variance).
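To see the ordinary (finite-variance) CLT in action, here is a small simulation sketch, not from the original thread: we draw many samples of size $n$ from Exp(1), standardize each sample mean exactly as in the statement above, and compare the empirical probabilities with $\Phi(z)$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 1.0, 1.0          # Exp(1) has mean 1 and variance 1
n, reps = 50, 100_000         # sample size, number of replications

x = rng.exponential(mu, size=(reps, n))
xbar = x.mean(axis=1)
# standardized sample mean: (Xbar - mu) / (sigma / sqrt(n))
z_std = (xbar - mu) / (sigma / np.sqrt(n))

for z in (-1.0, 0.0, 1.0, 1.96):
    emp = np.mean(z_std <= z)
    print(f"z = {z:5.2f}   empirical P = {emp:.3f}   Phi(z) = {Phi(z):.3f}")
```

Even at $n = 50$ the empirical probabilities track $\Phi(z)$ to within a couple of percent, despite Exp(1) being strongly skewed; increasing $n$ shrinks the remaining gap.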