# 2 convergence in distribution questions

• Sep 27th 2008, 08:19 PM
yui
2 convergence in distribution questions
First one:
Suppose $\displaystyle X_n \sim Bin(n, p)$. Show that as $\displaystyle n \rightarrow \infty$
$\displaystyle \frac{X^2_n - n^2 p^2}{2(np)^{\frac{3}{2}} \sqrt{1-p}} \stackrel{D}{\rightarrow} Z \sim N(0,1)$
In a previous question I proved that $\displaystyle \frac{X_n - np}{\sqrt{np(1-p)}} \stackrel{D}{\rightarrow} Z \sim N(0,1)$. For this one, do I just have to express the left-hand side in terms of $\displaystyle X^2_n$ and then use the continuous mapping theorem to conclude that it converges in distribution to $\displaystyle Z$ as well? I got stuck expressing the left-hand side in terms of $\displaystyle X^2_n$, so I'm not sure that's the way to go.

Second one:
Show that $\displaystyle (1-X_n)^{-1} \stackrel{D}{\rightarrow} (1-X)^{-1}$, given that $\displaystyle P(X_n = \frac{i}{n}) = \frac{1}{n}$ for $\displaystyle i = 0, 1, 2, \ldots, n-1$
I tried to prove it using the MGF of $\displaystyle X_n$, but I got stuck on the algebra after
$\displaystyle E(e^{\lambda X_n}) = \sum_{x=\frac{i}{n}}^{\frac{n-1}{n}} e^{\lambda x} P(X=x)$
I'm not even sure the sum is written correctly, since I never specified anything for $\displaystyle i$. Is using the MGF the way to go, or should I go back to the definition or use some other method?
• Sep 28th 2008, 09:23 AM
Laurent
For your first question, the key trick is to write $\displaystyle X_n^2-n^2p^2=(X_n-np)(X_n+np)$. You know that $\displaystyle X_n-np$, suitably normalized, converges in distribution. As for $\displaystyle X_n+np$, the LLN shows that $\displaystyle \frac{X_n+np}{np}=\frac{X_n}{np}+1$ converges almost surely (and hence in distribution) to 2.
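As a quick sanity check of the LLN step (this simulation is my own sketch, not part of the thread; the sampler and parameter values are my choices), one can sample $\displaystyle X_n \sim Bin(n,p)$ and verify that $\displaystyle \frac{X_n+np}{np}$ sits close to 2 for large $\displaystyle n$:

```python
import random

# A simulation sketch (mine, not from the thread) of the LLN step:
# for X_n ~ Bin(n, p), X_n/(np) -> 1 a.s., so (X_n + np)/(np) -> 2.
random.seed(0)
n, p = 10_000, 0.3

def binomial_sample(n, p):
    """Sum of n independent Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

ratios = [(binomial_sample(n, p) + n * p) / (n * p) for _ in range(200)]
mean = sum(ratios) / len(ratios)
print(mean)  # should be close to 2
```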

For the second one, do you have an intuition of what $\displaystyle X_n$ looks like? Once you have it, prove that $\displaystyle (X_n)_n$ converges in distribution to you-guessed-what, using the cumulative distribution function (which I prefer) or the MGF. Then deduce the result about $\displaystyle \frac{1}{1-X}$.

By the way, the MGF is $\displaystyle E[e^{\lambda X_n}]=\sum_{i=0}^{n-1}e^{\lambda\frac{i}{n}}P(X_n=\frac{i}{n})=\sum_{i=0}^{n-1}e^{\lambda\frac{i}{n}}\frac{1}{n}$. You can simplify this expression (a sum of $\displaystyle n$ terms of a geometric sequence...).
• Sep 28th 2008, 08:24 PM
yui
Ok so for the first one:
$\displaystyle \frac{X^2_n - n^2 p^2}{2(np)^{\frac{3}{2}} \sqrt{1-p}} = \frac{X_n - np}{\sqrt{np(1-p)}} \cdot \frac{X_n + np}{np} \cdot \frac{1}{2}$

$\displaystyle \frac{X_n+np}{np} = \frac{X_n}{np} +1$ converges in probability to 2.
In a previous question I found that $\displaystyle \frac{X_n - np}{\sqrt{np(1-p)}}$ converges in distribution to $\displaystyle N(0,1)$, so by Slutsky's theorem the product above converges in distribution to $\displaystyle N(0,1)$ as well, since the remaining factor $\displaystyle \frac{1}{2}\cdot\frac{X_n+np}{np}$ converges in probability to 1.
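The whole claim for the first question can also be checked numerically (a simulation sketch of my own, not from the thread; the sample sizes and tolerances are my choices): the standardized statistic should have mean near 0 and variance near 1 for large $\displaystyle n$.

```python
import math
import random

# A simulation sketch (mine, not from the thread): the statistic
# (X_n^2 - n^2 p^2) / (2 (np)^(3/2) sqrt(1-p)) should look roughly
# N(0, 1) for large n, per the first question.
random.seed(1)
n, p = 10_000, 0.3
scale = 2 * (n * p) ** 1.5 * math.sqrt(1 - p)

def binomial_sample(n, p):
    """Sum of n independent Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

stats = [(binomial_sample(n, p) ** 2 - (n * p) ** 2) / scale
         for _ in range(300)]
mean = sum(stats) / len(stats)
var = sum((s - mean) ** 2 for s in stats) / (len(stats) - 1)
print(mean, var)  # roughly 0 and 1
```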

I'm still a bit lost on the second question though.
Xn is discrete normal (?) from 0-1 with probability 1/n then X is continuous normal from 0-1 as well.
$\displaystyle E[e^{\lambda X_n}] = \sum_{i=0}^{n-1} e^{\lambda\frac{i}{n}}\frac{1}{n} = \frac{1}{n} \sum_{i=0}^{n-1} e^{\lambda\frac{i}{n}}$

$\displaystyle E[e^{\lambda X}] = \int_{0}^{1} e^{\lambda x}\, dx = \frac{e^{\lambda} - 1}{\lambda}$
Not sure what to do with the summation in the first MGF still.
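The continuous MGF integral itself is easy to double-check numerically (a sketch of my own, not from the thread; $\displaystyle \lambda = 2$ and the grid size are my choices): the integral of $\displaystyle e^{\lambda x}$ over $\displaystyle [0,1]$ should come out to $\displaystyle \frac{e^{\lambda}-1}{\lambda}$.

```python
import math

# A numeric sanity check (mine, not from the thread) that the U(0,1) MGF
# integral int_0^1 e^(lam*x) dx evaluates to (e^lam - 1)/lam:
# midpoint rule on [0, 1] with m subintervals.
lam = 2.0
m = 100_000
approx = sum(math.exp(lam * (k + 0.5) / m) for k in range(m)) / m
exact = (math.exp(lam) - 1) / lam
print(abs(approx - exact))  # essentially zero
```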
• Sep 28th 2008, 10:26 PM
Laurent
Second part: $\displaystyle X_n$ is chosen uniformly in $\displaystyle \{0,\frac{1}{n},\ldots,\frac{n-1}{n}\}$. As $\displaystyle n$ tends to $\displaystyle +\infty$, $\displaystyle X_n$ looks more and more as if it were uniformly distributed on $\displaystyle [0,1]$, doesn't it? There's no normal distribution here.

$\displaystyle E[e^{\lambda X_n}] = \sum_{i=0}^{n-1} e^{\lambda\frac{i}{n}}\frac{1}{n} = \frac{1}{n}\sum_{i=0}^{n-1}\left(e^{\frac{\lambda}{n}}\right)^i=\frac{1}{n} \frac{e^{\lambda}-1}{e^{\frac{\lambda}{n}}-1}$ and $\displaystyle e^{\frac{\lambda}{n}}-1\sim \frac{\lambda}{n}$ as $\displaystyle n$ tends to $\displaystyle \infty$...
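A quick numeric check of this (my own sketch, not from the thread; $\displaystyle \lambda = 1.7$ is an arbitrary choice): the closed form above should match the direct sum, and both should approach $\displaystyle \frac{e^{\lambda}-1}{\lambda}$ as $\displaystyle n$ grows.

```python
import math

# A numeric check (mine, not from the thread): since e^(lam/n) - 1 ~ lam/n,
# the discrete MGF (1/n) * (e^lam - 1) / (e^(lam/n) - 1) tends to the
# continuous uniform MGF (e^lam - 1) / lam.
lam = 1.7
limit = (math.exp(lam) - 1) / lam
for n in (10, 100, 10_000):
    closed = (math.exp(lam) - 1) / (n * (math.exp(lam / n) - 1))
    direct = sum(math.exp(lam * i / n) for i in range(n)) / n  # same thing
    print(n, abs(closed - limit))  # shrinks toward 0
```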
• Sep 29th 2008, 01:04 AM
yui
Oops, I meant discrete and continuous uniform; I must've mixed it up with the first question :(

$\displaystyle \frac{1}{n}\frac{e^{\lambda}-1}{e^{\frac{\lambda}{n}}-1} \rightarrow \frac{e^\lambda -1}{\lambda}$ as $\displaystyle n$ tends to $\displaystyle \infty$ which is $\displaystyle E[e^{\lambda X}]$
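The CDF route Laurent preferred can be checked deterministically too (a sketch of my own, not part of the thread; the value $\displaystyle t=4$ is an arbitrary test point): the CDF of $\displaystyle X_n$ at $\displaystyle c \in (0,1)$ is $\displaystyle \frac{\lfloor nc \rfloor + 1}{n} \rightarrow c$, matching the $\displaystyle U(0,1)$ CDF.

```python
import math

# A deterministic check (mine, not from the thread): for X_n uniform on
# {0, 1/n, ..., (n-1)/n}, P(X_n <= c) = (floor(n*c) + 1)/n -> c, so
# P((1 - X_n)^{-1} <= t) = P(X_n <= 1 - 1/t) -> 1 - 1/t, which is the
# CDF of (1 - X)^{-1} for X ~ U(0, 1).
t = 4.0            # any fixed t >= 1
c = 1 - 1 / t      # P((1 - X)^{-1} <= t) = P(X <= c) = c
for n in (10, 100, 10_000):
    cdf_n = (math.floor(n * c) + 1) / n
    print(n, abs(cdf_n - c))  # shrinks toward 0
```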

Thank you so much for your help Laurent.