1. [SOLVED] A convergence proof.

How do you prove this?

Suppose $\displaystyle X_n$ converges to $\displaystyle X$ in distribution and $\displaystyle Y_n$ converges in probability to $\displaystyle 0$. Show that $\displaystyle X_n + Y_n$ converges to $\displaystyle X$ in distribution.

Thanks again.

2. Hello,
Originally Posted by akolman
How do you prove this?

Suppose $\displaystyle X_n$ converges to $\displaystyle X$ in distribution and $\displaystyle Y_n$ converges in probability to $\displaystyle 0$. Show that $\displaystyle X_n + Y_n$ converges to $\displaystyle X$ in distribution.

Thanks again.
Convergence in probability implies convergence in distribution. From there, your problem is straightforward.

I can show you the proof, but it's pretty tedious. Instead, I can link you to the French Wikipedia, where there is a proof: Convergence de variables aléatoires - Wikipédia
It contains the following:
Théorème — Xn converge vers X en probabilité $\displaystyle \Rightarrow$ Xn converge vers X en loi.
(Xn converges to X in probability $\displaystyle \Rightarrow$ Xn converges to X in distribution)

Then prove that if Xn converges to X in distribution, and Yn converges to 0 in distribution, then Xn+Yn converges to X in distribution.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I'll try to get you started on this problem, using a method similar to the one in the theorem's proof. You'll have to finish it, because it's very tedious to write all of this down in LaTeX!

$\displaystyle X_n \to X$ in distribution means that $\displaystyle F_{X_n}(x) \to F_X(x)$ for every x at which $\displaystyle F_X$ is continuous. F denotes the cumulative distribution function.
This means that $\displaystyle \lim_{n \to \infty} \mathbb{P}(X_n \leqslant x)=\mathbb{P}(X \leqslant x)$
This can be translated as :
$\displaystyle \forall \epsilon>0, ~ \exists N_1,~ \forall n \geqslant N_1,~ |\mathbb{P}(X_n \leqslant x)-\mathbb{P}(X \leqslant x)|<\epsilon/2$

$\displaystyle Y_n \to 0$ in probability means that $\displaystyle \forall \delta >0,~\lim_{n \to \infty} \mathbb{P}(|Y_n-0|> \delta)=\lim_{n \to \infty} \mathbb{P}(|Y_n|> \delta)=0$.
This can be translated as :
$\displaystyle \forall \epsilon>0,~ \exists N_2,~ \forall n \geqslant N_2,~ \mathbb{P}(|Y_n|> \delta)<\epsilon/2$

Now you want to show that $\displaystyle \boxed{\lim_{n \to \infty} \mathbb{P}(X_n+Y_n \leqslant x)=\mathbb{P}(X \leqslant x)}$

Write that $\displaystyle \mathbb{P}(X_n+Y_n \leqslant x)=\mathbb{P}(X_n+Y_n \leqslant x ~,~ |Y_n|>\delta )+\mathbb{P}(X_n+Y_n \leqslant x ~,~ |Y_n|\leqslant\delta )$, for any given $\displaystyle \delta >0$ (note the $\displaystyle \leqslant$ in the second event, so that the two events really partition the left-hand one)

Since $\displaystyle \{|Y_n|>\delta\} \supset \{X_n+Y_n \leqslant x ~,~ |Y_n|>\delta \}$ (the comma denotes an intersection of events), we can say that
$\displaystyle \mathbb{P}(X_n+Y_n \leqslant x ~,~ |Y_n|>\delta ) \leqslant \mathbb{P}(|Y_n|> \delta)$
Similarly (this needs a little thought), $\displaystyle \{X_n \leqslant x+\delta\} \supset \{X_n+Y_n \leqslant x ~,~ |Y_n|\leqslant\delta \}$

Hence we get :
$\displaystyle \mathbb{P}(X_n+Y_n \leqslant x) \leqslant \mathbb{P}(|Y_n|> \delta)+\mathbb{P}(X_n \leqslant x+\delta)$

Gaaah... At least this can get you started... I'm really struggling to think at a computer and don't have much time left. I don't even know whether this is needed, since you could use the theorem above...
The steps after that are, roughly: obtain an inequality of the form $\displaystyle \mathbb{P}(X_n+Y_n \leqslant x) \leqslant F_X(x)+\epsilon$

And then take similar steps to get an inequality of the form $\displaystyle \mathbb{P}(X_n+Y_n \leqslant x) \geqslant F_X(x)-\epsilon$

And this would mean that $\displaystyle \mathbb{P}(X_n+Y_n \leqslant x) \longrightarrow F_X(x)$, which is what you want.
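As a numerical sanity check (not part of the proof), one can simulate a concrete case and compare the empirical CDF of $\displaystyle X_n+Y_n$ with $\displaystyle F_X$. The distributions below are just illustrative assumptions: $\displaystyle X_n \sim \mathcal{N}(0,1)$ (already equal in distribution to the limit $\displaystyle X$) and $\displaystyle Y_n \sim \mathrm{Unif}(0,1/n)$, which converges to 0 in probability.

```python
import math
import random

random.seed(0)

n = 1000           # index in the sequence
samples = 200_000  # Monte Carlo sample size
x = 0.5            # point at which we compare the CDFs

# Estimate P(X_n + Y_n <= x) by simulation.
count = 0
for _ in range(samples):
    xn = random.gauss(0.0, 1.0)           # X_n ~ N(0,1)
    yn = random.uniform(0.0, 1.0 / n)     # Y_n ~ Unif(0, 1/n) -> 0 in probability
    if xn + yn <= x:
        count += 1

empirical = count / samples
# CDF of the limit X ~ N(0,1): Phi(x) = (1 + erf(x / sqrt(2))) / 2
exact = 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

print(empirical, exact)  # the two values should be close for large n and samples
```

For large n the uniform perturbation is at most 1/n = 0.001, so the empirical value should differ from $\displaystyle \Phi(x)$ only by the Monte Carlo error.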

3. Originally Posted by Moo
Hello,

Convergence in probability implies convergence in distribution. From here, your problem is obvious
It may become simpler, but I would not say obvious...

Your proof is nice (there are just typos in the set inclusions; they point in the wrong direction). There are many equivalent ways to define convergence in distribution, and each one gives a proof. The proof I knew uses characteristic functions: there is an important theorem of Lévy saying that convergence in distribution to $\displaystyle X$ is equivalent to pointwise convergence of the characteristic functions to $\displaystyle \Phi_X$. It is shorter, but less elementary because of this theorem.

4. Originally Posted by Laurent
It may become simpler, but I would not say obvious... ?

Your proof is nice (there are just typos with the inclusion of sets, they are in the wrong direction). There are numerous equivalent ways to define convergence in distribution, and each one gives a proof. The proof I knew uses characteristic functions (there is an important theorem by Lévy saying that convergence in distribution to $\displaystyle X$ is equivalent to pointwise convergence of the characteristic functions to $\displaystyle \Phi_X$). It is shorter but less elementary due to this theorem.
Yes indeed... careless typos! (They have now been corrected, thanks to Jhevon.)

I realise that in the steps following what I did, there would be some problems because of absolute values... I don't know whether they can be resolved, but they make the proof less simple.

Maybe the best way is to use the theorem that convergence in probability implies convergence in distribution, which lets us say that Yn converges to 0 in distribution, and then to prove that Xn+Yn converges to X in distribution.
Take inspiration from what was done above. If you can't, just post your working here and we'll help you.

5. Originally Posted by Moo
I realise that in the steps that would follow what I did, there would be some problems because of absolute values... I don't know if it is possible to solve them, but it makes the proof less simple
There is not really a problem; for the lower bound, you can write

$\displaystyle P(X_n+Y_n\leq x)\geq P(X_n\leq x-\delta,|Y_n|\leq\delta)$ $\displaystyle =P(X_n\leq x-\delta)-P(X_n\leq x-\delta, |Y_n|>\delta)\geq P(X_n\leq x-\delta)-P(|Y_n|>\delta).$

Then for both the lower and the upper bound, let $\displaystyle \varepsilon>0$ and use the fact that $\displaystyle F_X$ is continuous at $\displaystyle x$ (we prove convergence only at such points $\displaystyle x$) to choose $\displaystyle \delta$ such that $\displaystyle |F_X(x)-F_X(x\pm\delta)|\leq\varepsilon$. In addition, for $\displaystyle n$ large enough, $\displaystyle F_{X_n}(x\pm\delta)$ is close to $\displaystyle F_X(x\pm\delta)$ (this requires $\displaystyle F_X$ to be continuous at $\displaystyle x\pm\delta$, so we should choose $\displaystyle \delta$ accordingly; there are only countably many bad choices, so that's OK), and $\displaystyle P(|Y_n|>\delta)$ is small. That should do.
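Assembling the upper bound from #2 with this lower bound, the squeeze looks as follows (a sketch; $\displaystyle \delta$ is chosen so that $\displaystyle x\pm\delta$ are continuity points of $\displaystyle F_X$). For every $\displaystyle n$,

$\displaystyle \mathbb{P}(X_n \leqslant x-\delta)-\mathbb{P}(|Y_n|>\delta) \leqslant \mathbb{P}(X_n+Y_n \leqslant x) \leqslant \mathbb{P}(X_n \leqslant x+\delta)+\mathbb{P}(|Y_n|>\delta)$

Letting $\displaystyle n \to \infty$, the $\displaystyle \mathbb{P}(|Y_n|>\delta)$ terms vanish and

$\displaystyle F_X(x-\delta) \leqslant \liminf_{n} \mathbb{P}(X_n+Y_n \leqslant x) \leqslant \limsup_{n} \mathbb{P}(X_n+Y_n \leqslant x) \leqslant F_X(x+\delta)$

Finally, letting $\displaystyle \delta \to 0$ along such continuity points and using the continuity of $\displaystyle F_X$ at $\displaystyle x$ squeezes both sides to $\displaystyle F_X(x)$, which is the boxed claim.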

6. Thanks a lot for the help, I think I get how to set the inequalities and take the limits.

7. Hello again,

So we dealt with the generalized problem (where $\displaystyle Y_n \to c$, for any constant c) in today's lecture.

It is known as Slutsky's theorem.

We've written a proof, but it is about a page and a half long... And I couldn't find a proper one on the first page of Google (maybe I didn't look very carefully...)
What Laurent said in #5 is exactly what appears in the proof we wrote, and it looks like what I did in #2 was not so far off...
If you want what we've done, you can make an appointment with me so that I type it up in LaTeX for you lol
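The generalized statement can also be checked numerically in the same spirit as before: if $\displaystyle Y_n \to c$ in probability, the empirical CDF of $\displaystyle X_n+Y_n$ should approach $\displaystyle F_X(x-c)$. The distributions below are illustrative assumptions, not the lecture's example.

```python
import math
import random

random.seed(1)

n = 1000
samples = 200_000
c = 2.0   # the constant limit of Y_n
x = 2.5   # evaluate the CDF of X_n + Y_n here

count = 0
for _ in range(samples):
    xn = random.gauss(0.0, 1.0)              # X_n ~ N(0,1), limit X ~ N(0,1)
    yn = c + random.uniform(-1.0/n, 1.0/n)   # Y_n -> c in probability
    if xn + yn <= x:
        count += 1

empirical = count / samples
# Limit CDF at x: P(X + c <= x) = Phi(x - c) for X ~ N(0,1)
limit = 0.5 * (1.0 + math.erf((x - c) / math.sqrt(2)))

print(empirical, limit)  # should agree up to Monte Carlo error
```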

8. Thank you very much Moo. Actually, my professor handed me the solution for this homework problem. Your hints and suggestions were really helpful.