# Thread: Almost sure convergence & convergence in probability

1. "Almost sure convergence" always implies "convergence in probability", but the converse is NOT true.

Thus, there exists a sequence of random variables Y_n such that Y_n->0 in probability, but Y_n does not converge to 0 almost surely.
I think this is possible if the Y's are independent, but I still can't think of a concrete example. What is an example of this happening?

Any help is appreciated!

[note: also under discussion in talk stats forum]

2. Hello,

Consider $(X_n)_{n\geq 1}$, a sequence of independent rv's such that:
$P(X_n=0)=p_n$ and $P(X_n=1)=1-p_n$

It is easy to prove that the sequence converges in probability to 0 if and only if $\lim_{n\to\infty} (1-p_n)=0$.

And thanks to Borel-Cantelli's lemma (and its converse), the sequence converges almost surely to 0 if and only if $\sum_{n=1}^\infty (1-p_n)<\infty$.

There are many possibilities for $p_n$ such that it converges in probability and not almost surely. For example $p_n=1-\frac 1n$
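A quick Monte Carlo sanity check of this example with $p_n=1-\frac 1n$ (a sketch only; `estimate_prob_one`, the seed, and the trial count are my own illustration, not from the thread):

```python
import random

def estimate_prob_one(n, trials=20000, seed=0):
    """Monte Carlo estimate of P(X_n = 1), where P(X_n = 1) = 1 - p_n = 1/n."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.random() < 1.0 / n)
    return hits / trials

# For 0 < eps < 1:  P(|X_n - 0| > eps) = P(X_n = 1) = 1/n -> 0,
# which is exactly convergence in probability to 0.
for n in (10, 100, 1000):
    print(n, estimate_prob_one(n))
```

The estimated frequency tracks $1/n$, so $P(|X_n|>\epsilon)\to 0$ as claimed, even though (as the rest of the thread shows) the 1's never stop occurring along a typical sample path.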

3. A nice example is this one, take $([0,1],\mathcal{B},\lambda)$ where the measure is the Lebesgue measure. The sequence is
$X_1=\mathbf{1}_{[0,1]},\quad X_2=\mathbf{1}_{[0,\frac 12]},\quad X_3=\mathbf{1}_{[\frac 12,1]},\quad X_4=\mathbf{1}_{[0,\frac 13]},\quad X_5=\mathbf{1}_{[\frac 13,\frac 23]},\quad X_6=\mathbf{1}_{[\frac 23,1]}$

and so on.

Why does it converge in probability to 0? Because the size of the sets is decreasing (roughly like $1/n$), which converges to 0, so $P(|X_n|>\epsilon)\to 0$.

Why doesn't it converge a.s.? Take any x in [0,1]; then for any N you give me, I can find an n>N such that x lies in the interval defining $X_n$, i.e. $X_n(x)=1$.
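This "typewriter" construction can be checked numerically (a sketch; `typewriter_interval` is my own helper, and the block indexing is one conventional way to enumerate the intervals):

```python
from fractions import Fraction

def typewriter_interval(n):
    """Interval indicated by the n-th 'typewriter' function: block k
    (k = 1, 2, ...) contains the k intervals [j/k, (j+1)/k], j = 0..k-1."""
    k, start = 1, 1
    while start + k <= n:   # advance to the block containing index n
        start += k
        k += 1
    j = n - start           # position of n inside block k
    return Fraction(j, k), Fraction(j + 1, k)

# Interval lengths shrink to 0 (convergence in probability) ...
lengths = [b - a for a, b in (typewriter_interval(n) for n in range(1, 22))]

# ... yet any fixed x is covered in every block, hence infinitely often.
x = Fraction(1, 3)
hits = [n for n in range(1, 22)
        if typewriter_interval(n)[0] <= x <= typewriter_interval(n)[1]]
```

Every sweep of the blocks covers all of [0,1], so each fixed x keeps landing inside the indicated interval, which is exactly the failure of a.s. convergence.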

4. I have no background about measure theory and Lebesgue integral related.

Are there simpler counterexamples?

5. Originally Posted by Moo
Hello,

Consider $(X_n)_{n\geq 1}$, a sequence of independent rv's such that:
$P(X_n=0)=p_n$ and $P(X_n=1)=1-p_n$

It is easy to prove that the sequence converges in probability to 0 if and only if $\lim_{n\to\infty} (1-p_n)=0$.

And thanks to Borel-Cantelli's lemma (and its converse), the sequence converges almost surely to 0 if and only if $\sum_{n=1}^\infty (1-p_n)<\infty$.

There are many possibilities for $p_n$ such that it converges in probability and not almost surely. For example $p_n=1-\frac 1n$
1) Definition: Xn converges to X in probability if for all epsilon > 0, Pr(|Xn-X|>epsilon) -> 0.
But why does your sequence converge in probability to 0 if and only if lim(1-p_n)=0? I don't see why this is true...

2) Borel Cantelli Lemma says that:
Let A1,A2,A3,... be events.
(i) if ∑P(An)<∞, then P(An io)=0
(ii) if the A's are independent, and ∑P(An)=∞, then P(An io)=1
where P(An io) stands for the probability that an infinite number of the A's occurs.
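As a numerical illustration of the lemma's two regimes (a sketch only; `tail_hit_prob`, the window $[1000, 4000]$, and the trial count are my own choices), one can estimate the chance that any event at all occurs in a far tail window:

```python
import random

def tail_hit_prob(p, N, M, trials=500, seed=1):
    """Estimate P(at least one A_n occurs for N <= n <= M), where the A_n
    are independent events with P(A_n) = p(n)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if any(rng.random() < p(n) for n in range(N, M + 1)))
    return hits / trials

# sum 1/n^2 < inf: by part (i), P(An i.o.) = 0 -- far tail windows are empty.
# sum 1/n  = inf:  by part (ii), P(An i.o.) = 1 -- far tail windows stay busy.
print(tail_hit_prob(lambda n: 1.0 / n ** 2, 1000, 4000))
print(tail_hit_prob(lambda n: 1.0 / n, 1000, 4000))
```

The summable case gives an estimate near 0, the non-summable case an estimate near 1 as the window widens, mirroring parts (i) and (ii).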

You said that we can prove that the sequence Xn converges "almost surely" to 0 using this lemma, but I don't see where they talk about "convergence" in the lemma. Maybe the lemma you're using is of a different form? Could you please explain why almost sure convergence follows directly from the lemma for your example?

3) Why is the assumption of independence in your example necessary?

I really want to understand this.
Your help is very much appreciated!

6. Originally Posted by kingwinner
1) Definition: Xn converges to X in probability if for all epsilon > 0, Pr(|Xn-X|>epsilon) -> 0.
But why does your sequence converge in probability to 0 if and only if lim(1-p_n)=0? I don't see why this is true...
Because in her example there are only two states, 0 and 1, so for $0<\epsilon<1$, $\mathbb{P}(X_n>\epsilon)=\mathbb{P}(X_n=1)=1-p_n$. (She claims they converge in probability to 0.)
2) Borel Cantelli Lemma says that:
Let A1,A2,A3,... be events.
(i) if ∑P(An)<∞, then P(An io)=0
(ii) if the A's are independent, and ∑P(An)=∞, then P(An io)=1
where P(An io) stands for the probability that an infinite number of the A's occurs.

You said that we can prove that the sequence Xn converges "almost surely" to 0 using this lemma, but I don't see where they talk about "convergence" in the lemma. Maybe the lemma you're using is of a different form? Could you please explain why almost sure convergence follows directly from the lemma for your example?
Aha, it seems that you haven't really used this lemma before; this will be a fun fact for you. It is used almost everywhere for proving a.s. convergence. The term i.o. means limsup, i.e. $\mathbb{P}(X_n=1\mbox{ i.o.})=\mathbb{P}(\forall N \in \mathbb{N}, \exists n>N \mbox{ s.t. } X_n=1)$. "Infinitely often" means exactly what it says: the event happens infinitely often. If an event a.s. does not happen infinitely often, then it must stop happening after some finite N, which is precisely what convergence is.
3) Why is the assumption of independence in your example necessary?

I really want to understand this.
Your help is very much appreciated!
To use the other Borel-Cantelli Lemma; otherwise she could only conclude the "if" direction. This would mean that she couldn't use the lemma to show the a.s. non-convergence of the sequence.

7. 1) Thanks, this makes perfect sense now.

2) I'm sorry, I still don't get how we can use Borel-Cantelli's lemma to prove that the example in Moo's post does NOT converge to 0 almost surely. I don't see the connection. Can someone please outline the chain of reasoning/steps to establish that proof? In particular, I am confused about the statement: "the sequence converges almost surely to 0 if and only if $\sum_{n=1}^\infty 1-p_n<\infty$". Why?

3) So to prove that it does NOT converge to 0 almost surely, are we using part (i) or part (ii) of Borel Cantelli Lemma?? Part (i) does not require independence.

Borel Cantelli Lemma:
(i) if ∑P(An)<∞, then P(An io)=0
(ii) if the A's are independent, and ∑P(An)=∞, then P(An io)=1

Any help is much appreciated!

8. Okay, have a look:

$X_n$ converges almost surely to 0 iff $P(\lim_{n\to\infty} X_n=0)=1$

But $\lim_{n\to\infty} X_n=0$ can be written in words: $\forall \epsilon >0,\exists N,\forall n>N, |X_n-0|<\epsilon$

Doesn't that look like the description of $\liminf \{|X_n|<\epsilon\}$ ?

In fact, $X_n \to 0 \text{ a.s.} \Leftrightarrow \forall \epsilon>0,P(\liminf\{|X_n|<\epsilon\})=1 \Leftrightarrow \forall \epsilon>0,P(\limsup\{|X_n|>\epsilon\})=0$

Meditate that, in relation with Borel-Cantelli's lemma
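Spelling the meditation out for the two-valued example (my own write-up of the chain, with $A_n=\{X_n=1\}$ and $P(A_n)=1-p_n$):

```latex
% X_n takes values in {0,1}, so X_n(\omega) \to 0 iff X_n(\omega)=1 only finitely often:
\{X_n \to 0\}^{c} \;=\; \limsup_{n\to\infty} A_n \;=\; \{A_n \ \text{i.o.}\}

% Borel-Cantelli (i):
\sum_{n} (1-p_n) < \infty \;\Rightarrow\; P(A_n \ \text{i.o.}) = 0
  \;\Rightarrow\; X_n \to 0 \ \text{a.s.}

% Borel-Cantelli (ii), using independence:
\sum_{n} (1-p_n) = \infty \;\Rightarrow\; P(A_n \ \text{i.o.}) = 1
  \;\Rightarrow\; X_n \not\to 0 \ \text{a.s.}
```

Together the two parts give exactly the "if and only if" in post 2.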

The rest has been explained by Focus I think ^^

9. Hi, all of this still seems very confusing to me

Let X_n be a sequence of independent random variables such that
P(X_n=0)=1-1/n and P(X_n=1)=1/n

So to prove that Xn does not converge to 0 almost surely, I think we have to use part (ii) of Borel-Cantelli Lemma (An's independent & ∑P(An)=∞ => P(An io)=1)

By part (ii) of the B-C lemma, since the X_n are independent and ∑ P(X_n=1) = ∑1/n = ∞, we get P(X_n=1 io)=1. But WHY does this imply that Xn does not converge almost surely to 0? I don't understand this. Does it converge almost surely to any other value?

Can anyone provide more help on this?
Thank you!

10. P(X_n=1 io)=1
This means that almost surely, there are infinitely many indices n such that X_n=1. That contradicts X_n converging almost surely to 0: if it did, then for every epsilon in (0,1) there would be an N such that for any n>N, |X_n-0|< epsilon.
Since X_n only takes values in {0,1}, this means that for any n>N, X_n would equal 0.

In other words, if X_n converged almost surely to 0, there would be only a finite number of n with X_n=1 (since from some rank N onward, everything equals 0).

This is clearly not the case here.

If you studied convergence to 1 instead, you would run into a contradiction in the same way, since $P(X_n=0 \text{ i.o.})=1$ as well (part (ii) again, because $\sum (1-\frac 1n)=\infty$).
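To see this numerically for $P(X_n=1)=\frac 1n$ (a sketch; `fraction_with_one_in_tail` and its parameters are my own illustration): the chance of seeing no 1 between $N$ and a horizon $M$ is exactly $\prod_{n=N+1}^{M}(1-\frac 1n)=N/M$, so however large $N$ is, a later 1 becomes almost certain as $M$ grows.

```python
import random

def fraction_with_one_in_tail(N, horizon, trials=1000, seed=2):
    """Fraction of simulated paths (independent X_n with P(X_n = 1) = 1/n)
    that still produce at least one 1 for some n with N < n <= horizon."""
    rng = random.Random(seed)
    count = sum(1 for _ in range(trials)
                if any(rng.random() < 1.0 / n for n in range(N + 1, horizon + 1)))
    return count / trials

# True probability is 1 - N/horizon: with N = 100 and horizon = 10000 it is
# 0.99, so essentially every path picks up another 1 -- no path settles at 0.
print(fraction_with_one_in_tail(100, 10000))
```

Pushing the horizon further only drives this fraction closer to 1, which is the finite-horizon shadow of P(X_n=1 i.o.)=1.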