# fundamental in probability & convergence with probability 1

• Feb 22nd 2010, 07:52 AM
cribby
fundamental in probability & convergence with probability 1
I am trying to understand a proof that if $\displaystyle \{X_n\}$ is fundamental in probability, then there exists a subsequence and a random variable $\displaystyle X$ such that the subsequence converges with probability 1 to $\displaystyle X$.

The proof begins by constructing a subsequence of indices, with $\displaystyle n_1 = 1$ and $\displaystyle n_{i+1}$ the smallest index greater than $\displaystyle n_i$ such that $\displaystyle P(\{ \vert X_m - X_n \vert \geq 2^{-i}\}) < 2^{-i}$ for all $\displaystyle m, n \geq n_{i+1}$. This implies that $\displaystyle \sum_{k=1}^{\infty} P(\{\vert X_{n_{k+1}} - X_{n_k}\vert \geq 2^{-k}\}) < \sum_1^{\infty} 2^{-k}< \infty$.

The next line goes: Hence, $\displaystyle \sum_1^{\infty} \vert X_{n_{k+1}} - X_{n_k} \vert < \infty$ with probability 1.

**Can someone explain that step to me please?**

The proof goes on to use that last line to construct a set $\displaystyle N$ of outcomes for which the sum is infinite, and then defines the limit r.v. $\displaystyle X$ to be 0 on $\displaystyle N$, and $\displaystyle X_{n_1} + \sum_1^{\infty}(X_{n_{k+1}} - X_{n_k})$ off $\displaystyle N$.

I could also use a little explanation of why this r.v. was defined the way it was: the subsequence constructed in the proof is supposed to converge with probability 1 to this random variable, but I don't understand what's going on well enough to see it.

**Also, I've been looking at course notes from the past couple of weeks I missed due to illness, and I see a statement (without proof, though I think I can come up with one) that convergence with probability 1 to X is equivalent to the sequence being fundamental in probability. Wouldn't this trivially supersede the above proof? I mean, assuming the sequence is fundamental in probability, this statement says that the entire sequence, which is trivially a subsequence of itself, converges with probability 1?**
• Feb 22nd 2010, 12:34 PM
Laurent
Quote:

Originally Posted by cribby
I am trying to understand a proof that if $\displaystyle \{X_n\}$ is fundamental in probability, then there exists a subsequence and a random variable $\displaystyle X$ such that the subsequence converges with probability 1 to $\displaystyle X$.

The proof begins by constructing a subsequence of indices, with $\displaystyle n_1 = 1$ and $\displaystyle n_{i+1}$ the smallest index greater than $\displaystyle n_i$ such that $\displaystyle P(\{ \vert X_m - X_n \vert \geq 2^{-i}\}) < 2^{-i}$ for all $\displaystyle m, n \geq n_{i+1}$. This implies that $\displaystyle \sum_{k=1}^{\infty} P(\{\vert X_{n_{k+1}} - X_{n_k}\vert \geq 2^{-k}\}) < \sum_1^{\infty} 2^{-k}< \infty$.

The next line goes: Hence, $\displaystyle \sum_1^{\infty} \vert X_{n_{k+1}} - X_{n_k} \vert < \infty$ with probability 1.

**Can someone explain that step to me please?**

By the Borel–Cantelli lemma, almost surely there is $\displaystyle k_0\geq 1$ such that, for all $\displaystyle k\geq k_0$, $\displaystyle |X_{n_{k+1}}-X_{n_k}|\leq 2^{-k}$, hence $\displaystyle \sum_1^\infty |X_{n_{k+1}}-X_{n_k}|\leq \sum_{k=1}^{k_0-1}|X_{n_{k+1}}-X_{n_k}|+\sum_{k=k_0}^\infty \frac{1}{2^k}<\infty$.
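To see the Borel–Cantelli step at work numerically, here is a small simulation sketch (not from the thread; it treats the events as independent purely for convenience, which the first Borel–Cantelli lemma does not require, and uses the bound $\displaystyle P(A_k) < 2^{-k}$ from the construction):

```python
import random

random.seed(12345)

PATHS, K = 2000, 200  # number of simulated sample paths; truncation of the sum

# For each path, simulate indicators of the events
# A_k = {|X_{n_{k+1}} - X_{n_k}| >= 2^-k}, each given the bound
# P(A_k) < 2^-k from the construction.  (Independence across k is an
# assumption of this toy simulation only; Borel-Cantelli I doesn't need it.)
max_bad_index = []
for _ in range(PATHS):
    bad = [k for k in range(1, K) if random.random() < 2.0 ** -k]
    max_bad_index.append(max(bad) if bad else 0)

# Borel-Cantelli: since sum_k P(A_k) < 1 < infinity, almost every path has
# only finitely many exceedances.  Beyond the last one, |X_{n_{k+1}} - X_{n_k}|
# is < 2^-k, so the tail of the sum is dominated by a convergent geometric series.
k0 = max(max_bad_index) + 1  # past this index, no simulated path exceeds 2^-k
tail_bound = sum(2.0 ** -k for k in range(k0, K))
print(f"last exceedance before k = {k0}; tail of the sum is below {tail_bound:.3g}")
```

Across all 2000 simulated paths the last exceedance occurs at a small index, after which the tail of $\displaystyle \sum_k |X_{n_{k+1}}-X_{n_k}|$ is bounded by a geometric tail, which is exactly the finiteness argument above.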

Quote:

The proof goes on to use that last line to construct a set $\displaystyle N$ of outcomes for which the sum is infinite, and then defines the limit r.v. $\displaystyle X$ to be 0 on $\displaystyle N$, and $\displaystyle X_{n_1} + \sum_1^{\infty}(X_{n_{k+1}} - X_{n_k})$ off $\displaystyle N$.

I could also use a little explanation in understanding why this r.v. was so defined, because the subsequence constructed in the proof is supposed to converge with probability one to this random variable but I don't understand whats going on well enough to see it.
It is correct that the value of $\displaystyle X$ on $\displaystyle N$ is arbitrary; they choose it to be 0. It couldn't be defined there as the limit of the series, since on $\displaystyle N$ the series may not converge.

In order to be rigorous, it is important to define $\displaystyle X$ everywhere (as for any r.v.), even though only the values on $\displaystyle N^c$ really matter, $\displaystyle N$ having probability 0.

Quote:

convergence with probability 1 to X is equivalent to the sequence being fundamental in probability.
This is wrong. Being "fundamental in probability" (a not so common notion, btw) is equivalent to converging in probability to some random variable.
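A standard counterexample (the "typewriter" or sliding-interval sequence, not from this thread) shows why convergence in probability only buys you an almost surely convergent *subsequence*, which is exactly the content of the theorem being proved. The sketch below works on the probability space $\displaystyle [0,1)$ with Lebesgue measure:

```python
def X(n: int, omega: float) -> int:
    """Typewriter sequence on [0,1): write n = 2**k + j with 0 <= j < 2**k;
    X_n is the indicator of the interval [j/2**k, (j+1)/2**k)."""
    k = n.bit_length() - 1
    j = n - 2 ** k
    return 1 if j / 2 ** k <= omega < (j + 1) / 2 ** k else 0

omega = 0.3  # a fixed sample point

# P(X_n != 0) = 2**-k -> 0, so X_n -> 0 in probability; but for every omega
# the moving interval keeps returning, so X_n(omega) = 1 infinitely often:
hits = [n for n in range(1, 129) if X(n, omega) == 1]
print(hits)  # [1, 2, 5, 10, 20, 41, 83] -- no pointwise convergence

# Along the subsequence n_k = 2**k the intervals shrink to [0, 2**-k),
# so X_{n_k}(omega) -> 0 for every omega > 0, i.e. almost surely:
sub = [X(2 ** k, omega) for k in range(1, 8)]
print(sub)  # [1, 0, 0, 0, 0, 0, 0]
```

So this sequence is fundamental in probability (it converges in probability to 0) yet converges almost surely along no full sequence of indices, only along well-chosen subsequences.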
• Feb 22nd 2010, 08:15 PM
cribby
Quote:

Originally Posted by Laurent
By the Borel–Cantelli lemma, almost surely there is $\displaystyle k_0\geq 1$ such that, for all $\displaystyle k\geq k_0$, $\displaystyle |X_{n_{k+1}}-X_{n_k}|\leq 2^{-k}$, hence $\displaystyle \sum_1^\infty |X_{n_{k+1}}-X_{n_k}|\leq \sum_{k=1}^{k_0-1}|X_{n_{k+1}}-X_{n_k}|+\sum_{k=k_0}^\infty \frac{1}{2^k}<\infty$.

Awesome--thanks!

Quote:

It is correct that the value of $\displaystyle X$ on $\displaystyle N$ is arbitrary; they choose it to be 0. It couldn't be defined there as the limit of the series, since on $\displaystyle N$ the series may not converge.

In order to be rigorous, it is important to define $\displaystyle X$ everywhere (as for any r.v.), even though only the values on $\displaystyle N^c$ really matter, $\displaystyle N$ having probability 0.
I think I miscommunicated my difficulty with this part of the proof. What would help me get up to speed here is a genetic account of the nonzero portion of the limit r.v. I get that everything has to go somewhere and so we send the stuff we're really not interested in to some garbage value, but I'm not clear why the nonzero portion was defined the way it was. I guess what I'm looking for is some heuristic enlightenment--how, by what brain crevasses, would one generate this?

Quote:

This is wrong. Being "fundamental in probability" (a not so common notion, btw) is equivalent to converging in probability to some random variable.
Great. (Insert rant on math faculty who insist on making outdated/marginal concepts core course material here.)

I found another possible correction to the statement in the course notes--that "fundamental almost surely" is equivalent to convergence almost surely. (Insert rant on math faculty who after more than 2 decades of teaching still can't control the flow of a lecture or their boardwork here.)

Thanks again, I really appreciate it!!
• Feb 23rd 2010, 09:58 AM
Laurent
Quote:

Originally Posted by cribby
I think I miscommunicated my difficulty with this part of the proof. What would help me get up to speed here is a genetic account of the nonzero portion of the limit r.v. I get that everything has to go somewhere and so we send the stuff we're really not interested in to some garbage value, but I'm not clear why the nonzero portion was defined the way it was. I guess what I'm looking for is some heuristic enlightenment--how, by what brain crevasses, would one generate this?

Oh, I see... :-s
There is a very good reason for this definition: quite generally, for a sequence $\displaystyle (a_n)_n$, we have $\displaystyle a_n=a_1+(a_2-a_1)+\cdots+(a_n-a_{n-1})$ (a "telescoping sum": the intermediate terms cancel), hence

($\displaystyle (a_n)_n$ converges toward $\displaystyle a$) iff (the series $\displaystyle \sum_k (a_{k+1}-a_k)$ converges and $\displaystyle a_1+\sum_{k=1}^\infty (a_{k+1}-a_k)=a$).

The series is just a convenient way to rewrite the limit, since we can then use one of the numerous convergence tests (here, the comparison test). Another way of writing the proof would be: "(....) thus almost surely the series $\displaystyle \sum_k (X_{n_k}-X_{n_{k-1}})$ converges (since it converges absolutely), which is to say that almost surely the sequence $\displaystyle (X_{n_k})_k$ converges (since $\displaystyle X_{n_k}=X_{n_1}+\sum_{i=2}^k (X_{n_i}-X_{n_{i-1}})$ for all $\displaystyle k$). Let us define $\displaystyle X$ to be the limit of this sequence when it converges, and 0 otherwise."
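The telescoping identity and the comparison test are easy to check numerically. A quick sketch on the concrete sequence $\displaystyle a_n = 1 - 2^{-n}$ (my choice, purely for illustration):

```python
# Telescoping: a_n = a_1 + sum_{k=1}^{n-1} (a_{k+1} - a_k); the intermediate
# terms cancel.  We check it on the concrete sequence a_n = 1 - 2**-n.
def a(n: int) -> float:
    return 1.0 - 2.0 ** -n

n = 50
telescoped = a(1) + sum(a(k + 1) - a(k) for k in range(1, n))
assert abs(telescoped - a(n)) < 1e-12  # the partial sums reproduce a_n

# Here |a_{k+1} - a_k| = 2**-(k+1), so the series of differences converges
# absolutely by comparison with the geometric series, exactly as in the proof:
abs_sum = sum(abs(a(k + 1) - a(k)) for k in range(1, 10_000))
print(abs_sum)  # ~0.5 = sum_{k>=1} 2**-(k+1)
```

Absolute convergence of the series of differences is what the Borel–Cantelli step delivers almost surely, and by the identity above it forces the sequence itself to converge.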

Quote:

I found another possible correction to the statement in the course notes--that "fundamental almost surely" is equivalent to convergence almost surely.
That would indeed be correct, since it is just another way of saying that a sequence of real numbers converges if (and only if) it is Cauchy.