I am trying to understand a proof that if $\{X_n\}$ is fundamental in probability, then there exist a subsequence $\{X_{n_k}\}$ and a random variable $X$ such that the subsequence converges with probability 1 to $X$.

The proof begins with the construction, by indices, of a subsequence $X_{n_1}, X_{n_2}, \dots$, where $n_k > n_{k-1}$ and $n_k$ is the smallest index such that $P(|X_m - X_n| \ge 2^{-k}) \le 2^{-k}$ whenever $m, n \ge n_k$. This implies that $P(|X_{n_{k+1}} - X_{n_k}| \ge 2^{-k}) \le 2^{-k}$.
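To get a feel for the index construction, I tried turning it into code. This is only a toy sketch under my own assumptions (the concrete sequence $X_n = Z + U_n/n$ with $U_n$ uniform on $[-1,1]$ is mine, not from the proof): here $|X_m - X_n| \le 2/\min(m,n)$ holds surely, so $P(|X_m - X_n| \ge 2^{-k}) = 0 \le 2^{-k}$ as soon as $2/n < 2^{-k}$, and $n_k$ can be computed directly:

```python
# Toy sketch of the index construction (my own example, not from the proof):
# with X_n = Z + U_n / n and U_n uniform on [-1, 1], the bound
# |X_m - X_n| <= 1/m + 1/n <= 2/min(m, n) holds surely, so the
# probability condition is met as soon as 2/n < 2^{-k}.

def smallest_index(k, prev):
    """Smallest n > prev with 2 / n < 2**(-k)."""
    n = prev + 1
    while 2.0 / n >= 2.0 ** (-k):
        n += 1
    return n

indices = []
prev = 0
for k in range(1, 8):
    prev = smallest_index(k, prev)
    indices.append(prev)

print(indices)  # [5, 9, 17, 33, 65, 129, 257]
```

The indices grow roughly like $2^{k+1}$ here, which matches the intuition that tighter tolerances $2^{-k}$ force you further out in the sequence.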

The next line goes: "Hence, with probability 1, $|X_{n_{k+1}} - X_{n_k}| < 2^{-k}$ for all but finitely many $k$."

**Can someone explain that step to me please?**

The proof goes on to use that last line to construct a set $N$ of outcomes $\omega$ for which the sum $\sum_k |X_{n_{k+1}}(\omega) - X_{n_k}(\omega)|$ is infinite, and then defines the limit r.v. $X$ to be $0$ when $\omega \in N$, and $\lim_k X_{n_k}(\omega)$ when $\omega \notin N$.
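To convince myself that the subsequence really settles down pointwise off $N$, I wrote a small toy illustration (again under my own assumptions, not the proof's): with $X_n = Z + U_n/n$ and indices $n_k$ hardcoded so that $2/n_k < 2^{-k}$, successive subsequence terms differ by less than $2^{-k}$, so every sample path of $X_{n_k}$ is Cauchy (here $N$ is empty) and the limit r.v. is just $Z$:

```python
import random

# Toy illustration (my own example, not from the proof): along indices n_k
# with 2 / n_k < 2^{-k}, a sample path X_{n_k}(omega) = Z + U/n_k is
# pointwise Cauchy and converges to Z, so off the bad set N (empty here)
# the limit r.v. is lim_k X_{n_k}.
random.seed(0)
Z = random.gauss(0, 1)

def X(n):
    # one sample of X_n(omega) = Z + U_n(omega) / n, with U_n ~ Uniform[-1, 1]
    return Z + random.uniform(-1.0, 1.0) / n

indices = [5, 9, 17, 33, 65, 129, 257]  # hardcoded: smallest n_k with 2/n_k < 2^{-k}
values = [X(n) for n in indices]

# |X_{n_{k+1}} - X_{n_k}| <= 1/n_k + 1/n_{k+1} < 2/n_k < 2^{-k}, surely
gaps = [abs(values[i + 1] - values[i]) for i in range(len(values) - 1)]
print(max(gaps) < 0.5, abs(values[-1] - Z) < 2 ** (-7))  # True True
```

The gap bounds hold deterministically in this toy example; in the actual proof they only hold with probability 1 and only for all but finitely many $k$, which is exactly why the bad set $N$ has to be split off.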

I could also use a little help understanding why this r.v. was defined this way: the subsequence constructed in the proof is supposed to converge with probability 1 to this random variable, but I don't understand what's going on well enough to see it.

**Also, I've been looking at course notes from the past couple of weeks I missed due to illness, and I see a statement (without proof, though I think I can come up with one) that convergence with probability 1 to $X$ is equivalent to the sequence being fundamental in probability. Wouldn't this trivially supersede the above proof? I mean, assuming the sequence is fundamental in probability, this statement says that the entire sequence, which is trivially a subsequence of itself, converges with probability 1?**