Quote: Originally Posted by **Media_Man**

Now here is the big question: Picking an element at random, what is the *probability* that it converges? What *portion* of sequences of natural numbers are "large sets" ($\displaystyle \sigma$ diverges) vs. "small sets" ($\displaystyle \sigma$ converges)? And can we get any kind of handle on a line that can be drawn between them?

Here's my answer about the last part.

There are natural probability measures on $\displaystyle \mathcal{P}(\mathbb{N})=\{0,1\}^{\mathbb{N}}$ (with the product $\displaystyle \sigma$-algebra), namely the distributions that consist of picking each number independently with some probability $\displaystyle p\in(0,1)$. This gives random infinite subsets of $\displaystyle \mathbb{N}$ (which almost surely have asymptotic density $\displaystyle p$). Moreover, every natural number plays the same role (the measure is shift-invariant), so these measures appear to be analogs of a "uniform measure". Choosing $\displaystyle p=1/2$ would additionally give a symmetry between the random subset and its complement (they would be distributed alike), but I don't need this assumption in what follows.

In other words, let $\displaystyle p\in(0,1)$ and let $\displaystyle (X_n)_{n\geq 0}$ be independent random variables distributed according to $\displaystyle P(X_n=1)=p$ and $\displaystyle P(X_n=0)=1-p$. The random subset would be $\displaystyle A=\{n\in\mathbb{N}\ \mid\ X_n=1\}$. And the question is:
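To make the model concrete, here is a small sketch (Python; the function name and parameters are mine, not from the thread) that draws the bits $\displaystyle X_n$ and checks that the empirical density of the resulting subset sits near $\displaystyle p$:

```python
import random

def random_subset_bits(N, p, seed=0):
    """Draw X_1, ..., X_N i.i.d. with P(X_n = 1) = p.

    The random subset is A = {n : X_n = 1}.
    """
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for n in range(1, N + 1)]

p = 0.5
bits = random_subset_bits(10**5, p, seed=42)
density = sum(bits) / len(bits)  # empirical density of A in {1, ..., N}
print(density)  # should land near p
```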

What is the probability that the sum $\displaystyle S=\sum_{n=1}^\infty \frac{X_n}{n}$ is finite?

The answer is..., wait for it,... zero. Disappointingly. But not surprisingly if you know Kolmogorov's 0-1 law, which tells us that the answer has to be either 0 or 1.

I've come up with various proofs but no fully elementary one. One possibility is to use the Paley-Zygmund inequality (this is an easy one, don't get scared by the name!) on the partial sums $\displaystyle S_N=\sum_{n=1}^N \frac{X_n}{n}$: since $\displaystyle \mathbb{E}[S_N]=\sum_{n=1}^N \frac{p}{n}\to\infty$ while the variances $\displaystyle \mathrm{Var}(S_N)$ stay bounded, the inequality gives $\displaystyle P(S_N\geq\frac{1}{2}\mathbb{E}[S_N])$ bounded away from 0, hence in the limit $\displaystyle P(S=\infty)>0$, which, given Kolmogorov's 0-1 law, leads to the conclusion.
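As a quick numeric sanity check (a sketch with my own naming, not part of the proof): simulate a few draws of the partial sum $\displaystyle S_N$ and compare the empirical mean with $\displaystyle p\,H_N\approx p\ln N$, which grows without bound:

```python
import math
import random

def partial_sum_S(N, p, rng):
    """One random draw of S_N = sum_{n=1}^N X_n / n."""
    return sum(1 / n for n in range(1, N + 1) if rng.random() < p)

p, N = 0.3, 10**5
rng = random.Random(1)
trials = [partial_sum_S(N, p, rng) for _ in range(20)]
mean_S = sum(trials) / len(trials)
# E[S_N] = p * H_N, approximated via H_N ~ ln N + gamma (Euler-Mascheroni)
expected = p * (math.log(N) + 0.5772)
print(mean_S, expected)  # the empirical mean tracks expected, which -> infinity
```

(The variance of $\displaystyle S_N$ stays bounded by $\displaystyle p(1-p)\sum 1/n^2$, which is why the empirical mean hugs $\displaystyle p H_N$ so closely.)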

Another possibility (without Kolmogorov's 0-1 law) is to write $\displaystyle Y_k=\sum_{n=k^2+1}^{(k+1)^2}X_n$. We notice that almost-surely $\displaystyle Y_k\sim_{k\to\infty} (2k+1)p$ because of the law of large numbers (not quite: there is in fact a subtlety, but never mind, it can be made to work). Then

$\displaystyle S_{k^2}=\sum_{i=0}^{k-1}\sum_{n=i^2+1}^{(i+1)^2}\frac{X_n}{n}$ $\displaystyle \geq\sum_{i=0}^{k-1}\sum_{n=i^2+1}^{(i+1)^2}\frac{X_n}{(i+1)^2}=\sum _{i=0}^{k-1}\frac{Y_i}{(i+1)^2}$,

and almost-surely $\displaystyle \frac{Y_i}{(i+1)^2}\sim_{i\to\infty} \frac{(2i+1)p}{(i+1)^2}\sim_{i\to\infty} \frac{2p}{i}$, so that the right-hand side sum diverges almost-surely as $\displaystyle k\to\infty$.

As a conclusion: for almost-every subset $\displaystyle A$ of $\displaystyle \mathbb{N}$ (according to the previous measures), $\displaystyle \sigma_A=\infty$. The second proof (almost) only uses the fact that a random set has a positive asymptotic density.
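To see the second argument in action, here is a rough numeric check (my own code and naming): compute the lower bound $\displaystyle \sum_{i=0}^{K-1}\frac{Y_i}{(i+1)^2}$ for one random draw and compare it with its predicted growth $\displaystyle 2p\ln K$:

```python
import math
import random

def block_lower_bound(K, p, rng):
    """sum_{i=0}^{K-1} Y_i/(i+1)^2 with Y_i = #{i^2 < n <= (i+1)^2 : X_n = 1}.

    By the inequality above, this lower-bounds the partial sum S_{K^2}.
    """
    total = 0.0
    for i in range(K):
        Y_i = sum(1 for n in range(i**2 + 1, (i + 1)**2 + 1) if rng.random() < p)
        total += Y_i / (i + 1)**2
    return total

p = 0.5
rng = random.Random(7)
lb = block_lower_bound(300, p, rng)
# Y_i/(i+1)^2 ~ 2p/i, so the bound grows like 2*p*ln(K): divergence as K grows
print(lb, 2 * p * math.log(300))
```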

--

About this "frontier" thing, here is something you might like (but maybe you know it already): for any $\displaystyle k\geq 1$, the series

$\displaystyle \sum_n\frac{1}{n\log n \log\log n\cdots \log\log\cdots\log n}$

(with successively $\displaystyle 1, 2, 3,\ldots, k$ nested logs) diverges, while for any $\displaystyle \varepsilon>0$, for any $\displaystyle k\geq 1$, the series

$\displaystyle \sum_n\frac{1}{n\log n\log\log n\cdots (\log\log\cdots\log n)^{1+\varepsilon}}$

(only the last function is raised to the power $\displaystyle 1+\varepsilon$) converges. For a given $\displaystyle \varepsilon>0$, the higher $\displaystyle k$ is, the slower the convergent series decays. This gives an intuition (not a theorem) of how fast a decreasing sequence must decay at minimum to yield a convergent series.
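A numeric illustration of the $\displaystyle k=1$ case (my own code, and only suggestive, since convergence can't be read off finitely many terms): compare partial sums of $\displaystyle \sum 1/(n\log n)$ with those of $\displaystyle \sum 1/(n(\log n)^2)$ (i.e. $\displaystyle \varepsilon=1$):

```python
import math

def partial_sum(term, N, start=2):
    """Partial sum sum_{n=start}^{N} term(n) of a positive series."""
    return sum(term(n) for n in range(start, N + 1))

def divergent_term(n):    # k = 1: 1/(n log n), divergent
    return 1 / (n * math.log(n))

def convergent_term(n):   # k = 1, eps = 1: 1/(n (log n)^2), convergent
    return 1 / (n * math.log(n)**2)

d1, d2 = partial_sum(divergent_term, 10**3), partial_sum(divergent_term, 10**6)
c1, c2 = partial_sum(convergent_term, 10**3), partial_sum(convergent_term, 10**6)
print(d1, d2)  # keeps creeping up, like log log N
print(c1, c2)  # barely moves between N = 10^3 and N = 10^6
```

By the integral test, the tail of the convergent series beyond $\displaystyle N$ is below $\displaystyle 1/\log N$, which is why its partial sums have nearly stopped moving, while the divergent one still creeps up like $\displaystyle \log\log N$.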