# Math Help - Probability for a countable set?

1. ## Probability for a countable set?

What is the chance of selecting a particular number from, say, the set of integers? I know it's 0 for a set of reals and positive for a finite set, but what about a countable set?

I guess this would be a discrete uniform distribution over an infinite set. I know this might cause problems with standard definitions, but is there any way to make extensions or something to properly define this?

2. The short answer? No. You can't put anything like a uniform distribution on a countable set. For the total probability to be finite, you would need to assign probability 0 to every singleton (equal positive masses would sum to infinity). But the space itself is the countable union of its singletons, each of probability 0, so by countable additivity the whole space would have probability 0 rather than 1, which is a contradiction.
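
Spelled out, with $\Omega = \{x_1, x_2, \dots\}$ the countable space and $c$ the common mass a uniform distribution would assign to each singleton, countable additivity forces

\[
P(\Omega) \;=\; \sum_{n=1}^{\infty} P(\{x_n\}) \;=\; \sum_{n=1}^{\infty} c \;=\;
\begin{cases}
0, & c = 0,\\
\infty, & c > 0,
\end{cases}
\]

and neither value is the required $P(\Omega) = 1$.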

As far as I know there isn't a canonical way of bending the rules so that you can get away with things like this. I suppose a Bayesian might put a unit mass on each singleton and use it as an improper distribution for some purpose. But it wouldn't be a bona fide probability distribution.

3. So is there no way to answer the question "What's the chance of choosing a particular integer?" even though it's intuitive to think the chance of choosing one from infinitely many is 1/infinity = 0? (Which raises the question: is 1/uncountable "less than" 1/countable?)

I'm wondering if there's a way to extend functions or something to get a more general theory that covers this question, because it seems that probability theory is inherently incomplete if it can't answer such a simple question.

And for rationals also. Say you have a uniform distribution but only rational numbers are considered. Am I supposed to just approximate it with the continuous distribution since the rationals are densely ordered?

4. Another thing you could do, actually, is to think of asymptotic densities as being kind of probabilities. You don't get countable additivity with asymptotic density, but it makes a little bit of sense intuitively: if you pick a natural number at random, for instance, you would think that you would get an even one about half the time, which is what thinking of things in terms of asymptotic densities tells you to expect. The flip side of this is that the collection of sets for which the asymptotic density is defined isn't even a field.
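
As a quick numerical illustration (a minimal sketch; the helper `density` is just a name I'm using here), the partial densities behave the way this intuition suggests:

```python
def density(pred, N):
    """Partial asymptotic density: the fraction of {1, ..., N} satisfying pred."""
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

# Even numbers: the partial density sits at 1/2.
print(density(lambda n: n % 2 == 0, 10**5))  # 0.5

# Any single fixed number: the partial density is 1/N, which goes to 0.
print(density(lambda n: n == 42, 10**5))     # 1e-05
```

The singleton case also shows why this can't be countably additive: every integer individually has density 0, yet the set of all integers has density 1.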

5. Originally Posted by anomaly
And for rationals also. Say you have a uniform distribution but only rational numbers are considered. Am I supposed to just approximate it with the continuous distribution since the rationals are densely ordered?
Nope, you can't put a uniform distribution on the rationals. It's somewhat surprising, but it just doesn't work. You can't do it with point masses, and you can't do it with Lebesgue measure (which is what you use to construct the uniform distribution for the reals). It's not so much a flaw with probability theory as it is with the rationals.
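
The Lebesgue-measure half of this can be made concrete: the rationals in $[0,1]$ are countable, say $q_1, q_2, \dots$, so we can cover $q_n$ by an interval of length $\varepsilon/2^n$, giving for every $\varepsilon > 0$

\[
\lambda\bigl(\mathbb{Q}\cap[0,1]\bigr) \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} \;=\; \varepsilon .
\]

So the rationals have Lebesgue measure 0, and a "uniform density" supported on them would integrate to 0 rather than 1.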

6. So, going with asymptotic density, the chance of picking any particular integer from {1, ..., n} would be 1/n, which goes to 0 as n goes to infinity (all the integers)? I see.

Originally Posted by theodds
Nope, you can't put a uniform distribution on the rationals. It's somewhat surprising, but it just doesn't work. You can't do it with point masses, and you can't do it with Lebesgue measure (which is what you use to construct the uniform distribution for the reals). It's not so much a flaw with probability theory as it is with the rationals.
But in real life your measurements will be rational, so if you have an experiment that deals entirely with rationals, wouldn't you just use the continuous probability, get its results, and apply that?

7. In real life, you don't really sample from continuous distributions, you just pretend that you do. Even if the underlying process really did result in real-valued output from a uniform distribution, you wouldn't get to measure the output in that case.

I would caution against leaning too heavily on asymptotic densities as probabilities, since when people say "probability" in mathematics they require a specific structure (see Probability Space). Asymptotic density kind of makes sense, but for a lot of reasons it also doesn't: it isn't defined for a lot of sets (not bad), isn't countably additive (pretty bad), and even if it is defined for two sets A and B it isn't necessarily defined for their union (quite bad). It's still a useful concept in number theory, but it doesn't quite do all the things we would want from a probability.
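
To make the "quite bad" point concrete, here is one standard construction (the dyadic-block setup is my own illustration, not something from the thread): let A be the even numbers, and let B agree with the evens on blocks [2^j, 2^(j+1)) with even j and with the odds on the other blocks. A and B each have density 1/2, but the partial densities of A ∪ B oscillate between roughly 2/3 and 5/6 forever, so the union has no asymptotic density:

```python
def block(n):
    # Index j of the dyadic block [2**j, 2**(j+1)) containing n.
    return n.bit_length() - 1

def in_A(n):  # A = the even numbers
    return n % 2 == 0

def in_B(n):  # B = evens on even-indexed blocks, odds on odd-indexed blocks
    return n % 2 == block(n) % 2

def density(pred, N):
    # Partial asymptotic density: fraction of {1, ..., N} satisfying pred.
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

union = lambda n: in_A(n) or in_B(n)

# A and B individually have partial density 1/2 at every dyadic scale ...
print(density(in_A, 2**16), density(in_B, 2**16))  # 0.5 0.5

# ... but the partial density of A ∪ B keeps oscillating:
print(density(union, 2**16))  # ≈ 0.833 (about 5/6)
print(density(union, 2**17))  # ≈ 0.667 (about 2/3)
```

Along N = 4^k the partial density of the union approaches one value, and along N = 2·4^k it approaches another, so no limit exists even though both pieces have perfectly good densities.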

8. This reminds me of something brought up by James Robert Brown in his book Philosophy of Mathematics. He brings up Freiling's article "Axioms of Symmetry: Throwing Darts at the Real Number Line". The point of the argument is to use these ideas to argue against the continuum hypothesis (CH), but it is relevant to what is being discussed here, because the random dart throwing involves events of probability zero, as mentioned elsewhere in this thread. I leave the ideas for any interested readers to consider.

9. Originally Posted by theodds
The short answer? No. You can't put anything like a uniform distribution over a countable set. In order for the probability of the full space to be less than infinity, you would need to assign probability 0 to all the singletons. But the space itself can be expressed as the union of all the singletons, each of which has probability 0, so by countable additivity the whole space would have probability 0, which is a contradiction.

As far as I know there isn't a canonical way of bending the rules so that you can get away with things like this. I suppose a Bayesian might put a unit mass on each singleton and use it as an improper distribution for some purpose. But it wouldn't be a bona fide probability distribution.
I must say that I am always uncomfortable when I see improper priors in Bayesian statistics and always go out of my way to avoid them myself.

CB

10. Originally Posted by anomaly
So, going with asymptotic density, the chance would be 1/n, so at infinity (integers), it becomes 0? I see.

But in real life your measurements will be rational, so if you have an experiment that deals entirely with rationals, wouldn't you just use the continuous probability, get its results, and apply that?
In real life your measurements are not rationals, a measurement is something more like an interval. Exactly how like an interval depends on the instrument.

CB