I have 2 scenarios:
A=The result of 1 coin tossed 1000 times.
B=The result of 1000 coins tossed 1 time each.
My question is:
Given only the characteristics of A and B, without knowing which is which, could we correctly identify which is A and which is B?
Simply put, are the 2 the same or different?
Intuitively it seems they should be the same, but apparently it is not actually so.
This is according to William Feller's book An Introduction to Probability Theory and Its Applications.
The reason given:
the time average for any individual game is not the same as the ensemble average at any given moment.
The time average DOES NOT obey the Law of Large Numbers, while the ensemble average does.
But I have not yet fully grasped the mathematical reasonings. Can anyone comment on this?
It's a fair point that whoever wins the first toss is statistically more likely to be ahead overall after that point... but isn't this so in both cases? I don't see how it's different, considering all the coins are identical, so the two scenarios should be indistinguishable. He does refer to it as jargon; I think codswallop would be a better word. But then, maybe it's true; I'd just love to see those arc sine laws.
Ok. The arc sine laws are essentially an approximation to the probability distribution of the "last equalisation in n trials",
where an equalisation means the accumulated number of heads equals the accumulated number of tails.
It turns out that this probability follows the arc sine curve: it is highest for equalisations near the first and last trials, and lowest in the middle.
The fraction of time for which the accumulated number of heads leads the accumulated number of tails also follows an arc sine law.
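To see the U-shape concretely, here is a quick Python sketch (the simulation, the `last_equalisation` helper, and the quarter binning are all my own illustration, not Feller's): it records, over many games, when the walk last returned to zero, and bins those times into quarters of the game.

```python
import random

def last_equalisation(n_tosses, rng):
    # Walk the running difference (heads - tails) and record
    # the last time it returns to 0.
    diff, last = 0, 0
    for t in range(1, n_tosses + 1):
        diff += 1 if rng.random() < 0.5 else -1
        if diff == 0:
            last = t
    return last

rng = random.Random(0)
n, trials = 100, 20000
times = [last_equalisation(n, rng) for _ in range(trials)]

# Bin the last-equalisation times into quarters of the game.
quarters = [0, 0, 0, 0]
for t in times:
    quarters[min(3, t * 4 // n)] += 1
print([round(q / trials, 3) for q in quarters])
```

The arc sine law predicts roughly 1/3 of the mass in the first quarter, 1/3 in the last, and only about 1/6 in each middle quarter, and the simulated frequencies come out close to that.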
He asserts that these laws defy common sense, and that this is due to the fact that the waiting-time/equalisation probability distributions do not have bounded expectations, and thus do not obey the Law of Large Numbers.
On the other hand, the ensemble of coins tossed in one instant does follow the LLN, and thus does not exhibit these symptoms.
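The contrast can be simulated directly (this sketch and its names are my own, under the assumption of a fair coin): the time average, taken here as the fraction of a single long game during which heads is in the lead, stays spread over the whole of [0, 1] no matter how long the game, while the ensemble average of many coins tossed at one instant concentrates near 1/2 as the LLN says.

```python
import random

rng = random.Random(2)
n = 1000

def lead_fraction(n, rng):
    # Fraction of the n tosses on which heads is strictly in the lead.
    diff, lead = 0, 0
    for _ in range(n):
        diff += 1 if rng.random() < 0.5 else -1
        lead += diff > 0
    return lead / n

# Time average: the lead fraction in a single long game does NOT settle
# near 1/2; its distribution across games is U-shaped (arc sine law).
fracs = sorted(lead_fraction(n, rng) for _ in range(2000))
print(fracs[100], fracs[1000], fracs[1900])  # 5th, 50th, 95th percentile

# Ensemble average: the mean of n coins tossed at one instant
# concentrates near 1/2, as the LLN promises.
means = [sum(rng.random() < 0.5 for _ in range(n)) / n for _ in range(50)]
print(min(means), max(means))
```

The printed percentiles of the lead fraction sit near 0 and 1 rather than near 1/2, while every ensemble mean lands within a few percent of 1/2.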
Maybe by "in one instant" he meant that we are missing the dimension of time here. But that would be obvious, and there would be no need to add words like "paradoxical" or "startling" to his text, which he did.
That's my understanding. Please comment.
Throwing 1 coin 1000 times is in every respect a symmetric random walk of the simplest kind.
I think Colby is right that it is not time that is important; what matters is whether the trials are ordered in some way. That the 2 cases have the same properties is obvious, because we can convert one into the other: if we flip all the coins in a line, the number of heads and tails will be the same whether we count them from left to right or all at once.
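The conversion argument can be made literal with a small sketch (the setup and variable names are my own illustration): simulate both scenarios and note that the head count of the ordered sequence is the same however you count it, and that each scenario's total is just a Binomial(n, 1/2) draw.

```python
import random

rng = random.Random(3)
n = 1000

# Scenario A: one coin tossed n times, outcomes recorded in order.
a = [rng.random() < 0.5 for _ in range(n)]
# Scenario B: n coins laid in a line and tossed once each, at one instant.
b = [rng.random() < 0.5 for _ in range(n)]

# Counting A's heads left to right, right to left, or all at once gives
# the same total -- the order of counting carries no extra information.
assert sum(a) == sum(reversed(a)) == a.count(True)

# Each total is a single Binomial(n, 1/2) draw; from the counts alone
# the two scenarios are statistically indistinguishable.
print(sum(a), sum(b))
```

So any difference between the scenarios has to come from reading the sequence in order, not from the totals themselves.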
I'm not sure I understand all this stuff fully, but I think the differences
Feller identifies are just the extra information obtained if some of the coins are counted in order. If we count the first 5 (for example) when the coins are ordered HHHHT, then the conclusions we can draw from that are different from the knowledge obtained when the coins are flipped all at once, i.e. that there exist at least 4 heads and one tail.
This very phrasing led me to question whether I had read the passages right at all!
But all in all, I enjoyed his book tremendously, not only as a mathematical text but also as a thriller.