Originally Posted by

**Gazmann** Hello, I’m an undergraduate medical student (i.e., no mathematics education past high school), and I’m having difficulty understanding a concept in E. T. Jaynes’s *Probability Theory: The Logic of Science*. On pp. 63-67 of the 2nd ed., he discusses a Bernoulli urn with N = 4 balls, of which M = 2 are red (and N − M = 2 white), from which we must randomly draw n = 3. The balls are not replaced. (Let this proposition ≡ B.) He asks: how does knowledge that a red ball will be drawn on the second (R2) or third (R3) draw affect the probability of drawing a red ball on the first (R1)?

He reveals the surprising (and awesome!) result that P(R1 | R2 + R3, B) > P(R1 | R2, B). I understand his intuitive explanation for it, but not his formal one. He summarises it thus:

“... when the fraction F = M/N of red balls is known, then the Bernoulli urn rule applies, and P(R1 | B) = F. When F is unknown, the probability for red is the expectation of F: P(R1 | B) = <F> ≣ E(F). If M and N are both unknown, the expectation is over the joint probability distribution for M and N.”

I tried calculating E(F) (as I assume the second scenario, not the third, applies here), but arrived at an erroneous result. In the intuitive working, he shows P(R1 | R2 + R3, B) = (4/5) / 2, calling the numerator the ‘effective M’ and the denominator ‘N − 2’. In his formal explanation, he states that the ‘effective M’ is the expected value of M, E(M).
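To make sure I’m reading the numbers right, here is a brute-force check I put together myself (my own sketch, not from the book): enumerate every equally likely ordering of the four balls and count cases directly.

```python
# Brute-force check of the two conditional probabilities in the urn problem:
# N = 4 balls, M = 2 red, drawn without replacement. Every ordering of the
# balls is equally likely, so we can just enumerate and count.
from fractions import Fraction
from itertools import permutations

balls = ['R', 'R', 'W', 'W']
orders = list(permutations(balls))  # 24 equally likely orderings

def prob(event):
    """Probability that `event` (a predicate on an ordering) holds."""
    return Fraction(sum(1 for o in orders if event(o)), len(orders))

# P(R1 | R2, B) = P(R1 and R2) / P(R2)
p_r1_given_r2 = (prob(lambda o: o[0] == 'R' and o[1] == 'R')
                 / prob(lambda o: o[1] == 'R'))

# P(R1 | R2 + R3, B), where '+' is Jaynes's logical OR
p_r1_given_or = (prob(lambda o: o[0] == 'R' and (o[1] == 'R' or o[2] == 'R'))
                 / prob(lambda o: o[1] == 'R' or o[2] == 'R'))

print(p_r1_given_r2)  # 1/3
print(p_r1_given_or)  # 2/5, which is (4/5) / 2, matching the 'effective M' form
```

So the numerical values check out; it’s only the formal route through E(F) that I can’t reproduce.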

So, I don’t see how that fits into finding the expected value of F. Basically, I don’t know where the N term fits into it all. Any help would be much appreciated; if I have been too unclear, I will make screenshots of the pages.