You roll 2 dice over and over again. What is the chance that the sum of the 2 dice comes up 4 before it comes up 7?
Can someone please explain how to think about it first, then help me solve it?
I baby-talked my way through this one . . . You roll 2 dice over and over again.
What is the probability that the sum of the 2 dice comes up 4 before it comes up 7?
We have: P(4) = 3/36 = 1/12, P(7) = 6/36 = 1/6, P(Other) = 27/36 = 3/4.
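A quick way to check these single-roll probabilities is to enumerate all 36 equally likely outcomes of two dice; a minimal Python sketch:

```python
from itertools import product

# Tally how many of the 36 ordered outcomes give each sum.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

# 3 ways to roll a 4, 6 ways to roll a 7, 27 ways to roll anything else.
print(counts[4], counts[7], 36 - counts[4] - counts[7])  # 3 6 27
```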
I considered the probability that the first 4 turned up on the n-th roll
. . and that no 7 had appeared before it.
[After that, we don't care what happens.
Eventually a 7 will show up, but 4 has already appeared first.]
We have: 4 on 1st roll: 1/12.
4 on 2nd roll, Other-Four: (3/4)(1/12).
4 on 3rd roll, Other-Other-Four: (3/4)^2(1/12).
. . and so on . . .
The probability that 4 appears before 7 is a geometric series:
. . P = (1/12)[1 + 3/4 + (3/4)^2 + (3/4)^3 + ...] = (1/12) · 1/(1 - 3/4) = (1/12)(4) = 1/3
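The series can be summed exactly with Python's `fractions` module; this sketch assumes the single-roll probabilities P(4) = 1/12 and P(neither 4 nor 7) = 3/4:

```python
from fractions import Fraction

p4 = Fraction(1, 12)       # P(sum = 4) on one roll
p_other = Fraction(3, 4)   # P(sum is neither 4 nor 7) on one roll

# Geometric series: p4 * (1 + r + r^2 + ...) = p4 / (1 - r), with r = p_other
answer = p4 / (1 - p_other)
print(answer)  # 1/3
```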
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Now that I see the answer, an interesting approach occurs to me.
. . (Ain't hindsight a wunnerful thing?)
Since P(7) = 6/36 and P(4) = 3/36, we see that a 7 is twice as likely as a 4.
Ignoring the Other values, it is clear that a 4 appears 1/3 of the time
. . and a 7 appears 2/3 of the time.
Is this a path to a valid solution?
It goes like: we perform independent dice rolls until the sum is either 4 or 7. As a consequence, at the time when this happens, the distribution of the dice roll is the conditional distribution given that the sum is either 4 or 7.
We deduce that the probability that 4 appears before 7 is P(S = 4 | S ∈ {4, 7}) = (3/36)/(3/36 + 6/36) = 3/9 = 1/3, where S stands for the sum of two independent dice.
The good thing about this solution is that it involves less computation and that the result is better understood. The bad thing is that I didn't prove the fact about the conditional distribution, and if I did, my proof wouldn't be shorter or more elementary than Soroban's. It would actually go through the exact same computation.
This property is an important one to know (that's why I took it for granted): one way to realize a conditional distribution is to perform an experiment repeatedly (and independently) until the condition is fulfilled.
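One can also check this repeat-until-the-condition-holds fact empirically. A short Monte Carlo sketch (the helper name is mine) rolls two dice until the sum is 4 or 7 and records how often the stopping sum is 4:

```python
import random

def roll_until_4_or_7(rng):
    """Roll two dice repeatedly; return the first sum that is 4 or 7."""
    while True:
        s = rng.randint(1, 6) + rng.randint(1, 6)
        if s in (4, 7):
            return s

rng = random.Random(0)     # fixed seed for reproducibility
trials = 100_000
wins = sum(roll_until_4_or_7(rng) == 4 for _ in range(trials))
print(wins / trials)       # should be close to 1/3
```

The empirical frequency matches the conditional probability P(S = 4 | S ∈ {4, 7}) = 1/3, as the argument above predicts.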