# Thread: Weird Probability (but a bit simpler)

1. ## Weird Probability (but a bit simpler)

Hello all,

I posted earlier today (please forgive this), but I have made the problem a bit simpler, so I thought I would give it another go. I cannot figure this out for the life of me, and I am starting to think it may not be solvable (though I hope you people who are much smarter than me can shed some light).

So I want to know the expected value of a prize y (I am comparing this with a lottery in which you get a prize for certain). At time zero, you have probability (1-x) of getting y and probability x of getting 0. If you got the prize y at time zero, you have a new probability (1-2x) of getting the prize at time 1 and probability 2x of not getting it.

The tricky bit is this: if you did not get the prize at time zero, the probability of getting the prize at time 1 stays the same (that is, 1-x). And this goes on for an infinite number of periods... The probability of getting the prize goes down if you got the prize yesterday, but it stays the same if you didn't.

I hope this makes sense, and really any help (even to put me out of my misery) will be appreciated more than you probably know.

Sincerely,

Nick

2. This makes sense, except for exactly what "y" stands for: it looks like it is the prize at any given time (so it would be a fixed amount, like y = $5), and yet you then ask for the expected value of y, as if it were the total amount won, or something like that (a random variable)... Furthermore, what happens after 1-2x? Does it become 1-3x, etc.? But then this is eventually negative! What's the pattern here?

3. Originally Posted by Laurent
This makes sense, except for exactly what "y" stands for: it looks like it is the prize at any given time (so it would be a fixed amount, like y = $5), and yet you then ask for the expected value of y, as if it were the total amount won, or something like that...

By the way, assuming I have understood your situation correctly: if the game goes on forever, the player wins an infinite number of times, even though the probability decreases (it does not decrease fast enough to make the number of wins finite).
Hi Laurent,

Thanks for your response. Yes, y is just a fixed amount (I shouldn't have asked for the expected value, which is clearly just y). The idea is that you have a choice of playing two games: one is described as above, where you win y with a probability that stochastically declines over time, and in the other you win a smaller prize (valued less than y) every period for certain. I want to see when you should pick the first game in terms of the aforementioned parameters. Payoffs each period are also discounted by a constant factor, so they converge to zero as time approaches infinity (I should have said that). And yes, 1-2x will become 1-3x, which will become negative at some point, so I guess the game will just end when it reaches zero (so rather than an infinite number of periods, you have 1/x periods, where x is small). I hope that's a bit clearer,

Nick

4. To sum it up:

Assume to make things easier that $\displaystyle x=\frac{1}{N}$ where $\displaystyle N$ is an integer.

At time $\displaystyle t$, if the prize has been won $\displaystyle k$ times (with $\displaystyle 0\leq k< N$) during the past, then the probability of winning again is $\displaystyle 1-\frac{k}{N}$. After a total of $\displaystyle N$ wins, the game ends.
Furthermore, the payoff at time $\displaystyle t$ (if any) is $\displaystyle y c^t$, where $\displaystyle 0<c<1$.

Is that it?

--
To avoid problems with probabilities becoming negative, you could say: if the prize has been won $\displaystyle k$ times (with $\displaystyle k\geq 0$) during the past, then the probability of winning again is $\displaystyle \frac{1}{2+k}$ (for instance, or $\displaystyle \frac{1}{2^k}$, ...). That way it makes sense to consider an infinite game.
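A small simulation may make the setup concrete (this is my own sketch, not from the thread; it uses the 1-k/N rule summarized above, and you can swap in another schedule such as 1/(2+k) for the infinite variant):

```python
import random

def play_once(N, c, y, rng):
    """One run of the game: in each period t = 1, 2, ..., with k wins so far,
    the prize y * c**t is won with probability 1 - k/N; after N wins the game ends."""
    k, t, total = 0, 0, 0.0
    while k < N:
        t += 1
        if rng.random() < 1 - k / N:
            total += y * c ** t
            k += 1
    return total

# Monte Carlo estimate of the expected discounted total win
rng = random.Random(0)
print(sum(play_once(4, 0.9, 1.0, rng) for _ in range(10000)) / 10000)
```

Note that under this rule the very first win is certain (probability 1 - 0/N = 1), which differs slightly from the original post, where the first win already has probability 1 - x.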

5. Yes, that is very helpful. I am still stuck on where to go next, though.

Nick

6. Originally Posted by salohcin
Yes, that is very helpful. I am still stuck on where to go next, though.
Then, if the payoff were constant, the total win would be $\displaystyle Ny$ because eventually the player wins $\displaystyle N$ times before the game ends.

Taking the geometric decrease of the payoff into account, the total win is $\displaystyle S=(c^{T_1}+\cdots +c^{T_N})y$ where $\displaystyle T_1,\ldots,T_N$ are the winning times. Note that $\displaystyle T_{k+1}-T_k$ is a geometric random variable of parameter $\displaystyle 1-\frac{k}{N}$ (after k wins, the next win has probability $\displaystyle 1-\frac{k}{N}$ to happen), and these random variables are independent.

If $\displaystyle T$ is a geometric r.v. with parameter $\displaystyle p$, then one computes $\displaystyle E[c^T]=\frac{cp}{1-c(1-p)}$. Using this, we get (up to possible mistakes)

$\displaystyle E[S]=y \Big(\frac{c(1-\frac{1}{N})}{1-\frac{c}{N}}+\frac{c(1-\frac{1}{N})}{1-\frac{c}{N}}\frac{c(1-\frac{2}{N})}{1-\frac{2c}{N}}+\cdots$ $\displaystyle +\frac{c(1-\frac{1}{N})}{1-\frac{c}{N}}\cdots \frac{c(1-\frac{N-1}{N})}{1-\frac{(N-1)c}{N}}\Big)$.

This is not very nice, but I'm not sure this can be turned to something nicer...
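For what it's worth, the sum can be checked numerically against a simulation (my own sketch; it uses the convention implicit in the formula above, where the m-th win occurs with per-step probability 1 - m/N, so there are N-1 wins in total):

```python
import random

def expected_S_formula(N, c, y):
    """Evaluate the sum above: E[S] = y * sum_{m=1}^{N-1} prod_{j=1}^{m} c(1-j/N)/(1-jc/N)."""
    total, prod = 0.0, 1.0
    for j in range(1, N):
        prod *= c * (1 - j / N) / (1 - j * c / N)
        total += prod
    return y * total

def simulate_S(N, c, y, rng):
    """One run: the m-th win (m = 1..N-1) arrives after a geometric waiting
    time with per-step success probability 1 - m/N; each win at time t pays y * c**t."""
    t, total = 0, 0.0
    for m in range(1, N):
        while True:
            t += 1
            if rng.random() < 1 - m / N:
                total += y * c ** t
                break
    return total

rng = random.Random(1)
mc = sum(simulate_S(6, 0.9, 1.0, rng) for _ in range(50000)) / 50000
print(expected_S_formula(6, 0.9, 1.0), mc)  # the two values should agree closely
```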

7. Originally Posted by Laurent
Then, if the payoff were constant, the total win would be $\displaystyle Ny$ because eventually the player wins $\displaystyle N$ times before the game ends.

Taking the geometric decrease of the payoff into account, the total win is $\displaystyle S=(c^{T_1}+\cdots +c^{T_N})y$ where $\displaystyle T_1,\ldots,T_N$ are the winning times. Note that $\displaystyle T_{k+1}-T_k$ is a geometric random variable of parameter $\displaystyle 1-\frac{k}{N}$ (after k wins, the next win has probability $\displaystyle 1-\frac{k}{N}$ to happen), and these random variables are independent.

If $\displaystyle T$ is a geometric r.v. with parameter $\displaystyle p$, then one computes $\displaystyle E[c^T]=\frac{cp}{1-c(1-p)}$. Using this, we get (up to possible mistakes)

$\displaystyle E[S]=y \Big(\frac{c(1-\frac{1}{N})}{1-\frac{c}{N}}+\frac{c(1-\frac{1}{N})}{1-\frac{c}{N}}\frac{c(1-\frac{2}{N})}{1-\frac{2c}{N}}+\cdots$ $\displaystyle +\frac{c(1-\frac{1}{N})}{1-\frac{c}{N}}\cdots \frac{c(1-\frac{N-1}{N})}{1-\frac{(N-1)c}{N}}\Big)$.

This is not very nice, but I'm not sure this can be turned to something nicer...

Hi Laurent, thank you very much. Do you think I could try approaching this as a Markov chain to simplify things? I don't know how that would work, because the transition probability (1-k/N) is also stochastic.

Nick

8. Originally Posted by salohcin
Hi Laurent, thank you very much. Do you think I could try approaching this as a Markov chain to simplify things? I don't know how that would work, because the transition probability (1-k/N) is also stochastic.
It won't simplify the result, but yes, this is a Markov chain. The states are $\displaystyle 0,1,\ldots,N$, denoting the number of previous wins, so the starting state is 0. The only transitions are from state $\displaystyle k$ to itself (probability $\displaystyle \frac{k}{N}$) and to $\displaystyle k+1$ (probability $\displaystyle 1-\frac{k}{N}$).
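The chain won't simplify the algebra, but it does give a clean way to compute the value numerically by backward recursion over the states. Here is a sketch (my own code, not from the thread); the transition rule is pluggable, so you can use either 1 - k/N as in this post (first win certain) or 1 - (k+1)/N as in the original post:

```python
def expected_discounted_total(N, c, y, p=None):
    """Backward recursion on the chain: g[k] is the expected discounted future
    winnings given k previous wins, discounted to the current time.  From state k
    the chain moves to k+1 (a win, payoff y) with probability p(k), else stays
    at k; each period costs a discount factor c."""
    if p is None:
        p = lambda k: 1 - k / N        # transition rule of the post above
    g = [0.0] * (N + 1)                # g[N] = 0: the game is over
    for k in range(N - 1, -1, -1):
        pk = p(k)
        # g[k] = c * (pk * (y + g[k+1]) + (1 - pk) * g[k]); solving for g[k]:
        g[k] = c * pk * (y + g[k + 1]) / (1 - c * (1 - pk))
    return g[0]

# With the original post's rule (probability 1-(k+1)x after k wins) this
# reproduces the sum from post 6, e.g. N = 2, c = 0.5, y = 1 gives 1/3:
print(expected_discounted_total(2, 0.5, 1.0, p=lambda k: 1 - (k + 1) / 2))
```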