# Gambler's Ruin

• March 1st 2009, 12:49 PM
AfterShock
Greetings,

I am attempting to work out how the Gambler's Ruin equation is derived -- I tried looking through Feller's book, but to no avail. I believe it has to do with Markov Chains and would be indebted to anyone who could help me derive the equation.

Many thanks.

The equation is attached.

$P(i,N)=\ldots$

EDIT: Nevermind-- apparently there is an error with attachments.

The equation is as follows. We have P(i,N) which is the probability that you'll reach $N$ before going broke:

$P(i,N)=\begin{cases}\dfrac{1-\left(\frac{q}{p}\right)^i}{1-\left(\frac{q}{p}\right)^N} & \text{if}\ p \neq \frac{1}{2}\\[2ex] \dfrac{i}{N} & \text{if}\ p = \frac{1}{2}\end{cases}$

where $p$ is your chance of winning a bet, $q=1-p$ is your chance of losing one, $i$ is your current fortune in betting units, and $N$ is your goal, also in betting units, with $N>i$.

So, if you have \$10,000 and your goal is \$25,000, assuming you always bet \$5,000, then $i = 2$, $N = 5$, and $p,q$ are the respective chances of winning/losing.
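The formula is easy to evaluate directly. Here is a short Python sketch (not from the thread; the win probability 0.47 is just an assumed value for illustration):

```python
# Sketch (not from the thread): evaluate the Gambler's Ruin formula
# P(i, N) for the $10,000 / $25,000 example above.

def ruin_probability(i, N, p):
    """Probability of reaching N betting units before going broke,
    starting from i units, with win probability p per bet."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p          # the ratio q/p
    return (1 - r**i) / (1 - r**N)

print(ruin_probability(2, 5, 0.5))   # fair game: 2/5 = 0.4
print(ruin_probability(2, 5, 0.47))  # slightly unfavourable game: lower
```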

I think that should be sufficient.
• March 1st 2009, 01:26 PM
pedrosorio
The way I understand it, your problem is something like: "determine the probability that the gambler will reach $N$ before she/he goes bankrupt".

This problem can then be represented by a Markov Chain with $N+1$ states (from 0 betting units to $N$ betting units). Once you reach state 0 or state $N$, you remain there with probability 1. From every other state you move up with probability $p$, move down with probability $q$, and have probability 0 of staying put or jumping to any other state.

This translates to a transition matrix with 0's on the main diagonal, except for entries $(0,0)$ and $(N,N)$, which are 1's. The diagonal above the main one is composed of $p$'s (except entry $(0,1)$, which is 0), and the diagonal below is composed of $q$'s (except entry $(N,N-1)$, which is 0).

Now all we have to do is multiply an initial-state row vector by this matrix. The vector runs from position 0 to position $N$, with all components equal to 0 except the $i$th, which is 1. If we multiply the resulting vector by the matrix an infinite number of times, the $N$th component of the limiting vector will be the probability that our gambler wins.
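The matrix-iteration idea above can be sketched in Python (not from the thread; plain lists keep it self-contained, and a large finite number of multiplications stands in for the infinite limit):

```python
# Sketch of the Markov-chain approach described above: build the
# (N+1) x (N+1) transition matrix and apply it repeatedly to the
# initial-state vector. The step count is an assumed cutoff for
# "infinitely many" multiplications.

def win_probability_by_iteration(i, N, p, steps=10_000):
    q = 1 - p
    # T[a][b] = probability of moving from fortune a to fortune b
    T = [[0.0] * (N + 1) for _ in range(N + 1)]
    T[0][0] = 1.0          # broke: absorbing state
    T[N][N] = 1.0          # goal reached: absorbing state
    for a in range(1, N):
        T[a][a + 1] = p    # win one betting unit
        T[a][a - 1] = q    # lose one betting unit
    v = [0.0] * (N + 1)
    v[i] = 1.0             # start with fortune i
    for _ in range(steps):
        v = [sum(v[a] * T[a][b] for a in range(N + 1))
             for b in range(N + 1)]
    return v[N]            # mass absorbed at the winning state N

print(win_probability_by_iteration(2, 5, 0.5))  # converges to 2/5
```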

That is my guess. Now you tell me how to find that limit (and prove that it exists) and your question will be answered.
• March 2nd 2009, 02:11 AM
Laurent
Read the previous post for the Markov chain description. The suggested trick with the infinite product of matrices is correct, but it would be quite tedious to carry out. The usual way is instead to derive recurrence equations.

Notice $p(0,N)=0$ and $p(N,N)=1$. These are "boundary conditions". Moreover, for $1\leq i\leq N-1$, we have:

$p(i,N)= p\ p(i+1,N)+ q\ p(i-1,N)$.

You should:
a) justify why this is true,
b) justify why the boundary conditions and this equation uniquely define $p(i,N)$ for all $i$ (i.e. how you could deduce $p(i,N)$ from them)
c) check that the answer you are given satisfies both the boundary conditions and the recurrence equation above. Since there is only one solution, this has to be it.
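As a sanity check for c), one can verify numerically that the given answer satisfies the boundary conditions and the recurrence. A Python sketch (not from the thread; $N=5$ and $p=0.47$ are arbitrary test values):

```python
# Check step c) numerically: the closed form satisfies p(0,N) = 0,
# p(N,N) = 1, and p(i,N) = p*p(i+1,N) + q*p(i-1,N) for 1 <= i <= N-1.

def p_closed(i, N, p):
    if p == 0.5:
        return i / N
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)

N, p = 5, 0.47          # assumed test values
q = 1 - p
assert p_closed(0, N, p) == 0.0 and p_closed(N, N, p) == 1.0
for i in range(1, N):
    lhs = p_closed(i, N, p)
    rhs = p * p_closed(i + 1, N, p) + q * p_closed(i - 1, N, p)
    assert abs(lhs - rhs) < 1e-12
print("closed form satisfies boundary conditions and recurrence")
```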

Or you can replace b) and c) by:
b') use the boundary conditions and the equation to compute $p(i,N)$ directly (without using the given answer).
But this is more delicate.
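For what it's worth, here is a sketch of b') using the standard characteristic-equation method for linear recurrences (not from the thread):

```latex
% Writing p(i) for p(i,N), rearrange the recurrence
%   p(i) = p * p(i+1) + q * p(i-1)   into   p*p(i+1) - p(i) + q*p(i-1) = 0.
% Characteristic equation: p x^2 - x + q = 0, with roots x = 1 and x = q/p
% (check x = 1: p - 1 + q = 0 since p + q = 1; the product of roots is q/p).
%
% Case p != 1/2 (distinct roots): the general solution is
\[
  p(i) = A + B\left(\frac{q}{p}\right)^{i}.
\]
% Boundary conditions: p(0) = 0 gives A = -B; p(N) = 1 then gives
% B((q/p)^N - 1) = 1, so B = 1/((q/p)^N - 1), and therefore
\[
  p(i,N) = \frac{1-\left(\frac{q}{p}\right)^{i}}{1-\left(\frac{q}{p}\right)^{N}}.
\]
% Case p = 1/2 (double root x = 1): the general solution is p(i) = A + Bi,
% and the boundary conditions give
\[
  p(i,N) = \frac{i}{N}.
\]
```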