
- Feb 23rd 2009, 09:18 AM, Kitano: Maximum likelihood
How do I find the mle for r1 and r2 if:

p(x1,x2) = 1 - r1^x1 * r2^x2 ,

r1 <1, r2 < 1

given that x1 and x2 can each be either 1 or 0 (i.e. the pair can be (0,0), (0,1), (1,1) or (1,0))?

Any help will be appreciated.
- Feb 23rd 2009, 10:30 PM, matheagle
I don't understand this distribution.

Are you saying that...

$\displaystyle P(X_1=x_1,X_2=x_2)=1-r_1^{x_1}r_2^{x_2}$

then...

$\displaystyle P(X_1=0,X_2=0)=0$

$\displaystyle P(X_1=1,X_2=0)=1-r_1$

$\displaystyle P(X_1=0,X_2=1)=1-r_2$

and

$\displaystyle P(X_1=1,X_2=1)=1-r_1r_2$

which doesn't sum to one.

I'm lost.
- Feb 24th 2009, 12:12 PM, Kitano
Here is the full explanation:

There are two risk factors that may cause an accident to happen. A risk factor $\displaystyle x_i$ is either present ($\displaystyle x_i = 1$) or absent ($\displaystyle x_i = 0$).

We have a function that specifies the probability p(x1, x2) of an accident, but it does not determine the outcome in any concrete case.

We assume that each risk factor $\displaystyle x_i$, when present, multiplies the probability of a good outcome (no accident) by some factor $\displaystyle r_i < 1$. In other words, the unknown function

p is of the form $\displaystyle p(x_1, x_2) = 1 - r_1^{x_1} r_2^{x_2}$, so we have to learn parameters $\displaystyle r_1, r_2$.
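On this reading, $\displaystyle p(x_1, x_2)$ is the conditional probability of an accident given the risk profile, not a joint distribution over $\displaystyle (x_1, x_2)$, so the four values need not sum to one. If the data are observations $\displaystyle (x_{1i}, x_{2i}, y_i)$, with $\displaystyle y_i = 1$ when an accident occurred, the log-likelihood would be

$\displaystyle \ell(r_1, r_2) = \sum_i \left[ y_i \log\!\left(1 - r_1^{x_{1i}} r_2^{x_{2i}}\right) + (1 - y_i)\left( x_{1i} \log r_1 + x_{2i} \log r_2 \right) \right]$

to be maximized over $\displaystyle 0 < r_1, r_2 < 1$; the $(1,1)$ observations couple the two parameters, so in general this would be done numerically.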

How would you compute the ML hypothesis from the data, under the given assumption on p?
- Feb 24th 2009, 03:04 PM, matheagle
Once again

$\displaystyle P(X_1=0,X_2=0)=0$

$\displaystyle P(X_1=1,X_2=0)=1-r_1$

$\displaystyle P(X_1=0,X_2=1)=1-r_2$

and

$\displaystyle P(X_1=1,X_2=1)=1-r_1r_2$

doesn't sum to one.

Something is off here.

I can differentiate and estimate the two r's, but that seems to be meaningless here.
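As a concrete sketch of what the computation could look like, assume each observation is a triple (x1, x2, accident), where p(x1, x2) is the accident probability for that risk profile (so the observed accident indicator, not (x1, x2) itself, is the Bernoulli outcome). The data below are made up purely for illustration; a simple grid search stands in for a proper optimizer:

```python
import math

# Hypothetical illustrative data (not from the thread): each row is
# (x1, x2, accident), where accident = 1 means an accident occurred
# under that risk profile.
data = [
    (1, 0, 1), (1, 0, 0), (1, 0, 0), (1, 0, 0),   # factor 1 only
    (0, 1, 1), (0, 1, 1), (0, 1, 0), (0, 1, 0),   # factor 2 only
    (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 0),   # both factors
]

def log_likelihood(r1, r2, data):
    """Log-likelihood of (r1, r2): each observation is Bernoulli with
    accident probability p(x1, x2) = 1 - r1**x1 * r2**x2."""
    ll = 0.0
    for x1, x2, accident in data:
        q = (r1 ** x1) * (r2 ** x2)       # P(no accident | x1, x2)
        ll += math.log(1.0 - q) if accident else math.log(q)
    return ll

# Crude but dependency-free: grid search over the open unit square.
grid = [i / 200 for i in range(1, 200)]
r1_hat, r2_hat = max(
    ((r1, r2) for r1 in grid for r2 in grid),
    key=lambda rs: log_likelihood(rs[0], rs[1], data),
)
print("r1_hat =", r1_hat, "r2_hat =", r2_hat)
```

With real data one would replace the grid search with a constrained optimizer; the point is only that, under this conditional-probability reading, the likelihood is well defined and the two r's are estimable even though the four p-values do not sum to one.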