1. [SOLVED] Unbiased estimator

Hi,

Just a quick question...

Suppose we have $\displaystyle (X_n)$ iid variables with joint pdf $\displaystyle h(x,\theta)$ (the product of the individual pdfs), where $\displaystyle \theta$ is an unknown parameter.

We can write it in exponential form, and it is assumed to be regular.

Then, we've been told that an estimator for $\displaystyle \theta$ would be $\displaystyle \hat{\theta}$ such that $\displaystyle \frac{\partial \log(h(x,\hat{\theta}))}{\partial \theta}=0$

My question is: is $\displaystyle \hat{\theta}$ an unbiased estimator ?

Another question... Why do the fact that $\displaystyle \mathbb{E}\left(\frac{\partial \log h(X,\theta)}{\partial \theta}\right)=0$ and the fact that $\displaystyle \log(h(x,\theta))$ is twice differentiable with respect to $\displaystyle \theta$, for all $x$, mean that the likelihood is considered regular ?

Are my notations correct ?

2. Oh, I saw in a Wikipedia article that we have

$\displaystyle \mathcal{I}(\theta)=\mathbb{E}\left[\left(\frac{\partial \log h(X,\theta)}{\partial \theta}\right)\right]=-\mathbb{E}\left[\frac{\partial^2 \log h(X,\theta)}{\partial\theta^2}\right]$

The fact that $\displaystyle \mathbb{E}\left(\frac{\partial \log h(X,\theta)}{\partial \theta}\right)=0$ would explain the first equality, but where does the second equality come from ?

Actually, we've been told that if the likelihood is regular, then we can use the latter formula for $\displaystyle \mathcal{I}(\theta)$

I'm pretty lost in all these things... I can apply the formulae, but I have trouble understanding where they come from...

3. I'm not exactly sure what you're asking.
But it looks like you're missing a square on the Fisher Information...
Fisher information - Wikipedia, the free encyclopedia
The square is inside the expectation.

4. It's....

Originally Posted by Moo
Oh, I saw in a Wikipedia article that we have

$\displaystyle \mathcal{I}(\theta)=\mathbb{E}\left[\left(\frac{\partial \log h(X,\theta)}{\partial \theta}\right)^2\right]=-\mathbb{E}\left[\frac{\partial^2 \log h(X,\theta)}{\partial\theta^2}\right]$

The fact that $\displaystyle \mathbb{E}\left(\frac{\partial \log h(X,\theta)}{\partial \theta}\right)=0$ would explain the first equality, but where does the second equality come from ?

Actually, we've been told that if the likelihood is regular, then we can use the latter formula for $\displaystyle \mathcal{I}(\theta)$

I'm pretty lost in all these things... I can apply the formulae, but I have trouble understanding where they come from...

5. Hmm, I remember having derived this before. So let me try to make a go at it.
First, if I remember correctly, you need the condition (which holds for any support of $h$) that
$\displaystyle \int\frac{\partial^2}{\partial\theta^2} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} \int h(x, \theta)\mu(dx)$

Then we note that
$\displaystyle \frac{\partial^2}{\partial\theta^2}\log h(x, \theta) = \left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)/h(x, \theta)\right)-\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2$

Take expectations on both sides, and we are almost there. Now apply the condition and we have
$\displaystyle \mathbb{E}\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)/h(x, \theta)\right)=\int_{\Gamma}\frac{\partial^2}{\partial\theta^2} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} \int_{\Gamma} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} (1)=0$
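The same interchange argument, applied at first order (assuming differentiation under the integral is also valid for the first derivative), gives the first identity asked about earlier in the thread:

$\displaystyle \mathbb{E}\left(\frac{\partial}{\partial\theta}\log h(X, \theta)\right)=\int_{\Gamma}\frac{\frac{\partial}{\partial\theta} h(x, \theta)}{h(x, \theta)}\, h(x, \theta)\mu(dx)=\frac{\partial}{\partial\theta}\int_{\Gamma} h(x, \theta)\mu(dx)=\frac{\partial}{\partial\theta}(1)=0$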

6. Originally Posted by matheagle
I'm not exactly sure what you're asking.
But it looks like you're missing a square on the Fisher Information...
Fisher information - Wikipedia, the free encyclopedia
The square is inside the expectation.
That was just a typo ^^

Originally Posted by cl85
Hmm, I remember having derived this before. So let me try to make a go at it.
First, if I remember correctly, you need the condition (which holds for any support of $h$) that
$\displaystyle \int\frac{\partial^2}{\partial\theta^2} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} \int h(x, \theta)\mu(dx)$

Then we note that
$\displaystyle \frac{\partial^2}{\partial\theta^2}\log h(x, \theta) = \left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)/h(x, \theta)\right)-\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2$
Okay, up to here, I understand.

Take expectations on both sides, and we are almost there. Now apply the condition and we have
$\displaystyle \mathbb{E}\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)/h(x, \theta)\right)=\int_{\Gamma}\frac{\partial^2}{\partial\theta^2} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} \int_{\Gamma} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} (1)=0$
But I don't understand this one...

And what about $\displaystyle \mathbb{E}\left\{\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2\right\}$ ?

Thanks anyway

7. This is correct...

$\displaystyle \frac{\partial^2}{\partial\theta^2}\log h(x, \theta) = \biggl(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\biggr)/h(x, \theta)-\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2$

When taking expectations, we are allowed to change the order of the partial derivative and the integral.
That makes the second term zero and it's over.

I'm still not sure what you are asking

$\displaystyle {d\over dx} \ln g(x) = {g'(x)\over g(x)}$

So $\displaystyle \biggl({d\over dx} \ln g(x)\biggr)^2 = \biggl({g'(x)\over g(x)}\biggr)^2$

8. Originally Posted by matheagle
This is correct...

$\displaystyle \frac{\partial^2}{\partial\theta^2}\log h(x, \theta) = \biggl(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\biggr)/h(x, \theta)-\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2$

When taking expectations, we are allowed to change the order of the partial derivative and the integral.
That makes the second term zero and it's over.
What are you talking about ?
When are you reversing the derivative & integral sign ? Why would that "make the second term zero" ???

I'm still not sure what you are asking

$\displaystyle {d\over dx} \ln g(x) = {g'(x)\over g(x)}$

So $\displaystyle \biggl({d\over dx} \ln g(x)\biggr)^2 = \biggl({g'(x)\over g(x)}\biggr)^2$
What is it supposed to show ?
What don't you understand in what I'm asking actually ?

9. Good night.

10. Sorry, I was in a rush earlier. Let me put the brackets properly.

$\displaystyle \frac{\partial^2}{\partial\theta^2}\log h(x, \theta) = \frac{\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\right)}{h(x, \theta)}-\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2$

Rearrange the terms:
$\displaystyle \left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2=\frac{\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\right)}{h(x, \theta)}-\frac{\partial^2}{\partial\theta^2}\log h(x, \theta)$
Take expectation on both sides:
$\displaystyle \mathbb{E}\left\{\left(\frac{\partial}{\partial\theta}\log h(x, \theta)\right)^2\right\}=\mathbb{E}\left\{\frac{\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\right)}{h(x, \theta)}\right\}-\mathbb{E}\left\{\frac{\partial^2}{\partial\theta^2}\log h(x, \theta)\right\}$

And we almost have the desired equality, except that we need the first term on the RHS to be zero.
$\displaystyle \mathbb{E}\left\{\frac{\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\right)}{h(x, \theta)}\right\}=\int_{\Gamma}\frac{\left(\frac{\partial^2}{\partial\theta^2} h(x, \theta)\right)}{h(x, \theta)} h(x, \theta)\mu(dx)$
$\displaystyle =\frac{\partial^2}{\partial\theta^2}\int_{\Gamma} h(x, \theta)\mu(dx)=\frac{\partial^2}{\partial\theta^2} (1)=0$
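As a quick numerical sanity check (my own sketch, not part of the derivation): both expressions for $\displaystyle \mathcal{I}(\theta)$ can be verified by simulation. The example below assumes an Exponential($\theta$) model, where $\displaystyle \log h(x,\theta)=\log\theta-\theta x$, so the score is $1/\theta-x$ and the second derivative is the constant $-1/\theta^2$:

```python
# Monte Carlo check of the two Fisher-information formulas for an
# Exponential(theta) model: log h(x, theta) = log(theta) - theta*x.
# Score: 1/theta - x; second derivative: -1/theta^2 (constant in x).
# Theory: E[score] = 0 and E[score^2] = -E[second derivative] = 1/theta^2.
import random

random.seed(0)
theta = 2.0
n = 200_000
xs = [random.expovariate(theta) for _ in range(n)]

score_mean = sum(1 / theta - x for x in xs) / n
score_sq_mean = sum((1 / theta - x) ** 2 for x in xs) / n
neg_second_deriv = 1 / theta**2  # -E[d^2/dtheta^2 log h], exact here

print(score_mean)        # close to 0
print(score_sq_mean)     # close to 1/theta^2 = 0.25
print(neg_second_deriv)  # exactly 0.25
```

The two estimates of $\mathcal{I}(\theta)$ agree up to Monte Carlo error, and the empirical mean of the score is close to zero, as the regularity conditions promise.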

11. Thank you

Oh, there's still one question (the other one is not important): is $\displaystyle \hat\theta$ unbiased ?

12. Isn't $\displaystyle \hat\theta$ the maximum likelihood estimator (MLE)? MLEs are not always unbiased, but they are asymptotically unbiased.
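A standard illustration of this point (my example, not from the thread): for a normal sample, the MLE of $\sigma^2$ divides by $n$ rather than $n-1$, so $\displaystyle \mathbb{E}[\hat\sigma^2]=\tfrac{n-1}{n}\sigma^2$. The bias $-\sigma^2/n$ is nonzero for every finite $n$ but vanishes as $n\to\infty$:

```python
# The normal-variance MLE is sigma_hat^2 = (1/n) * sum (x_i - x_bar)^2,
# with exact expectation (n-1)/n * sigma^2: biased, yet asymptotically unbiased.

def mle_var_expectation(n: int, sigma2: float) -> float:
    """Exact expectation of the normal-variance MLE for sample size n."""
    return (n - 1) / n * sigma2

sigma2 = 4.0
for n in (5, 50, 500):
    e = mle_var_expectation(n, sigma2)
    print(n, e, e - sigma2)  # bias = -sigma2 / n, shrinking toward 0
```

For $n=5$ the expectation is $3.2$ against a true $\sigma^2=4$, while for $n=500$ it is already $3.992$, showing the bias disappearing at rate $1/n$.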