# Math Help - Minimum Variance and Sufficiency

1. ## Minimum Variance and Sufficiency

Let Y1, Y2, ..., Yn be a random sample of size n from the pdf
f_Y(y;theta) = (1/((r-1)!*theta^r))*y^(r-1)*e^(-y/theta), y > 0, where r is a known positive integer and theta > 0 is unknown.

a) Show that theta hat = (1/r)*Ybar is an unbiased estimator for theta.

b) Show that theta hat = (1/r)*Ybar is a minimum variance estimator for theta.

c) Find a sufficient statistic for theta.

Thanks in advance for any help.

2. Originally Posted by eigenvector11
Let Y1, Y2, ..., Yn be a random sample of size n from the pdf
f_Y(y;theta) = (1/((r-1)!*theta^r))*y^(r-1)*e^(-y/theta), y > 0, where r is a known positive integer and theta > 0 is unknown.

a) Show that theta hat = (1/r)*Ybar is an unbiased estimator for theta.
You must show that $E(\hat{\theta})=\theta$. Note that this is the pdf of the Gamma distribution with shape $r$ and scale $\theta$, so each $Y_i$ has mean $r\theta$; in general, $E(\bar{y})=E(Y_1)$.

Thus, $E(\hat{\theta})=\frac{1}{r}E(\bar{y})=\frac{1}{r}(r\theta)=\theta$.

Therefore, $\frac{1}{r}\bar{y}$ is unbiased.
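As a quick sanity check (not part of the original problem), here is a short Monte Carlo simulation in Python with made-up values $r=3$, $\theta=2$; averaging $\hat{\theta}$ over many simulated samples should land near $\theta$:

```python
import random

def mean_theta_hat(r, theta, n, trials, seed=1):
    """Average the estimator theta_hat = (1/r) * Ybar over many simulated samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # random.gammavariate takes (shape, scale), matching the pdf above.
        ybar = sum(rng.gammavariate(r, theta) for _ in range(n)) / n
        total += ybar / r
    return total / trials

# With r = 3, theta = 2.0, the long-run average should settle near theta.
print(mean_theta_hat(r=3, theta=2.0, n=50, trials=5000))
```

The printed average should be within a few hundredths of 2.0, consistent with unbiasedness.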

b) Show that theta hat = (1/r)*Ybar is a minimum variance estimator for theta.
Since we know that $\frac{1}{r}\bar{y}$ is unbiased, all we need to do is show that the MLE is $\frac{1}{r}\bar{y}$. Then we can claim that it's a minimum variance estimator.

Since $f_y\left(y;\theta\right)=\frac{1}{\Gamma(r)\theta^r}y^{r-1}e^{-\frac{y}{\theta}}$, we see that $L(\theta)=f_y\left(y_1,y_2,\dots,y_n;\theta\right)=\prod_{i=1}^{n}\frac{1}{\Gamma(r)\theta^r}y_i^{r-1}e^{-\frac{y_i}{\theta}}=\left(\frac{1}{\Gamma(r)\theta^{r}}\right)^n \left(\prod_{i=1}^{n}y_i\right)^{r-1}e^{-\frac{\sum\limits_{i=1}^ny_i}{\theta}}$

In this case, it would be best to maximize $\ln\left(L(\theta)\right)$.

$\ln\left(L(\theta)\right)=-n\ln\left(\Gamma(r)\right)-nr\ln\left(\theta\right)+(r-1)\sum_{i=1}^n\ln(y_i)-\frac{1}{\theta}\sum_{i=1}^{n}y_i$

Therefore, $\frac{\partial}{\partial\theta}\left[\ln\left(L(\theta)\right)\right]=-\frac{nr}{\theta}+\frac{1}{\theta^2}\sum_{i=1}^ny_i$

Now, to maximize, set $\frac{\partial}{\partial\theta}\left[\ln\left(L(\theta)\right)\right]=0$

Therefore, $-\frac{nr}{\theta}+\frac{1}{\theta^2}\sum_{i=1}^ny_i=0\implies \frac{nr}{\theta}=\frac{1}{\theta^2}\sum_{i=1}^ny_i\implies\theta=\frac{1}{r}\left(\frac{1}{n}\sum_{i=1}^ny_i\right)=\color{red}\boxed{\frac{1}{r}\bar{y}}$

Since the MLE is equivalent to the unbiased estimator, we see that $\frac{1}{r}\bar{y}$ is the minimum variance estimator.
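The boxed result can also be checked numerically (a sketch with made-up values; a simple grid search stands in for a proper optimizer):

```python
import math
import random

def log_likelihood(theta, ys, r):
    """Gamma log-likelihood in theta, dropping terms that do not involve theta."""
    return -len(ys) * r * math.log(theta) - sum(ys) / theta

rng = random.Random(0)
r, true_theta = 3, 2.0
ys = [rng.gammavariate(r, true_theta) for _ in range(200)]

closed_form = sum(ys) / len(ys) / r          # the boxed MLE, (1/r) * ybar

# Brute-force maximization over a grid of candidate theta values.
grid = [0.5 + 0.001 * k for k in range(4000)]
numeric = max(grid, key=lambda t: log_likelihood(t, ys, r))

print(closed_form, numeric)
```

The grid maximizer agrees with $\frac{1}{r}\bar{y}$ up to the grid spacing.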

c) Find a sufficient statistic for theta.

Thanks in advance for any help.
We can use some of the work we did for b) here.

We said that $L(\theta)=\left(\frac{1}{\Gamma(r)\theta^{r}}\right)^n \left(\prod_{i=1}^{n}y_i\right)^{r-1}e^{-\frac{\sum\limits_{i=1}^ny_i}{\theta}}$

Now applying the factorization theorem, we see that our choice for $\phi\left(\mathcal{U}(y_1,y_2,\dots,y_n);\theta\right)=\theta^{-rn}e^{-\frac{\sum\limits_{i=1}^ny_i}{\theta}}$. Therefore, our $\mathcal{U}(y_1,y_2,\dots,y_n)=\sum_{i=1}^ny_i$. Thus, we can claim that $\sum_{i=1}^ny_i$ is sufficient for $\theta$.

Does this make sense?

If you need some additional clarifications, feel free to post back.

3. Sufficient and unbiased implies UMVUE.
Yea, but that's only minimum variance among unbiased estimators.

4. Thanks a lot for the help. Matheagle, I'm not sure I follow what you are saying. Do I need to show something else for this question, or is what Chris L T520 explained ok?

5. I don't recall whether, if a statistic is the MLE of $\theta$ and it's unbiased for $\theta$, it has the smallest variance AMONG ALL estimators of $\theta$. Is that a property of exponential families?

However, if the estimator is unbiased and sufficient for $\theta$, then it has the smallest variance, and of course MSE, BUT ONLY among unbiased estimators. There could be a biased estimator with a smaller variance.

6. I agree with (a) and (c), but I don't know how being unbiased and being an MLE makes this estimator min variance for all estimators. If the question was to show that this estimator was UMVUE, then since it's unbiased and suff, then it's UMVUE.

7. An MLE is asymptotically efficient. That means it achieves the Cramér-Rao lower bound as the sample size tends to infinity. So, if the estimator achieves this lower bound, it must be a minimum variance estimator, I think?? I guess the question is, does the sample size tend to infinity here? The question states a random sample y1, y2, ..., yn of size n. Does that mean the sample size tends to infinity, or can we not assume that here?
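For reference, the Cramér-Rao bound can be worked out exactly for this family at any finite $n$ (a sketch, using the Gamma mean $r\theta$ and variance $r\theta^2$):

```latex
% From the log-density \ln f = -\ln\Gamma(r) - r\ln\theta + (r-1)\ln y - y/\theta:
\frac{\partial^2}{\partial\theta^2}\ln f = \frac{r}{\theta^2} - \frac{2y}{\theta^3},
\qquad
I(\theta) = -E\left[\frac{r}{\theta^2} - \frac{2Y}{\theta^3}\right]
          = \frac{2r\theta}{\theta^3} - \frac{r}{\theta^2}
          = \frac{r}{\theta^2}.
```

So the bound is $\frac{1}{nI(\theta)}=\frac{\theta^2}{nr}$, while $\operatorname{Var}\left(\frac{1}{r}\bar{y}\right)=\frac{1}{r^2}\cdot\frac{r\theta^2}{n}=\frac{\theta^2}{nr}$. The estimator attains the bound at every finite $n$, not just asymptotically, which makes it minimum variance among unbiased estimators; as the other posts note, this says nothing about biased competitors.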

8. I was wondering if you were asked to show it was min variance among just unbiased estimators. I don't recall this property about MLEs. It's been awhile and I was looking over Bickel and Doksun just now.

9. My problem with part b is that it doesn't say the estimators have to be unbiased.
We can pick a lousy estimator that is biased and yet has a smaller variance. That's why I think they left out unbiased here.
For example, the constant 1 is a lousy estimator of theta, but its variance is zero.

10. Yes, matheagle, you were correct. I talked to my prof today about this question. He said that even though the MLE achieves the Cramér-Rao lower bound, that doesn't rule out another estimator having a smaller variance. He said that I need to show that the variance of this estimator is smaller than that of all other estimators. I'm not exactly sure how to do this. Any ideas?

11. Originally Posted by eigenvector11
Yes, matheagle, you were correct. I talked to my prof today about this question. He said that even though the MLE achieves the Cramér-Rao lower bound, that doesn't rule out another estimator having a smaller variance. He said that I need to show that the variance of this estimator is smaller than that of all other estimators. I'm not exactly sure how to do this. Any ideas?
It still does not make sense to me.
Chris did an excellent job all around.
But I don't know how any estimator has a smaller variance than 0.
(That's rhetorical.)
I still think your professor needs to include unbiased in part b.