
Minimum Variance and Sufficiency

  1. #1 eigenvector11 (Junior Member)

    Minimum Variance and Sufficiency

    Let Y1, Y2, ..., Yn be a random sample of size n from the pdf
    f_Y(y;theta) = (1/((r-1)!*theta^r))*y^(r-1)*e^(-y/theta), y>0, where r is a known positive integer.

    a) Show that theta hat = (1/r)*Ybar is an unbiased estimator for theta.

    b) Show that theta hat = (1/r)*Ybar is a minimum variance estimator for theta.

    c) Find a sufficient statistic for theta.

    Thanks in advance for any help.

  2. #2 Chris L T521 (Rhymes with Orange)
    Quote Originally Posted by eigenvector11:
    Let Y1, Y2, ..., Yn be a random sample of size n from the pdf
    f_Y(y;theta) = (1/((r-1)!*theta^r))*y^(r-1)*e^(-y/theta), y>0, where r is a known positive integer.

    a) Show that theta hat = (1/r)*Ybar is an unbiased estimator for theta.
    You must show that E(\hat{\theta})=\theta. Note that this is the pdf of the Gamma distribution with shape r and scale \theta, so E(Y)=r\theta. And note that in general, E(\bar{y})=\mu.

    Thus, E(\hat{\theta})=\frac{1}{r}E(\bar{y})=\frac{1}{r}(r\theta)=\theta.

    Therefore, \frac{1}{r}\bar{y} is unbiased.
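
    If it helps to see this numerically, here is a quick Monte Carlo sketch (not part of the thread; the values of r, theta, n, and the number of replications are just made up for illustration):

        # Monte Carlo check that (1/r)*Ybar is unbiased for theta.
        # r, theta, n, and reps are assumed values chosen for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        r, theta, n, reps = 3, 2.0, 50, 20000

        # Each row is one sample of size n from Gamma(shape=r, scale=theta).
        samples = rng.gamma(shape=r, scale=theta, size=(reps, n))
        theta_hat = samples.mean(axis=1) / r  # (1/r)*Ybar for each replication

        print(theta_hat.mean())  # should be close to theta = 2.0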

    b) Show that theta hat = (1/r)*Ybar is a minimum variance estimator for theta.
    Since we know that \frac{1}{r}\bar{y} is unbiased, all we need to do is show that the MLE is \frac{1}{r}\bar{y}. Then we can claim that it is a minimum variance estimator.

    Since f_Y\left(y;\theta\right)=\frac{1}{\Gamma(r)\theta^r}y^{r-1}e^{-\frac{y}{\theta}}, we see that L(\theta)=f_Y\left(y_1,y_2,\dots,y_n;\theta\right)=\prod_{i=1}^{n}\frac{1}{\Gamma(r)\theta^r}y_i^{r-1}e^{-\frac{y_i}{\theta}}=\left(\frac{1}{\Gamma(r)\theta^{r}}\right)^n\left(\prod_{i=1}^{n}y_i\right)^{r-1}e^{-\frac{\sum\limits_{i=1}^n y_i}{\theta}}

    In this case, it would be best to maximize \ln\left(L(\theta)\right).

    \ln\left(L(\theta)\right)=-n\ln\left(\Gamma(r)\right)-nr\ln\left(\theta\right)+(r-1)\sum_{i=1}^n\ln(y_i)-\frac{1}{\theta}\sum_{i=1}^{n}y_i

    Therefore, \frac{\partial}{\partial\theta}\left[\ln\left(L(\theta)\right)\right]=-\frac{nr}{\theta}+\frac{1}{\theta^2}\sum_{i=1}^n y_i

    Now, to maximize, set \frac{\partial}{\partial\theta}\left[\ln\left(L(\theta)\right)\right]=0

    Therefore, -\frac{nr}{\theta}+\frac{1}{\theta^2}\sum_{i=1}^n y_i=0\implies \frac{nr}{\theta}=\frac{1}{\theta^2}\sum_{i=1}^n y_i\implies\theta=\frac{1}{r}\left(\frac{1}{n}\sum_{i=1}^n y_i\right)=\color{red}\boxed{\frac{1}{r}\bar{y}}

    Since the MLE is equivalent to the unbiased estimator, we see that \frac{1}{r}\bar{y} is the minimum variance estimator.
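
    As a numerical sanity check on the calculus (again just a sketch, with assumed values for r, theta, and n), one can maximize the log-likelihood numerically and compare the maximizer to (1/r)*ybar:

        # Numerically maximize the gamma log-likelihood in theta (r known)
        # and compare against the closed-form MLE (1/r)*ybar.
        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import gamma

        rng = np.random.default_rng(1)
        r, theta, n = 3, 2.0, 500  # assumed values
        y = rng.gamma(shape=r, scale=theta, size=n)

        def neg_log_lik(t):
            # scipy parameterizes the gamma pdf with a = shape, scale = theta
            return -gamma.logpdf(y, a=r, scale=t).sum()

        res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
        print(res.x, y.mean() / r)  # the two values should agree closely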

    c) Find a sufficient statistic for theta.

    Thanks in advance for any help.
    We can use some of the work we did for b) here.

    We said that L(\theta)=\left(\frac{1}{\Gamma(r)\theta^{r}}\right)^n\left(\prod_{i=1}^{n}y_i\right)^{r-1}e^{-\frac{\sum\limits_{i=1}^n y_i}{\theta}}

    Now applying the factorization theorem, we see that our choice for \phi\left(\mathcal{U}(y_1,y_2,\dots,y_n);\theta\right)=\theta^{-rn}e^{-\frac{\sum\limits_{i=1}^n y_i}{\theta}}. Therefore, our \mathcal{U}(y_1,y_2,\dots,y_n)=\sum_{i=1}^n y_i. Thus, we can claim that \sum_{i=1}^n y_i is sufficient for \theta.
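
    Written out, the factorization splits the likelihood into a factor that depends on the data only through \sum y_i and a factor free of \theta:

        L(\theta)
        = \underbrace{\theta^{-rn}\, e^{-\sum_{i=1}^{n} y_i/\theta}}_{\phi\left(\sum_{i=1}^{n} y_i;\,\theta\right)}
        \cdot
        \underbrace{\Gamma(r)^{-n}\left(\prod_{i=1}^{n} y_i\right)^{r-1}}_{h(y_1,\dots,y_n)}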

    Does this make sense?

    If you need some additional clarifications, feel free to post back.

  3. #3 matheagle (MHF Contributor)
    Sufficient and unbiased implies UMVUE (by Lehmann-Scheffé, since an unbiased estimator that is a function of a complete sufficient statistic is UMVUE). Yea, but that's only minimum variance among unbiased estimators.

  4. #4 eigenvector11 (Junior Member)
    Thanks a lot for the help. matheagle, I'm not sure I follow what you are saying. Do I need to show something else for this question, or is what Chris L T521 explained OK?

  5. #5 matheagle (MHF Contributor)
    I don't recall any result saying that if a statistic is the MLE of \theta and it's unbiased for \theta, then it has the smallest variance AMONG ALL estimators of \theta. Is that a property of exponential families?

    However, if the estimator is unbiased and sufficient for \theta, then it has the smallest variance (and of course the smallest MSE), BUT ONLY among unbiased estimators. There could be a biased estimator with a smaller variance.

  6. #6 matheagle (MHF Contributor)
    I agree with (a) and (c), but I don't know how being unbiased and being an MLE makes this estimator minimum variance among all estimators. If the question were to show that this estimator is UMVUE, then since it's unbiased and sufficient, it's UMVUE.

  7. #7 eigenvector11 (Junior Member)
    An MLE is asymptotically efficient; that is, it achieves the Cramér-Rao lower bound as the sample size tends to infinity. So if the estimator achieves this lower bound, it must be a minimum variance estimator, I think? I guess the question is whether the sample size tends to infinity. The question states a random sample y1, y2, ..., yn of size n. Does this mean the sample size tends to infinity, or can we not assume that here?
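
    For reference, here is a sketch of that bound for this model (this computation is not in the thread, but it uses only the log-likelihood above); the estimator actually attains the Cramér-Rao lower bound exactly, for every n, not just asymptotically:

        % Fisher information per observation, with r known:
        % \log f(y;\theta) = -\log\Gamma(r) - r\log\theta + (r-1)\log y - y/\theta
        I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\log f(Y;\theta)\right]
                  = -\frac{r}{\theta^2} + \frac{2\,E(Y)}{\theta^3}
                  = \frac{r}{\theta^2},
        \qquad E(Y) = r\theta.

        % Lower bound for unbiased estimators from n observations, versus the
        % actual variance of (1/r)\bar{Y}, using Var(Y) = r\theta^2:
        \frac{1}{n\,I(\theta)} = \frac{\theta^2}{nr},
        \qquad
        \operatorname{Var}\left(\tfrac{1}{r}\bar{Y}\right)
          = \frac{1}{r^2}\cdot\frac{r\theta^2}{n} = \frac{\theta^2}{nr}.

    So among unbiased estimators this variance cannot be beaten for any n; what remains unclear is whether part b) means minimum variance over all estimators or only over unbiased ones.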

  8. #8 matheagle (MHF Contributor)
    I was wondering if you were asked to show it was minimum variance among just the unbiased estimators. I don't recall this property of MLEs. It's been a while, and I was looking over Bickel and Doksum just now.

  9. #9 matheagle (MHF Contributor)
    My problem with part b is that it doesn't say the competing estimators have to be unbiased.
    We can pick a lousy estimator that is biased and yet has a smaller variance. That's why I think it's a mistake that they left out "unbiased" here.
    For example, the constant 1 is a lousy estimator of theta, but its variance is zero.

  10. #10 eigenvector11 (Junior Member)
    Yes, matheagle, you were correct. I talked to my prof today about this question. He said that even though the MLE achieves the Cramér-Rao lower bound, that doesn't mean there isn't another estimator with a smaller variance. He said that I need to show that the variance of this estimator is smaller than that of all other estimators. I'm not exactly sure how to do this. Any ideas?

  11. #11 matheagle (MHF Contributor)
    Quote Originally Posted by eigenvector11:
    Yes, matheagle, you were correct. I talked to my prof today about this question. He said that even though the MLE achieves the Cramér-Rao lower bound, that doesn't mean there isn't another estimator with a smaller variance. He said that I need to show that the variance of this estimator is smaller than that of all other estimators. I'm not exactly sure how to do this. Any ideas?
    It still does not make sense to me.
    Chris did an excellent job all around.
    But I don't know how any estimator can have a smaller variance than 0.
    (That's rhetorical.)
    I still think your professor needs to include "unbiased" in part b.
