
Math Help - better estimator

  1. #1 harish21

    better estimator

    For a Uniform  U(0, \theta)

    The Maximum Likelihood Estimator of \theta is  X_{(n)}

    Also, the sufficient statistic for \theta is  X_{(n)}

    and E[X] = \frac{\theta}{2}

    Then the estimators of \theta are found to be

    \hat \theta = 2 \bar X

    and

    \hat \theta = \frac{n}{n+1} \theta

    Which one is the better estimator? What is the process to find it?

    I am wondering if the answer involves calculating the variance.

  2. #2 Anonymous1
    It does. Calculate the variance of each estimator and prefer the one with the smaller variance. Note that we usually cannot get rid of the n in the denominator of the variance, but we can reduce the constant in front of it.

  3. #3 harish21
    Quote Originally Posted by Anonymous1 View Post
    It does. Calculate the variance of each estimator and prefer the one with the smaller variance. Note that we usually cannot get rid of the n in the denominator of the variance, but we can reduce the constant in front of it.
    I am finding it difficult to calculate the variance of \hat \theta. Can you show me some beginning steps?

  4. #4 Anonymous1
    Var(\hat\theta) = Var(2\bar X) = 4\,Var(\bar X) = \frac{4}{n}\,Var(X) = \frac{4}{n}\left[E[X^2] - (E[X])^2\right].

    Now what is E[\bar X]?

  5. #5 harish21
    Quote Originally Posted by Anonymous1 View Post
    Var(\hat\theta) = Var(2\bar X) = 4\,Var(\bar X) = \frac{4}{n}\,Var(X) = \frac{4}{n}\left[E[X^2] - (E[X])^2\right].

    Now what is E[\bar X]?
    E[X] = \int_0^\theta \frac{x}{\theta}\,dx = \frac{\theta}{2}

    and,

    E[X^2] = \int_0^\theta \frac{x^2}{\theta}\,dx = \frac{\theta^2}{3}

    So,

     Var(\hat\theta) = \frac{4}{n}\left[\frac{\theta^2}{3} - \frac{\theta^2}{4}\right] = \frac{\theta^2}{3n}

    is this correct?

    How do I proceed with the second one?
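
    A quick Monte Carlo sanity check of this value (just a sketch; the choices \theta = 5, n = 10, and the replication count are arbitrary, not from the thread):

    ```python
    # Monte Carlo check that Var(2*Xbar) is close to theta^2/(3n) for U(0, theta).
    # theta, n, and reps are arbitrary illustration values.
    import random

    random.seed(0)
    theta, n, reps = 5.0, 10, 200_000

    estimates = []
    for _ in range(reps):
        sample = [random.uniform(0, theta) for _ in range(n)]
        estimates.append(2 * sum(sample) / n)  # the estimator 2*Xbar

    mean_est = sum(estimates) / reps
    var_est = sum((e - mean_est) ** 2 for e in estimates) / reps

    print(var_est)             # empirical variance of 2*Xbar
    print(theta**2 / (3 * n))  # theoretical value theta^2/(3n)
    ```

    The two printed values should agree to a couple of decimal places.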

  6. #6 Anonymous1
    Just to be clear,

    \bar X = \frac{1}{n} \sum_{i=1}^n X_i
    E[\bar X] = \frac{1}{n} \sum_{i=1}^n E[X_i] = \frac{1}{n} \sum_{i=1}^n \frac{\theta}{2} = \frac{\theta}{2}

    This is why there is an n in your variance.

    So, Var(\hat \theta) = (\frac{n}{n+1})^2 \times Var(\theta).

    Now, what is Var(\theta)?
    Last edited by Anonymous1; March 16th 2010 at 09:35 PM.

  7. #7 harish21
    Quote Originally Posted by Anonymous1 View Post
    Just to be clear,

    \bar X = \frac{1}{n} \sum_{i=1}^n X_i
    E[\bar X] = \frac{1}{n} \sum_{i=1}^n E[X_i] = \frac{1}{n} \sum_{i=1}^n \frac{\theta}{2} = \frac{\theta}{2}

    This is why there is an n in your variance.

    So, Var(\hat \theta) = (\frac{n}{n+1})^2 \times Var(\theta).

    Now, what is Var(\theta)?
    Thank you. I think it becomes clearer now.
    So that gives E[\bar X] = \frac{\theta}{2}.

    Likewise,

    E[\bar X^2] = \frac{1}{n^2}\sum_{i=1}^n E[X_i^2] + \frac{1}{n^2}\sum_{i \neq j} E[X_i]E[X_j] = \frac{\theta^2}{3n} + \frac{(n-1)\theta^2}{4n}

    ?

    As for the second one you asked about,

    Var(\theta) = \frac{\theta^2}{12}

    Is that right?

  9. #9
    Super Member Anonymous1's Avatar
    Joined
    Nov 2009
    From
    Big Red, NY
    Posts
    517
    Thanks
    1
    Quote Originally Posted by harish21 View Post
    E[\bar X^2] = \frac{1}{n^2}\sum_{i=1}^n E[X_i^2] + \frac{1}{n^2}\sum_{i \neq j} E[X_i]E[X_j] = \frac{\theta^2}{3n} + \frac{(n-1)\theta^2}{4n}
    I think it is important to notice that the second moment brings in the n. This means our sample size is always going to affect our dispersion.

    Now, to finish the variance of the second estimator, multiply Var(\theta) by all that n stuff.

    One of your estimators was found by MLE and the other by the method of moments. Which one is better? Why?

  10. #10 harish21
    Quote Originally Posted by Anonymous1 View Post
    I think it is important to notice that the second moment brings in the n. This means our sample size is always going to effect our dispersion.

    Now to finish the variance of the second estimator multiply Var(\theta) by all that n stuffs.

    One of your estimators was found by MLE and the other by the method of moments. Which one is better? Why?
    For the variance of the second estimator, isn't it sufficient to state that:

    Var(\hat \theta) = (\frac{n}{n+1})^2 \times \frac{{\theta}^2}{12}.

  11. #11 matheagle
    Two comments:

    1. The variance of a constant is zero: V(\theta)=0

    you want V(\hat\theta)

    2. Moreover, if \hat\theta is your largest order stat then it's less than \theta,

    so to make it unbiased you need to multiply by a number bigger than 1, not smaller.

    So you probably want {n+1\over n}X_{(n)}

    vs. the method of moments estimator 2\bar X


    Quote Originally Posted by harish21 View Post
    For a Uniform  U(0, \theta)

    The Maximum Likelihood Estimator of \theta is  X_{(n)}

    Also, the sufficient statistic for \theta is  X_{(n)}

    and E[X] = \frac{\theta}{2}

    Then the estimators of \theta are found to be

    \hat \theta = 2 \bar X

    and

    \hat \theta = \frac{n}{n+1} \theta

    Which one is the better estimator? What is the process to find it?

    I am wondering if the answer involves calculating the variance.
    Last edited by matheagle; March 16th 2010 at 11:13 PM.
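
    A small simulation can illustrate matheagle's point (just a sketch; the values of \theta, n, and the replication count are arbitrary choices): both 2\bar X and the rescaled largest order statistic {n+1\over n}X_{(n)} are centered at \theta, but the latter is far less dispersed.

    ```python
    # Compare the method-of-moments estimator 2*Xbar with the rescaled MLE
    # (n+1)/n * X_(n) for U(0, theta). Values below are arbitrary choices.
    import random

    random.seed(1)
    theta, n, reps = 5.0, 10, 200_000

    mom, mle = [], []
    for _ in range(reps):
        s = [random.uniform(0, theta) for _ in range(n)]
        mom.append(2 * sum(s) / n)        # 2*Xbar
        mle.append((n + 1) / n * max(s))  # (n+1)/n * X_(n)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Both means should sit near theta, but the order-statistic estimator
    # should show a much smaller variance.
    print(sum(mom) / reps, var(mom))
    print(sum(mle) / reps, var(mle))
    ```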

  12. #12 harish21
    Quote Originally Posted by matheagle View Post
    Two comments:

    1. The variance of a constant is zero: V(\theta)=0

    you want V(\hat\theta)

    2. Moreover, if \hat\theta is your largest order stat then it's less than \theta,

    so to make it unbiased you need to multiply by a number bigger than 1, not smaller.

    So you probably want {n+1\over n}X_{(n)}

    vs. the method of moments estimator 2\bar X
    Matheagle,

    How is V(\theta) = 0??

  13. #13 matheagle
    Quote Originally Posted by harish21 View Post
    Matheagle,

    How is V(\theta) = 0??
    you're confusing parameters and statistics.
    parameters are unknown CONSTANTs
    statistics are random variables

    \theta is our unknown constant/parameter that we are estimating
    with our stats \bar X and X_{(n)}

  14. #14 harish21
    Quote Originally Posted by matheagle View Post
    you're confusing parameters and statistics.
    parameters are unknown CONSTANTs
    statistics are random variables

    \theta is our unknown constant/parameter that we are estimating
    with our stats \bar X and X_{(n)}
    So the two estimators, of which we are trying to find the better one, are:

    Var(\hat\theta) = Var(2\bar X) = 4\,Var(\bar X) = \frac{4}{n}\,Var(X) = \frac{4}{n}\left[E[X^2] - (E[X])^2\right].

    vs.

    Var(\hat\theta)={n+1\over n}X_{(n)}

    ??

    are these the ones that are to be compared?

  15. #15 matheagle
    I took off the variance.
    BUT in order to determine whether that (n+1)/n in front of the largest order stat is correct, you will need the density of that order stat.

    Quote Originally Posted by harish21 View Post
    So the two estimators, of which we are trying to find the better one, are:

    Var(\hat\theta) = Var(2\bar X) = 4\,Var(\bar X) = \frac{4}{n}\,Var(X) = \frac{4}{n}\left[E[X^2] - (E[X])^2\right].

    vs.

    \hat\theta={n+1\over n}X_{(n)}

    ??

    are these the ones that are to be compared?
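
    For reference, the standard computation using the density of the largest order statistic (the step matheagle points to) goes as follows:

    f_{X_{(n)}}(x) = \frac{n x^{n-1}}{\theta^n}, \quad 0 < x < \theta

    E[X_{(n)}] = \int_0^\theta x \cdot \frac{n x^{n-1}}{\theta^n}\,dx = \frac{n\theta}{n+1}

    so \frac{n+1}{n} X_{(n)} is unbiased for \theta. Likewise E[X_{(n)}^2] = \frac{n\theta^2}{n+2}, so

    Var\left(\frac{n+1}{n} X_{(n)}\right) = \left(\frac{n+1}{n}\right)^2\left[\frac{n\theta^2}{n+2} - \frac{n^2\theta^2}{(n+1)^2}\right] = \frac{\theta^2}{n(n+2)}

    which is smaller than Var(2\bar X) = \frac{\theta^2}{3n} for every n > 1.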