1. ## Better estimator

For a Uniform $\displaystyle U(0, \theta)$

The Maximum Likelihood Estimator of $\displaystyle \theta$ is $\displaystyle X_{(n)}$

Also, the sufficient statistic for $\displaystyle \theta$ is $\displaystyle X_{(n)}$

and $\displaystyle E[X] = \frac{\theta}{2}$

Then the estimators of $\displaystyle \theta$ are found as

$\displaystyle \hat \theta$ = $\displaystyle 2 \bar X$

and

$\displaystyle \hat \theta = \frac{n}{n+1} \theta$

Which one is the better estimator? What is the process to find it?

I am wondering if the answer involves calculating the variance.

2. It does. Calculate the variance for both of them and use the one that is smaller. Note that we usually cannot get rid of the $\displaystyle n$ term in the denominator of the variance, but we can decrease the coefficients and such.
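For instance, a quick Monte Carlo sketch of that comparison (assuming the second estimator is the bias-corrected maximum $\displaystyle \frac{n+1}{n}X_{(n)}$, the form the thread settles on later; the values of $\displaystyle \theta$, $\displaystyle n$, and the replication count are illustrative):

```python
import numpy as np

# Illustrative sketch: compare the empirical variances of two unbiased
# estimators of theta from Uniform(0, theta) samples.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 10, 200_000

samples = rng.uniform(0.0, theta, size=(reps, n))

mom = 2.0 * samples.mean(axis=1)          # method of moments: 2 * Xbar
mle = (n + 1) / n * samples.max(axis=1)   # bias-corrected MLE: (n+1)/n * X(n)

print("Var(2 Xbar)        ~", mom.var(), "  theory:", theta**2 / (3 * n))
print("Var((n+1)/n X(n))  ~", mle.var(), "  theory:", theta**2 / (n * (n + 2)))
```

Both estimators are unbiased, so the one with the smaller variance is better; with these numbers the order-statistic estimator comes out well ahead.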

3. Originally Posted by Anonymous1
It does. Calculate the variance for both of them and use the one that is smaller. Note that we usually cannot get rid of the $\displaystyle n$ term in the denominator of the variance, but we can decrease the coefficients and such.
I am finding it difficult to calculate the variance from $\displaystyle \hat \theta$ . Can you show me some beginning steps of how to start this?

$\displaystyle Var(\hat\theta) = 4\,Var(\bar X) = \frac{4}{n}\left[E[X^2]- (E[X])^2\right].$

Now what is $\displaystyle E[\bar X]?$

5. Originally Posted by Anonymous1
$\displaystyle Var(\hat\theta) = 4\,Var(\bar X) = \frac{4}{n}\left[E[X^2]- (E[X])^2\right].$

Now what is $\displaystyle E[\bar X]?$
$\displaystyle E[X] =\int_0^\theta \frac{x}{\theta}\,dx = \frac{\theta}{2}$

and ,

$\displaystyle E[X^2] =\int_0^\theta \frac{x^2}{\theta}\,dx = \frac{\theta^2}{3}$

So,

$\displaystyle Var(\hat\theta) = \frac{4}{n}\left[\frac{\theta^2}{3} - \frac{\theta^2}{4}\right] = \frac{\theta^2}{3n}$

is this correct?

How do I go with the second one?
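As a sanity check on those integrals (a sketch, with $\displaystyle \theta = 2$ and $\displaystyle n = 5$ chosen arbitrarily), a midpoint-rule integration reproduces $\displaystyle E[X]$, $\displaystyle E[X^2]$, and $\displaystyle Var(\hat\theta) = \theta^2/(3n)$:

```python
# Midpoint-rule check of E[X], E[X^2], and Var(2*Xbar) for Uniform(0, theta).
theta, N = 2.0, 200_000
dx = theta / N

EX = EX2 = 0.0
for i in range(N):
    x = (i + 0.5) * dx            # midpoint of the i-th slice of [0, theta]
    EX  += (x / theta) * dx       # integrand x * f(x), with f(x) = 1/theta
    EX2 += (x * x / theta) * dx   # integrand x^2 * f(x)

n = 5
var_hat = (4.0 / n) * (EX2 - EX * EX)   # Var(2*Xbar) = theta^2 / (3n)

print(EX, EX2, var_hat)   # ~ 1.0, ~ 1.33333, ~ 0.26667
```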

6. Just to be clear,

$\displaystyle \bar X = \frac{1}{n} \sum^n x_i$
$\displaystyle E[\bar X] = \frac{1}{n} \sum^n E[x_i] = \frac{1}{n} \sum^n\frac{\theta}{2} = \frac{\theta}{2}$

This is why there is an $\displaystyle n$ in your variance.

So, $\displaystyle Var(\hat \theta) = (\frac{n}{n+1})^2 \times Var(\theta).$

Now, what is $\displaystyle Var(\theta)?$

7. Originally Posted by Anonymous1
Just to be clear,

$\displaystyle \bar X = \frac{1}{n} \sum^n x_i$
$\displaystyle E[\bar X] = \frac{1}{n} \sum^n E[x_i] = \frac{1}{n} \sum^n\frac{\theta}{2} = \frac{\theta}{2}$

This is why there is an $\displaystyle n$ in your variance.

So, $\displaystyle Var(\hat \theta) = (\frac{n}{n+1})^2 \times Var(\theta).$

Now, what is $\displaystyle Var(\theta)?$
Thank you, that makes it clearer.

So $\displaystyle E[\bar X] = \frac{\theta}{2}$.

Likewise,

$\displaystyle Var(\bar X) = \frac{1}{n^2} \sum^n Var(x_i) = \frac{1}{n^2} \sum^n\frac{\theta^2}{12} = \frac{\theta^2}{12n}$

?

As, you have asked for the second one,

$\displaystyle Var(\theta) = \frac{\theta^2}{12}$

Is it?


9. Originally Posted by harish21
$\displaystyle Var(\bar X) = \frac{1}{n^2} \sum^n Var(x_i) = \frac{1}{n^2} \sum^n\frac{\theta^2}{12} = \frac{\theta^2}{12n}$
I think it is important to notice that averaging brings in the $\displaystyle n$: the sample size is always going to affect our dispersion.

Now, to finish the variance of the second estimator, multiply $\displaystyle Var(\theta)$ by all that $\displaystyle n$ stuff.

One of your estimators was found by MLE and the other by the method of moments. Which one is better? Why?

10. Originally Posted by Anonymous1
I think it is important to notice that averaging brings in the $\displaystyle n$: the sample size is always going to affect our dispersion.

Now, to finish the variance of the second estimator, multiply $\displaystyle Var(\theta)$ by all that $\displaystyle n$ stuff.

One of your estimators was found by MLE and the other by the method of moments. Which one is better? Why?
For the variance of the second estimator, isn't it sufficient to state that:

$\displaystyle Var(\hat \theta) = (\frac{n}{n+1})^2 \times \frac{{\theta}^2}{12}.$

11. 1. Variance of a constant is zero... $\displaystyle V(\theta)=0$

you want $\displaystyle V(\hat\theta)$

2. Moreover, if $\displaystyle \hat\theta$ is your largest order stat, then it's less than $\displaystyle \theta$,

so to make it unbiased you need to multiply by a number bigger than 1, not smaller.

So you probably want $\displaystyle {n+1\over n}X_{(n)}$

vs. the method of moment estimator $\displaystyle 2\bar X$
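A quick simulation sketch of that bias point (all constants illustrative): the raw maximum sits below $\displaystyle \theta$ on average, multiplying by $\displaystyle (n+1)/n$ recenters it, while multiplying by $\displaystyle n/(n+1)$ pushes it even further down:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 1.0, 10, 200_000

# Largest order statistic of each Uniform(0, theta) sample of size n.
xmax = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)

print("E[X(n)]          ~", xmax.mean())                    # ~ n*theta/(n+1), below theta
print("E[(n+1)/n X(n)]  ~", ((n + 1) / n * xmax).mean())    # ~ theta (unbiased)
print("E[n/(n+1) X(n)]  ~", ((n / (n + 1)) * xmax).mean())  # even further below theta
```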

Originally Posted by harish21
For a Uniform $\displaystyle U(0, \theta)$

The Maximum Likelihood Estimator of $\displaystyle \theta$ is $\displaystyle X_{(n)}$

Also, the sufficient statistic for $\displaystyle \theta$ is $\displaystyle X_{(n)}$

and $\displaystyle E[X] = \frac{\theta}{2}$

Then the estimators of $\displaystyle \theta$ are found as

$\displaystyle \hat \theta$ = $\displaystyle 2 \bar X$

and

$\displaystyle \hat \theta = \frac{n}{n+1} \theta$

Which one is the better estimator? What is the process to find it?

I am wondering if the answer involves calculating the variance.

12. Originally Posted by matheagle

1. Variance of a constant is zero... $\displaystyle V(\theta)=0$

you want $\displaystyle V(\hat\theta)$

2. Moreover, if $\displaystyle \hat\theta$ is your largest order stat, then it's less than $\displaystyle \theta$,

so to make it unbiased you need to multiply by a number bigger than 1, not smaller.

So you probably want $\displaystyle {n+1\over n}X_{(n)}$

vs. the method of moment estimator $\displaystyle 2\bar X$
Matheagle,

How is $\displaystyle V(\theta) = 0??$

13. Originally Posted by harish21
Matheagle,

How is $\displaystyle V(\theta) = 0??$
You're confusing parameters and statistics:
parameters are unknown constants;
statistics are random variables.

$\displaystyle \theta$ is our unknown constant/parameter that we are estimating
with our stats $\displaystyle \bar X$ and $\displaystyle X_{(n)}$

14. Originally Posted by matheagle
You're confusing parameters and statistics:
parameters are unknown constants;
statistics are random variables.

$\displaystyle \theta$ is our unknown constant/parameter that we are estimating
with our stats $\displaystyle \bar X$ and $\displaystyle X_{(n)}$
So the two estimators, of which we are trying to find the better one, are:

$\displaystyle Var(\hat\theta) = 4\,Var(\bar X) = \frac{4}{n}\left[E[X^2]- (E[X])^2\right]$

vs.

$\displaystyle Var(\hat\theta)={n+1\over n}X_{(n)}$

??

are these the ones that are to be compared?

15. I took off the variance; $\displaystyle {n+1\over n}X_{(n)}$ is the estimator itself, not its variance.
BUT in order to determine whether that $\displaystyle (n+1)/n$ in front of the largest order stat is correct, you will need the density of that order stat.
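That density is a standard order-statistic calculation for $\displaystyle U(0,\theta)$; sketching it here for reference:

$\displaystyle F_{X_{(n)}}(x) = P(X_{(n)} \le x) = P(X_1 \le x)^n = \left(\frac{x}{\theta}\right)^n, \quad 0 \le x \le \theta$

$\displaystyle f_{X_{(n)}}(x) = \frac{n x^{n-1}}{\theta^n}$

$\displaystyle E[X_{(n)}] = \int_0^\theta x \, \frac{n x^{n-1}}{\theta^n}\, dx = \frac{n\theta}{n+1}, \qquad E[X_{(n)}^2] = \int_0^\theta x^2 \, \frac{n x^{n-1}}{\theta^n}\, dx = \frac{n\theta^2}{n+2}$

$\displaystyle Var(X_{(n)}) = \frac{n\theta^2}{n+2} - \left(\frac{n\theta}{n+1}\right)^2 = \frac{n\theta^2}{(n+1)^2(n+2)}$

So $\displaystyle E\left[\frac{n+1}{n}X_{(n)}\right] = \theta$, which confirms the $\displaystyle (n+1)/n$ factor.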

Originally Posted by harish21
So the two estimators, of which we are trying to find the better one, are:

$\displaystyle Var(\hat\theta) = 4\,Var(\bar X) = \frac{4}{n}\left[E[X^2]- (E[X])^2\right]$

vs.

$\displaystyle \hat\theta={n+1\over n}X_{(n)}$

??

are these the ones that are to be compared?
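Putting the thread's results together (a sketch; $\displaystyle Var\left(\frac{n+1}{n}X_{(n)}\right) = \frac{\theta^2}{n(n+2)}$ is the standard result that follows from the density of $\displaystyle X_{(n)}$):

```python
# Compare the exact variances (in units of theta^2) of the two unbiased
# estimators: method of moments 2*Xbar vs bias-corrected MLE (n+1)/n * X(n).
for n in (2, 5, 10, 100):
    v_mom = 1.0 / (3 * n)        # Var(2*Xbar) / theta^2
    v_mle = 1.0 / (n * (n + 2))  # Var((n+1)/n * X(n)) / theta^2
    print(f"n={n:3d}  MoM: {v_mom:.5f}  MLE: {v_mle:.5f}  MLE smaller: {v_mle < v_mom}")
```

Since $\displaystyle n+2 > 3$ whenever $\displaystyle n > 1$, the estimator built on the sufficient statistic $\displaystyle X_{(n)}$ always has the smaller variance, which answers the original question.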
