Could you please take a look and see whether my answer makes sense? I don't have a solution to this practice question. Also, I am not quite sure how to read the data in part (c).
Let $X_1, \dots, X_n$ be a random sample from an exponential distribution with density function
$$f(x; \theta) = \frac{1}{\theta} e^{-x/\theta}, \qquad x > 0,$$
where $\theta > 0$ is an unknown parameter.
(a) Obtain the likelihood ratio test statistic for testing $H_0\colon \theta = \theta_0$ against $H_1\colon \theta \neq \theta_0$.
(b) State the asymptotic distribution of twice the logarithm of the likelihood ratio test statistic under $H_0$.
(c) If $Y = 31$, will you reject the null hypothesis $H_0$ at a significance level of 5%?
The MLE for $\theta$ is $\hat{\theta} = \bar{X}$ (I checked against a few sources).
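For completeness, and assuming the mean parameterisation $f(x;\theta) = \theta^{-1}e^{-x/\theta}$, the MLE follows by maximising the log-likelihood:

```latex
\ell(\theta) = -n\log\theta - \frac{1}{\theta}\sum_{i=1}^n x_i,
\qquad
\ell'(\theta) = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^n x_i = 0
\quad\Longrightarrow\quad
\hat{\theta} = \bar{x}.
```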
Therefore my likelihood ratio is
$$\Lambda = \frac{L(\theta_0)}{L(\hat{\theta})} = \left(\frac{\bar{x}}{\theta_0}\right)^{\!n} e^{\,n - n\bar{x}/\theta_0}.$$
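As a quick numerical sanity check, here is a sketch that computes the likelihood ratio directly from the log-likelihoods, assuming the mean parameterisation so that $\hat{\theta} = \bar{X}$ and $\Lambda = (\bar{x}/\theta_0)^n e^{\,n - n\bar{x}/\theta_0}$; the sample below is made up, not from the exercise:

```python
import numpy as np

def exp_loglik(theta, x):
    """Log-likelihood of an i.i.d. Exponential(mean = theta) sample."""
    x = np.asarray(x, dtype=float)
    return -x.size * np.log(theta) - x.sum() / theta

def likelihood_ratio(theta0, x):
    """Lambda = L(theta0) / L(theta_hat), with theta_hat = sample mean (the MLE)."""
    theta_hat = np.mean(x)
    return np.exp(exp_loglik(theta0, x) - exp_loglik(theta_hat, x))

# Hypothetical data, just to exercise the function:
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=50)
print(likelihood_ratio(2.0, x))  # Lambda lies in (0, 1]
```

Since $\hat{\theta}$ maximises the likelihood, $\Lambda \le 1$ always, with values near 1 when $\theta_0$ fits the data well.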
(b) By Wilks' theorem, the asymptotic distribution under $H_0$ is
$$-2\log\Lambda \xrightarrow{d} \chi^2_1$$
(converges in distribution), with one degree of freedom because $H_0$ fixes a single parameter.
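A small Monte Carlo check of this, under the same mean parameterisation (for which $-2\log\Lambda = 2n(\bar{x}/\theta_0 - 1 - \log(\bar{x}/\theta_0))$); the values of $n$, $\theta_0$, and the replication count are arbitrary choices:

```python
import numpy as np

# Simulate -2 log Lambda under H0 and compare its mean to 1,
# the mean of a chi-square with 1 degree of freedom.
rng = np.random.default_rng(1)
theta0, n, reps = 2.0, 200, 5000

xbar = rng.exponential(scale=theta0, size=(reps, n)).mean(axis=1)
w = 2 * n * (xbar / theta0 - 1 - np.log(xbar / theta0))  # -2 log Lambda
print(w.mean())  # should be close to 1 for large n
```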
(c) Calculate the likelihood ratio from the given data.
Now I am not sure what "$Y$" is — if it is $\bar{X}$, then why is the MLE realisation ($\hat{\theta} = 31$) so far away from $\theta_0$?
Ideally I would use (b) to test the hypothesis.
Does the part (c) data make sense to you?
February 27th 2011, 05:41 AM
Part (c) is a little unclear. They could mean $\bar{X} = 31$ or $\sum_i X_i = 31$, which will lead to different answers, obviously. There's nothing wrong with the MLE being far away from the null value; that's what we want to be able to detect, since we are testing hypotheses.
It might not be worth the trouble, but you can do better in part (a). You don't need to use asymptotics to solve this problem.
February 27th 2011, 05:08 PM
Well, if you don't mind can you show me how you'd approach (a)? I am always open to improvements!
Let me finish (c) assuming 31 is the sample mean.
The log-likelihood ratio comes out at some ridiculously high number, or thereabouts. I assume double this is way outside the tail of the $\chi^2_1$ distribution, so reject the null.
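Spelling the decision out numerically: the reading $\bar{x} = 31$ follows the thread, but $n$ and $\theta_0$ below are hypothetical stand-ins, since the exercise's actual values are not visible here.

```python
import numpy as np

# Part (c) decision via Wilks' theorem.
# n and theta0 are HYPOTHETICAL; xbar = 31 follows the thread's reading.
n, theta0, xbar = 10, 5.0, 31.0

# -2 log Lambda for the exponential (mean theta) LRT:
w = 2 * n * (xbar / theta0 - 1 - np.log(xbar / theta0))
crit = 3.841  # 95th percentile of chi-square with 1 df
print(round(w, 2), "reject H0" if w > crit else "fail to reject H0")
```

For any plausible $n$ and $\theta_0$ well below 31, the statistic dwarfs the 3.841 cutoff, which matches the "ridiculously high number" conclusion above.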
February 27th 2011, 05:10 PM
Originally Posted by theodds
There's nothing wrong with the MLE being far away from the null value; that's what we want, since we are testing hypotheses.
Well, I guess real-world tests may have all kinds of results, but I wonder why bother running a test at an x% significance level if it is obvious to the naked eye that the null does not make sense?
There's nothing unusual about highly significant tests; it happens in practice all the time. Just because it is "obvious" that some parameter is (say) not equal to 0, that typically doesn't excuse the statistician from testing it.
To do better on (a), it turns out that you can write the test as: reject when $\bar{X} \le c_1$ or $\bar{X} \ge c_2$ for appropriately chosen $c_1 < c_2$. You need $P_{\theta_0}(\bar{X} \le c_1) + P_{\theta_0}(\bar{X} \ge c_2) = \alpha$, and you also need $c_1, c_2$ chosen so that $\bar{X} = c_1$ and $\bar{X} = c_2$ give the same value of the LR test statistic (the second restriction is correct up to a stupid mistake on my part). You should think along the lines of the inverse image of the LR statistic, and think in terms of what values you are rejecting. You have to solve these equations numerically, I think.
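The numerical solution can be sketched as follows. Under $H_0$ and the mean parameterisation, $\bar{X} \sim \text{Gamma}(n, \text{scale} = \theta_0/n)$, and the LR statistic takes equal values at $c_1$ and $c_2$ exactly when $h(c_1/\theta_0) = h(c_2/\theta_0)$ with $h(t) = t - \log t$. The values of $n$ and $\theta_0$ are hypothetical placeholders:

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import brentq

# Exact (non-asymptotic) LRT for H0: theta = theta0, Exponential(mean theta).
# Under H0, the sample mean is Gamma(n, scale = theta0/n).
# n and theta0 are HYPOTHETICAL -- the exercise's values are not visible.
n, theta0, alpha = 10, 5.0, 0.05

def h(t):
    """h(t1) = h(t2) iff the LR statistic is equal at c = t1*theta0, t2*theta0."""
    return t - np.log(t)

def upper_cutoff(t1):
    """Given t1 < 1, find t2 > 1 with h(t2) = h(t1)."""
    return brentq(lambda t: h(t) - h(t1), 1.0, 1e3)

def size(t1):
    """P(reject | H0) for cutoffs c1 = t1*theta0, c2 = upper_cutoff(t1)*theta0."""
    dist = gamma(a=n, scale=theta0 / n)
    return dist.cdf(t1 * theta0) + dist.sf(upper_cutoff(t1) * theta0)

# Pick t1 so the test has size alpha; the matching t2 is then determined.
t1 = brentq(lambda t: size(t) - alpha, 1e-6, 0.999)
c1 = t1 * theta0
c2 = upper_cutoff(t1) * theta0
print(c1, c2)  # reject H0 when the sample mean is <= c1 or >= c2
```

The inner root-find pins down the equal-LR condition, and the outer root-find enforces the size condition; this mirrors the two restrictions described above.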