# need help to construct UMPT (uniformly most powerful test)

• Feb 21st 2011, 04:34 AM
Volga
need help to construct UMPT (uniformly most powerful test)
Question.

Let $\displaystyle Y_1,... Y_n$ be a random sample from a Poisson distribution with mean $\displaystyle \mu>0$ unknown. Construct the uniformly most powerful test for testing hypotheses

$\displaystyle H_0: \mu=\mu_0; H_1: \mu>\mu_0$

Suppose n=10, sample mean is 2.42, and $\displaystyle \mu_0$=1.5. Will you reject Ho at a significance level of 5%?

I started working through it by the book and got stuck. I know it is a routine problem; statisticians do this all the time. I only need to do it once, to understand. )))

Start by writing down the likelihood function
$\displaystyle L_Y(\mu;y)=\frac{\mu^{\sum_{i=1}^n Y_i}\,e^{-n\mu}}{\prod_{i=1}^n Y_i!}$

By Neyman-Pearson lemma,

$\displaystyle P_{\mu_0}\left(\frac{L_Y(\mu;y)}{L_Y(\mu_0;y)}>k_{\alpha}\right)=\alpha$

(my book uses the likelihood over the total sample space in the numerator)

So I find the ratio (I replaced $\displaystyle \sum_{i=1}^n Y_i$ with $\displaystyle n\bar{Y}$):

$\displaystyle \frac{L_Y(\mu;y)}{L_Y(\mu_0;y)}=\left(\frac{\mu}{\mu_0}\right)^{n\bar{Y}}e^{(\mu_0-\mu)n}$

Now then, I will reject Ho if this ratio is large. The ratio is larger the larger $\displaystyle \bar{Y}$ is. But how large should it be (at, say, the 5% significance level)?

Here is where I am stuck. I don't know the distribution of this ratio. How do I decide on the critical region (which will be a most powerful region, I reckon)? Do I need to know a distribution? Do I use the asymptotic distribution of 2 log of the likelihood ratio? Do I use a Poisson table? When it comes to a most powerful test, my head is a mess.

thank you !
• Feb 21st 2011, 06:34 AM
Sambit
Well...you do know how large it should be
Quote:

Originally Posted by Volga

By Neyman-Pearson lemma,

$\displaystyle P_{\mu_0}\left(\frac{L_Y(\mu;y)}{L_Y(\mu_0;y)}>k_{\alpha}\right)=\alpha$

That is, according to Neyman-Pearson's lemma, the ratio you have obtained will be greater than a constant 'K' where P(your ratio > K) =$\displaystyle \alpha$, assuming the null hypothesis to be true. First find the value of this constant 'K'.
• Feb 21st 2011, 01:16 PM
Volga
To find k, I don't have $\displaystyle \mu$. Can I use the sample mean $\displaystyle \bar{Y}$ in place of $\displaystyle \mu$? If so, do I have to prove somewhere that the sample mean is a sufficient statistic for $\displaystyle \mu$, or that it is the MLE for $\displaystyle \mu$?
• Feb 21st 2011, 03:11 PM
theodds
You need to plug in the restricted and full parameter space MLE's into the ratio. Depending on the observed Y, these might be the same thing in which case the ratio will be 1 and you obviously wouldn't reject. If they aren't the same thing then you work with the ratio.

The trick with these is that you usually don't need to know the distribution of the ratio. You can usually write the test as a function of the complete sufficient statistic and (hopefully) the ratio will be monotone in it (unimodal isn't the end of the world either, but monotone is best); thus you can base the test just on the complete sufficient statistic.
• Feb 22nd 2011, 01:45 AM
Volga
Quote:

Originally Posted by theodds
You need to plug in the restricted and full parameter space MLE's into the ratio. Depending on the observed Y, these might be the same thing in which case the ratio will be 1 and you obviously wouldn't reject. If they aren't the same thing then you work with the ratio.

Right. Let me try this.

MLE (H_1) for Poisson distribution is $\displaystyle \hat{\mu}=\frac{1}{n}\Sigma_{i=1}^nY_i=\bar{Y}, \bar{Y}>\mu_0$
MLE (H_0) is $\displaystyle \mu_0$

then the LR is $\displaystyle r=\frac{\bar{Y}}{\mu_0}\,I_{[\mu_0,\infty)}(\bar{Y})$ and

$\displaystyle r=1, \bar{Y}=\mu_0;$
$\displaystyle r>1, \bar{Y}>\mu_0;$
$\displaystyle r=0, \bar{Y}<\mu_0$

Have I thereby constructed the UMPT? I am concerned that I don't have n anywhere in my formula. Does it matter?

Now, what about the given test results and rejecting/not rejecting Ho? $\displaystyle \bar{Y}=2.42>\mu_0=1.5$, so r>1. What about checking at the given significance level, 5%?

Suppose I use $\displaystyle 2\ln(r)\sim\chi^2(1)$?

$\displaystyle 2\ln\left(\frac{2.42}{1.5}\right)\approx 0.957$, which is between 0.3 and 0.4 in my chi-square table (1 degree of freedom), which is definitely lower than 0.95. Should I reject Ho then?
• Feb 22nd 2011, 02:15 AM
Volga
Quote:

Originally Posted by theodds
The trick with these is that you usually don't need to know the distribution of the ratio. You can usually write the test as a function of the complete sufficient statistic and (hopefully) the ratio will be a monotone in it (unimodal isn't the end of the world either, but monotone is best); thus you can base the test just on the complete sufficient statistic.

The reason I feel I need a distribution is that in the second part of the question I am given a 5% significance level, and therefore I think I need to base my decision on a probability distribution.

PS I've just looked up Casella and Berger, Statistical Inference. Question 8.31 (page 406) asks to find the UMP test for a Poisson sample, and part (b) asks to use the Central Limit Theorem to determine the sample size n needed to achieve certain error levels. So it is also possible to use N(0,1)??? How do I know when that is OK and when it is not?

THANK YOU
• Feb 22nd 2011, 07:16 AM
theodds
Quote:

Originally Posted by Volga
The reason I feel I need a distribution is that in the second part of the question I am given a 5% significance level, and therefore I think I need to base my decision on a probability distribution.

PS I've just looked up Casella and Berger, Statistical Inference. Question 8.31 (page 406) asks to find the UMP test for a Poisson sample, and part (b) asks to use the Central Limit Theorem to determine the sample size n needed to achieve certain error levels. So it is also possible to use N(0,1)??? How do I know when that is OK and when it is not?

THANK YOU

Of course it will be based on a distribution, but the point is that you can base it on the complete sufficient statistic which will have a distribution that you know, whereas likelihood ratios don't necessarily have familiar distributions.

I assumed you hadn't learned about MLR yet since it looks like you are taking a Neyman-Pearson Lemma angle at this problem, but from looking at it in the text you should know about MLR. $\displaystyle T = \sum X_i$ is complete sufficient and is Poisson, so we have MLR in T. Hence, rejecting when T is big is UMP for this hypothesis. The UMP test should be

$\displaystyle \displaystyle \phi(T) = \begin{cases} 1 \qquad &T > K \\ \gamma \qquad &T = K \\ 0 \qquad & T < K \end{cases}$
where $\displaystyle \gamma$ and K are chosen so that $\displaystyle \mbox{E}_{\lambda = \lambda_0} \phi(T) = \alpha$ for the desired alpha; we reject with probability $\displaystyle \phi(T)$. I don't think Casella and Berger get into randomized tests, so maybe you should ignore the gamma part.
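
In case it helps to see the numbers: here is a stdlib-only Python sketch (not from the thread, and the function names are my own) that finds K and $\displaystyle \gamma$ for the exact randomized test above, using n = 10, $\displaystyle \mu_0 = 1.5$, $\displaystyle \alpha = 0.05$ from the question.

```python
import math

def poisson_pmf(k, lam):
    # P(T = k) for T ~ Poisson(lam), computed in log space for stability
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def randomized_ump(mu0, n, alpha):
    """Find K and gamma with P(T > K) + gamma * P(T = K) = alpha under H0,
    where T = sum of the sample is Poisson(n * mu0)."""
    lam = n * mu0
    K, cdf = 0, poisson_pmf(0, lam)
    while 1.0 - cdf > alpha:          # smallest K with P(T > K) <= alpha
        K += 1
        cdf += poisson_pmf(K, lam)
    tail = 1.0 - cdf                  # P(T > K)
    gamma = (alpha - tail) / poisson_pmf(K, lam)
    return K, gamma

K, gamma = randomized_ump(1.5, 10, 0.05)
print(K, round(gamma, 3))
```

The non-randomized version just drops gamma and rejects when T > K, giving a size slightly below 5%.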

The CLT thing is just asking you to use $\displaystyle T \stackrel{\cdot}{\sim} N(n\lambda, n\lambda)$ to do a power calculation. It's an approximation, of course. I think for the Poisson, a mean of 20 is probably enough for it to be reasonable (just based on the standard deviation relative to the mean).
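
For the question's numbers, the CLT version can be sketched like this (my own illustration, assuming the usual one-sided 5% critical value 1.645 for N(0,1)):

```python
import math

# CLT approximation: under H0, T = sum Y_i is approximately
# N(n*mu0, n*mu0), since a Poisson mean equals its variance.
n, mu0, ybar = 10, 1.5, 2.42          # numbers from the question
T = n * ybar                          # observed total, 24.2
z = (T - n * mu0) / math.sqrt(n * mu0)
reject = z > 1.645                    # one-sided 5% point of N(0,1)
print(round(z, 3), reject)
```

The z statistic comes out well above 1.645, so the approximate test agrees with the exact one: reject H0.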
• Feb 22nd 2011, 06:09 PM
Volga
Quote:

Originally Posted by theodds
I assumed you hadn't learned about MLR yet since it looks like you are taking a Neyman-Pearson Lemma angle at this problem, but from looking at it in the text you should know about MLR.

by MLR, do you mean M...mum Likelihood Ratio (test)? it is called LRT in Casella/Berger and my study guide. (Maximum or Minimum in the name, I guess, will depend on what you put in the numerator/denominator, and what you are testing)

I (supposedly) have learnt about LRT, and I am struggling to place Neyman-Pearson, LRT, MLE (and sufficient statistics) together into one coherent picture. My study guide is a collection of 'useful' theorems and their proofs, but it has no one description of the 'method' to which these theorems are relevant.
• Feb 22nd 2011, 06:13 PM
theodds
Quote:

Originally Posted by Volga
by MLR, do you mean M...mum Likelihood Ratio (test)? it is called LRT in Casella/Berger and my study guide. I (supposedly) have learnt about LRT, and I am struggling to place Neyman-Pearson, LRT, MLE (and sufficient statistics) together into one coherent picture. My study guide is a collection of 'useful' theorems and their proofs, but it has no one description of the 'method' to which these theorems are relevant. If there is indeed a method.

Monotone Likelihood Ratio. I know Casella and Berger cover it.
• Feb 22nd 2011, 06:21 PM
Volga
Oh, no. My study guide is an undergrad text in statistical inference and it does not have that in the syllabus (nor in the body of the text). It only refers to Casella and Berger sporadically, and it is much shallower than Casella and Berger. I wonder if there is a way to solve this exam-style question with 'baby' methods (not using the monotone LR).
• Feb 22nd 2011, 06:29 PM
Volga
I don't know if it's helpful, but I've attached the relevant chapter from the study guide; this question is on page 19 and the assumed knowledge is on page 23.

Attachment 20903
• Feb 22nd 2011, 07:04 PM
theodds
Okay, it may be more instructive if I just do it. Also, ignore my initial post in this thread; for some reason I thought we were making an LRT, but everything after that is fine. Consider testing H: lambda = lambda_0 against H': lambda = lambda_1 for lambda_1 > lambda_0. The MP test of this hypothesis is to reject when

$\displaystyle R(X) = \left(\frac {\lambda_1}{\lambda_0}\right) ^ {\sum X_i} e^{-n(\lambda_1 - \lambda_0)} > C_\alpha$

where we choose C to get the desired size (NP Lemma). Now, R(X) is monotonically increasing in $\displaystyle \sum X_i$ due to the fact that $\displaystyle \lambda_1 > \lambda_0$. Equivalently, we reject H when $\displaystyle \sum X_i > k$ where k is chosen to give the desired size. Now, this test does not depend on the choice of lambda_1 because in choosing k we only make use of the distribution of $\displaystyle \sum X_i$ under H. Thus, reject when $\displaystyle \sum X_i > k$ is UMP for the given hypothesis (it is most powerful for all lambda > lambda_0).
• Feb 22nd 2011, 07:16 PM
Volga
Quote:

Originally Posted by theodds
Okay, it may be more instructive if I just do it. Also, ignore my initial post in this thread; for some reason I thought we were making a LRT, but everything after that is fine. Consider testing H: lambda = lambda_0 against H': lambda = lambda_1 for lambda_1 > lambda_0. The MP test of this hypothesis is to reject when

$\displaystyle R(X) = \left(\frac {\lambda_1}{\lambda_0}\right) ^ {\sum X_i} e^{-n(\lambda_1 - \lambda_0)} > C_\alpha$

where we choose C to get the desired size (NP Lemma). Now, R(X) is monotonically increasing in $\displaystyle \sum X_i$ due to the fact that $\displaystyle \lambda_1 > \lambda_0$. Equivalently, we reject H when $\displaystyle \sum X_i > k$ where k is chosen to give the desired size. Now, this test does not depend on the choice of lambda_1 because in choosing k we only make use of the distribution of $\displaystyle \sum X_i$ under H. Thus, reject when $\displaystyle \sum X_i > k$ is UMP for the given hypothesis (it is most powerful for all lambda > lambda_0).

It looks to me that this is roughly what I have done in my first post - up to where I said:

"Now then, I will reject Ho if this ratio is large. The ratio is larger the larger $\displaystyle \bar{Y}$ is. But how large should it be (at, say, the 5% significance level)?

Here is where I am stuck."

Since I need to answer part 2 of the question (reject or not), I do need to choose k. And I am back again to the same question: how large should k be?

Another thing you said, may I ask you to elaborate, because it is not obvious to me yet:

Quote:

Now, this test does not depend on the choice of lambda_1 because in choosing k we only make use of the distribution of $\displaystyle \sum X_i$ under H.
(under which H?)

I clearly see $\displaystyle \lambda_1$ in the formula above, so why is it that we only make use of the distribution of $\displaystyle \sum X_i$ under H? (This is where I have difficulty with uniformly most powerful tests in general.)
• Feb 22nd 2011, 07:22 PM
theodds
Quote:

Originally Posted by Volga
It looks to me that this is roughly what I have done in my first post - up to where I said:

"Now then, I will reject Ho if this ratio is large. The ratio is larger the larger $\displaystyle \bar{Y}$ is. But how large should it be (at, say, the 5% significance level)?

Here is where I am stuck."

Since I need to answer part 2 of the question (reject or not), I do need to choose k. And I am back again to the same question: how large should k be?

Another thing you said, may I ask you to elaborate, because it is not obvious to me yet:

(under which H?)

I clearly see $\displaystyle \lambda_1$ in the formula above, so why is it that we only make use of the distribution of $\displaystyle \sum X_i$ under H? (This is where I have difficulty with uniformly most powerful tests in general.)

We got rid of all the lambdas by noticing that the decision to reject was only based on how big $\displaystyle \sum X_i$ is. The "new" test becomes to reject when $\displaystyle \sum X_i > k$ for appropriately chosen k; we choose k to get the desired size, which only depends on the distribution of $\displaystyle \sum X_i$ under the null hypothesis, so regardless of the value of $\displaystyle \lambda_1$ we are getting the same k.
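
Putting numbers on this for the original question, a sketch of mine (stdlib only): find the smallest k with $\displaystyle P(T > k) \le 0.05$ under H0, where $\displaystyle T = \sum Y_i \sim \mbox{Poisson}(n\mu_0) = \mbox{Poisson}(15)$, and compare the observed total $\displaystyle n\bar{Y} = 10 \times 2.42 = 24.2$.

```python
import math

def poisson_cdf(k, lam):
    # P(T <= k) for T ~ Poisson(lam), summing the pmf term by term
    p = cdf = math.exp(-lam)
    for i in range(1, k + 1):
        p *= lam / i
        cdf += p
    return cdf

n, mu0, alpha, ybar = 10, 1.5, 0.05, 2.42
lam = n * mu0                           # T ~ Poisson(15) under H0
k = 0
while 1.0 - poisson_cdf(k, lam) > alpha:
    k += 1                              # smallest k with P(T > k) <= alpha
T = n * ybar                            # observed total, 24.2
print(k, T > k)                         # reject H0 iff T > k
```

The observed total is well beyond the critical value, so the answer to part 2 is: reject H0 at the 5% level.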
• Feb 22nd 2011, 09:37 PM
Volga
Oh I see that now, thanks for taking time to explain!

Would you mind showing how one would approach the second part of the question, viz "Suppose n=10, sample mean is 2.42, and $\displaystyle \mu_0$=1.5. Will you reject Ho at a significance level of 5%?" I am still hoping to have this completed as this is a good practice question for the exam. I promise never to attempt constructing a UMPT in real life after I am done with this Stat Inference exam )))