1. ## Likelihood Ratio Test

1) The Neyman-Pearson lemma says that, for a fixed alpha, its test has the highest power (it is the most powerful test).
How about the "Likelihood Ratio Test" (which seems to be a generalization of the N-P lemma)? What does it give? Does it also give the test with the highest power? Given any alpha, there are infinitely many decision rules. What's so special about the decision rule given by the Likelihood Ratio Test?
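For concreteness, here is the general statistic I have in mind (this is the standard textbook definition, written out so it's clear what I'm asking about):

```latex
\lambda(\mathbf{x}) \;=\;
\frac{\displaystyle\sup_{\theta \in \Theta_0} L(\theta \mid \mathbf{x})}
     {\displaystyle\sup_{\theta \in \Theta} L(\theta \mid \mathbf{x})},
\qquad \text{reject } H_0 \text{ when } \lambda(\mathbf{x}) \le k,
```

where $k$ is chosen so that the test has level $\alpha$. My question is what is special about rejecting for small values of $\lambda$, compared with all the other level-$\alpha$ rules.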

2) There is a theorem related to the likelihood ratio test:

Now I don't understand: what is the meaning of "free parameters" as stated in the theorem?
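To make my question concrete, here is my guess at what the counting might look like (my own example, not from the book), so someone can tell me if this is the right reading:

```latex
X_1,\dots,X_n \sim N(\mu,\sigma^2), \text{ both unknown:} \\
\Theta = \{(\mu,\sigma^2):\ \mu \in \mathbb{R},\ \sigma^2 > 0\}
\quad (2 \text{ free parameters}) \\
H_0:\ \mu = \mu_0 \ \Rightarrow\
\Theta_0 = \{(\mu_0,\sigma^2):\ \sigma^2 > 0\}
\quad (1 \text{ free parameter}) \\
\text{difference} = 2 - 1 = 1
\ \Rightarrow\ -2\ln\lambda \xrightarrow{d} \chi^2_1 \text{ under } H_0 \,?
```

Is that what "free parameters" is counting?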

Thank you for explaining!

Note: This topic is also being discussed in the SOS mathematics cyberboard.

2. That's page 553 of Wackerly. I gave that lecture 2 weeks ago.
I'm doing Chapter 11, Regression, now. Doesn't your professor answer these questions?

3. But I can't really find the answers to my problems in Wackerly. And no, my instructor didn't answer these in lecture (unfortunately).

In fact, for 2), my instructor gave a different version of the theorem:

Is this CORRECT?

4. OK, I think I have found the answer to 1) on p. 553 of Wackerly, although it's not a very definite statement: it says that "for most practical problems, the likelihood ratio method produces the best possible test, in terms of power".
So I would interpret it as: in most cases, the likelihood ratio test produces the test with the highest power for each fixed alpha.
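To convince myself of this interpretation in at least one case, I tried a small sketch (my own, not from the book), assuming the known-variance normal model: for H0: mu = mu0 with sigma known, -2 ln(lambda) works out to the squared z-statistic, so the LRT rejects exactly when the usual z-test does.

```python
import math

# A sketch (my own example, not from Wackerly): X_1..X_n iid N(mu, sigma^2)
# with sigma KNOWN, testing H0: mu = mu0 against H1: mu != mu0.

def loglik(xs, mu, sigma):
    """Gaussian log-likelihood with known sigma."""
    n = len(xs)
    return (-n / 2) * math.log(2 * math.pi * sigma ** 2) \
        - sum((x - mu) ** 2 for x in xs) / (2 * sigma ** 2)

def neg2_log_lambda(xs, mu0, sigma):
    """-2 ln(lambda), where lambda = sup_{Theta_0} L / sup_{Theta} L.
    Under H0 the likelihood is maximized at mu0 itself;
    unrestricted, the MLE of mu is the sample mean."""
    xbar = sum(xs) / len(xs)
    return -2 * (loglik(xs, mu0, sigma) - loglik(xs, xbar, sigma))

def z_stat(xs, mu0, sigma):
    """The usual z-statistic for the same hypotheses."""
    n = len(xs)
    xbar = sum(xs) / n
    return (xbar - mu0) / (sigma / math.sqrt(n))

xs = [2.1, 1.8, 2.5, 2.2, 1.9]  # made-up sample
lrt = neg2_log_lambda(xs, mu0=2.0, sigma=0.5)
z = z_stat(xs, mu0=2.0, sigma=0.5)
print(lrt, z ** 2)  # the two agree: -2 ln(lambda) = z^2
```

So at least here, "reject for small lambda" orders the samples exactly as the z-test does, which fits the book's claim about power.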

But I am still puzzled by 2)...