- Apr 7th 2009, 10:44 PM, matheagle
THEY are the same. Read up on what $\displaystyle \alpha$ is under a composite null hypothesis $\displaystyle H_0:\mu\le 530$.

IT is the sup, which will occur when $\displaystyle \mu=530$.
- Apr 7th 2009, 10:54 PM, kingwinner
- Apr 7th 2009, 11:12 PM, matheagle
Nope.

Read about how alpha is defined under a composite hypothesis.
- Apr 7th 2009, 11:48 PM, kingwinner
By definition, alpha = P(reject Ho|Ho is true)

Is there going to be any difference when Ho is composite?

[by the way, is this discussed in Wackerly? If so, can you please let me know the section?]

Thanks!
- Apr 8th 2009, 05:49 AM, matheagle
It's mentioned in English there.

On page 519 it's mentioned, and in another place too.

It is the supremum, NOT the max. READ the paragraph starting with 'So why did....'

$\displaystyle \alpha = \sup_{\theta\in H_0}P(\text{reject } H_0 \mid H_0 \text{ is true}) $

which occurs at the boundary, since this is an increasing or decreasing function.

So this is the same as the simple test.

In that example it is a max, since it's a closed interval (extreme value theorem).

BUT if the null hypothesis is $\displaystyle \mu>3$ then you look at $\displaystyle \alpha$ when $\displaystyle \mu =3$.
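A quick numerical check of this point (a sketch only: it uses a one-sided z-test with $\sigma$ treated as known, rather than the t-test of the SAT example, so that the stdlib normal distribution suffices; the values 530, 57, 26 are taken from that example):

```python
from statistics import NormalDist
from math import sqrt

def power(mu, mu0=530.0, sigma=57.0, n=26, alpha=0.05):
    """P(reject H0 | true mean mu) for the one-sided z-test that
    rejects when (xbar - mu0) / (sigma / sqrt(n)) > z_{1-alpha}.
    Illustrative: sigma is assumed known so the normal applies."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    se = sigma / sqrt(n)
    # Xbar ~ Normal(mu, se), so P(reject) = P(Xbar > mu0 + z_crit*se)
    return 1 - NormalDist(mu, se).cdf(mu0 + z_crit * se)

# The rejection probability is increasing in mu, so over the
# composite null mu <= 530 its supremum is attained at the
# boundary mu = 530, where it equals alpha exactly.
print(power(510), power(520), power(530))
```

The printed values increase, and the last one is exactly 0.05: the sup over the composite null sits at the boundary, matching the simple test.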

- Apr 8th 2009, 09:21 PM, kingwinner
OK, I have read that page and it clears most of my doubts.

So under simple hypothesis, alpha is defined as on p.491 of Wackerly.

But under a composite hypothesis, we DEFINE alpha by

$\displaystyle \alpha = \sup_{\theta\in H_0}P(\text{reject } H_0 \mid H_0 \text{ is true})$, right?? And this is the more general definition of alpha, right?
- Apr 8th 2009, 10:25 PM, matheagle
- Apr 9th 2009, 11:05 PM, kingwinner
Let Ho: theta=theta_o

On p.511 of Wackerly, there are statements to this effect:

(i) Reject Ho if and only if theta_o lies outside the 100(1-alpha)% confidence interval for theta.

(ii) Fail to reject Ho if and only if theta_o lies inside the 100(1-alpha)% confidence interval for theta.
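The (i)/(ii) duality can be checked numerically. A sketch for the two-sided z-test of Ho: theta=theta_o with $\sigma$ assumed known (the numbers 548, 57, 26 are borrowed from the SAT example purely for illustration):

```python
from statistics import NormalDist
from math import sqrt

def z_test_rejects(xbar, theta0, sigma, n, alpha=0.05):
    """Two-sided z-test of H0: theta = theta0 at level alpha."""
    z = (xbar - theta0) / (sigma / sqrt(n))
    return abs(z) > NormalDist().inv_cdf(1 - alpha / 2)

def ci_excludes(xbar, theta0, sigma, n, alpha=0.05):
    """Does theta0 fall outside the 100(1-alpha)% CI for theta?"""
    half = NormalDist().inv_cdf(1 - alpha / 2) * sigma / sqrt(n)
    return not (xbar - half <= theta0 <= xbar + half)

# Reject H0  <=>  theta0 lies outside the CI, for every theta0 tried:
for theta0 in range(500, 600, 5):
    assert z_test_rejects(548, theta0, 57, 26) == ci_excludes(548, theta0, 57, 26)
```

Both functions invert the same inequality |xbar - theta0| > z * sigma/sqrt(n), which is why the decisions always agree here.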

Is this relationship between hypothesis testing and confidence intervals ALWAYS true?

If so, then we can always use confidence intervals to decide whether to reject Ho or not. For every hypothesis-testing question, we can answer it simply by computing the confidence interval. So WHY do we need to develop a whole new theory about hypothesis testing (rejection region, p-value, etc.) if the two theories are completely equivalent?

Thanks for clearing my doubts!

Note: this is also being discussed in another forum.
- Apr 11th 2009, 09:32 PM, kingwinner
What I was trying to ask is: if we can draw exactly the SAME conclusions with confidence intervals ONLY, why bother developing a whole new theory of hypothesis testing? We don't need a new rejection-region method and p-value method to draw conclusions.

In the example above:

**"A random sample of 26 students enrolled in School A was taken and their SAT scores were recorded. The sample mean was 548 with a sample standard deviation s=57. The principal of School A claims that the mean SAT score of students in her school is higher than the mean SAT score of all the students in City B, which is known to be 530. (City B is the city where School A is located.) Does the data support the principal's claim? Assume alpha=0.05."**

To answer this, we used Ho: μ=530 vs. Ha: μ>530 before. But actually we can completely forget about Ho and Ha, and simply compute a CONFIDENCE INTERVAL and see whether it contains 530 to decide whether or not the data support the principal's claim, right?? So we don't need the new techniques and theory of Ch.10 at all.
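For concreteness, here is the SAT example worked both ways, as a sketch. The critical value $t_{0.05,25} \approx 1.708$ is hardcoded from a standard t table, since the Python stdlib has no t quantile function:

```python
from math import sqrt

# SAT example: n = 26, xbar = 548, s = 57, H0: mu = 530 vs Ha: mu > 530.
n, xbar, s, mu0 = 26, 548.0, 57.0, 530.0
t_crit = 1.708                    # t_{0.05, 25} from a t table

se = s / sqrt(n)                  # about 11.18
t_stat = (xbar - mu0) / se        # about 1.61

# Test decision: reject H0 only if t_stat exceeds the critical value.
reject = t_stat > t_crit          # False: fail to reject at alpha = 0.05

# Equivalent one-sided CI view: reject iff the 95% lower confidence
# bound for mu exceeds 530.
lower_bound = xbar - t_crit * se  # about 528.9, which is below 530
print(t_stat, reject, lower_bound)
```

Both routes fail to reject here, but note that for the duality to hold with a one-sided test, the matching one-sided confidence bound must be used, not the usual two-sided interval.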

Are the theories of confidence intervals and hypothesis testing completely equivalent in terms of drawing conclusions?

Thanks for answering!