Originally Posted by **chopet**

If $\displaystyle x_1$ and $\displaystyle x_2$ are the values of a random sample of size 2 from a population having a uniform density with $\displaystyle \alpha = 0$ and $\displaystyle \beta = \theta$, find k such that:

$\displaystyle 0 < \theta < k(x_1+x_2)$

is a $\displaystyle (1- \alpha )100$% confidence interval for $\displaystyle \theta $ when:

(a) $\displaystyle \alpha < 0.5$

(b) $\displaystyle \alpha > 0.5$

This question is confusing: the symbol $\displaystyle \alpha$ appears both as the lower endpoint of the uniform density and in the confidence level $\displaystyle (1-\alpha)100$%. Can someone explain what it is asking?
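As I understand it, the problem wants a constant $k$ such that the random interval $(0,\ k(X_1+X_2))$ covers $\theta$ with probability $1-\alpha$, i.e. $P\big(k(X_1+X_2) > \theta\big) = 1-\alpha$. Here is a quick Monte Carlo sketch to check a candidate $k$; the closed-form values in the comments are my own attempt, derived from the triangular density of $X_1+X_2$, not part of the quoted problem:

```python
import math
import random

def coverage(k, theta=1.0, trials=100_000, seed=0):
    """Monte Carlo estimate of P(theta < k*(X1+X2)) for X1, X2 ~ U(0, theta)."""
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(trials)
        if k * (rng.uniform(0, theta) + rng.uniform(0, theta)) > theta
    )
    return hits / trials

# Candidate k from setting P(X1+X2 > theta/k) = 1 - alpha, using the
# triangular density of the sum (my derivation, hedged):
#   alpha < 0.5:  k = 1 / sqrt(2*alpha)
#   alpha > 0.5:  k = 1 / (2 - sqrt(2*(1 - alpha)))
alpha = 0.05
k = 1 / math.sqrt(2 * alpha)
print(coverage(k))  # should be close to 1 - alpha = 0.95
```

The two cases arise because $\theta/k$ lands on different pieces of the triangular density: below $\theta$ when $\alpha < 0.5$, above $\theta$ when $\alpha > 0.5$.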