Originally Posted by **mongrel73**

Someone is making four measurements:

$\displaystyle y_1$, $\displaystyle y_2$ of $\displaystyle \lambda$, $\displaystyle \mu$ respectively, $\displaystyle y_3$ and $\displaystyle y_4$ both of $\displaystyle \lambda + \mu$.

The measurements are subject to independent, normally distributed random errors with known variance $\displaystyle {\sigma}^2$.

He suspects that there is a constant bias $\displaystyle \beta$ affecting every measurement.

How do you show that this bias is estimated by $\displaystyle y_1+y_2-\tfrac{1}{2}(y_3+y_4)$?
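One way to check this (a sketch, assuming the bias enters every measurement additively, i.e. $\displaystyle y_1 = \lambda + \beta + e_1$, $\displaystyle y_2 = \mu + \beta + e_2$, $\displaystyle y_3 = y_4 = \lambda + \mu + \beta + e_i$) is to write the model as $\displaystyle y = X\theta + e$ with $\displaystyle \theta = (\lambda, \mu, \beta)$ and solve the normal equations $\displaystyle X^{\mathsf{T}}X\,\hat\theta = X^{\mathsf{T}}y$. The quick numerical sketch below solves them exactly with `fractions` and confirms that the $\displaystyle \beta$ component of the least-squares solution coincides with $\displaystyle y_1+y_2-\tfrac{1}{2}(y_3+y_4)$; the variable names are just illustrative:

```python
from fractions import Fraction

# Assumed design: y_k = X[k] . (lambda, mu, beta) + error.
# y1 measures lambda, y2 measures mu, y3 and y4 measure lambda + mu,
# and the suspected constant bias beta is added to every measurement.
X = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 1],
     [1, 1, 1]]

def det3(M):
    """Determinant of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def least_squares_beta(y):
    """Solve the normal equations X'X theta = X'y by Cramer's rule;
    return the beta (third) component of the least-squares solution."""
    XtX = [[sum(X[k][i] * X[k][j] for k in range(4)) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(4)) for i in range(3)]
    # Cramer's rule: replace the beta column of X'X with X'y.
    Mb = [row[:2] + [Xty[i]] for i, row in enumerate(XtX)]
    return Fraction(det3(Mb), det3(XtX))

# Arbitrary example data: beta-hat agrees with y1 + y2 - (y3 + y4)/2.
y = [Fraction(v) for v in (3, 5, 7, 9)]
print(least_squares_beta(y))            # → 0
print(y[0] + y[1] - (y[2] + y[3]) / 2)  # → 0
```

Eliminating $\displaystyle \hat\lambda$ and $\displaystyle \hat\mu$ from the same normal equations by hand gives the closed form directly, and the estimator is unbiased: taking expectations, $\displaystyle E[y_1+y_2-\tfrac{1}{2}(y_3+y_4)] = (\lambda+\beta)+(\mu+\beta)-(\lambda+\mu+\beta) = \beta$.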