Consider the following:

An IQ test has been carried out, and the test result is *T*. However, since the test result is known to be normally distributed around the true IQ *Q*, the following reasoning is done:

The IQ of people taking this test is known to be normally distributed around 100. Specifically, the IQ is distributed with the following probability density:

$\displaystyle f_{100}(Q) = C_1e^{-A(Q-100)^2}$

where *A* and $\displaystyle C_1$ are constants. The test result, in turn, is known to be normally distributed around the IQ, with the following probability density:

$\displaystyle f_Q(T) = C_2e^{-B(T-Q)^2}$

where *B* and $\displaystyle C_2$ are constants. Taking all of this into account, the joint probability density that a person taking the test has the IQ *Q* and gets the test result *T* is

$\displaystyle f(Q, T) = f_{100}(Q)f_Q(T) = C_1e^{-A(Q-100)^2}C_2e^{-B(T-Q)^2}$

$\displaystyle = C_1C_2 e^{-(A+B)Q^2 + (A\cdot 200 + 2BT)Q -10000A - BT^2}$

$\displaystyle = C_Te^{-(A+B)\left(Q-\frac{A\cdot 100 + BT}{A+B}\right)^2}$
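The completing-the-square step can be checked numerically. The following sketch (with arbitrary illustrative values for *A*, *B*, and *T*, which are not given above) verifies that the joint density, viewed as a function of *Q* for a fixed *T*, is proportional to the completed-square Gaussian, and that its mean is $(A\cdot 100 + BT)/(A+B)$:

```python
import numpy as np

# Arbitrary illustrative constants (assumptions, not from the post):
# prior sd 15 -> A = 1/(2*15^2); test sd 5 -> B = 1/(2*5^2)
A, B = 1 / (2 * 15**2), 1 / (2 * 5**2)
T = 130.0                      # one fixed test result

Q = np.linspace(0.0, 200.0, 100_001)

# f(Q, T) for fixed T, as an (unnormalized) function of Q
joint = np.exp(-A * (Q - 100) ** 2) * np.exp(-B * (T - Q) ** 2)

# Completed-square form: C_T * exp(-(A+B) * (Q - center)^2)
center = (A * 100 + B * T) / (A + B)
completed = np.exp(-(A + B) * (Q - center) ** 2)

# The two forms agree up to a Q-independent factor C_T
ratio = joint / completed
assert np.allclose(ratio, ratio[0])

# The mean of Q under this density matches the center of the square
post_mean = (Q * joint).sum() / joint.sum()
print(center, post_mean)       # both ≈ 127: pulled from T = 130 toward 100
```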

where $\displaystyle C_T$ is a factor depending only on *T*. What this tells us is basically that if a person has got the test result *T*, then for that fixed *T* the most likely value for his IQ, which here also equals its expected value, is

$\displaystyle \hat Q = E(Q \mid T) = \frac{A\cdot 100 + BT}{A+B}$

(which is closer to 100 than *T* is). However, this method of estimating the IQ from a test result is not consistent: a sequence of such estimators, from repeated tests on the same person, would converge in probability to $\displaystyle (A\cdot 100 + BQ)/(A+B)$ and not to *Q*. (Note that for real IQ tests, *B* is most often many times bigger than *A*, so the shrinkage toward 100 is small.)
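The claimed limit can be illustrated with a small Monte Carlo sketch (the values of *A*, *B*, and the true IQ below are arbitrary assumptions): averaging the adjusted estimate over many independent tests on the same person settles at $(A\cdot 100 + BQ)/(A+B)$, not at *Q*, while the raw test results themselves average to *Q*:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative values (assumptions, not from the post):
A = 1 / (2 * 15**2)    # prior: Q ~ N(100, 15^2)
B = 1 / (2 * 5**2)     # test:  T | Q ~ N(Q, 5^2)
Q_true = 130.0         # the person's true IQ in this simulation

# Many independent test results for the same person
T = rng.normal(Q_true, 5.0, size=200_000)

# The adjusted estimate for each test, and its long-run average
adjusted = (A * 100 + B * T) / (A + B)
print(adjusted.mean())                    # ≈ 127, the shrunken value
print((A * 100 + B * Q_true) / (A + B))   # ≈ 127, the limit in probability

# The raw test results, by contrast, average to the true IQ
print(T.mean())                           # ≈ 130
```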

Now, what I'm wondering is: would you say that this inconsistency is due to a bias, or what would you say it is caused by? And when tests like these are carried out, what is most often used to estimate the measured parameter: the actual test value or the "adjusted" test value?