Consider the following:
An IQ test has been carried out, and the test result is T. However, since the test result is only known to be normally distributed around the true IQ Q, one can reason as follows:
The IQ of people taking this test is known to be normally distributed around 100. More precisely, the IQ is distributed with the following probability density:

$$f(Q) = A\, e^{-(Q-100)^2 / (2\sigma_A^2)},$$

where A and σ_A are constants. The test result is then known to be normally distributed around the IQ, with the following probability density:

$$f(T \mid Q) = B\, e^{-(T-Q)^2 / (2\sigma_B^2)},$$
where B and σ_B are constants. Taking all of this into account, the probability density for a person taking the test to have the IQ Q and to get the test result T is

$$f(Q, T) = f(Q)\, f(T \mid Q) = C(T)\, e^{-(Q - \hat{Q}(T))^2 / (2\sigma^2)},$$

with

$$\hat{Q}(T) = \frac{\sigma_B^2 \cdot 100 + \sigma_A^2\, T}{\sigma_A^2 + \sigma_B^2}, \qquad \sigma^2 = \frac{\sigma_A^2\, \sigma_B^2}{\sigma_A^2 + \sigma_B^2},$$

where C(T) is a factor that depends only on T.
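For completeness, this follows from completing the square in Q in the sum of the two exponents (g(T) below is just shorthand for whatever remains, a term that depends only on T):

$$\frac{(Q-100)^2}{2\sigma_A^2} + \frac{(T-Q)^2}{2\sigma_B^2} = \frac{(Q - \hat{Q}(T))^2}{2\sigma^2} + g(T),$$

so that C(T) = A·B·e^{−g(T)}.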
What this tells us is basically that if a person has got the test result T, the most likely value of his IQ, and also its expected value given T, is

$$\hat{Q}(T) = \frac{\sigma_B^2 \cdot 100 + \sigma_A^2\, T}{\sigma_A^2 + \sigma_B^2},$$

which, for any fixed value of T, is closer to 100 than T itself.
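As a concrete illustration, here is a minimal Python sketch of this adjustment. The numbers are assumed for the example only (σ_A = 15 for the population spread, σ_B = 5 for the test noise, and a test result T = 130); they are not taken from any actual test.

```python
# Minimal sketch of the adjustment above: shrink a single test result T
# toward the population mean of 100.  sigma_A (population spread) and
# sigma_B (test noise) are assumed example values, not real test figures.

def adjusted_iq(T, sigma_A=15.0, sigma_B=5.0, prior_mean=100.0):
    """Most likely / expected IQ given a single test result T."""
    w = sigma_A**2 / (sigma_A**2 + sigma_B**2)  # weight put on the test result
    return w * T + (1.0 - w) * prior_mean

print(adjusted_iq(130.0))  # 127.0 -- pulled three points back toward 100
```

With these numbers the weight on the test result is 0.9, so a score of 130 would be read as an estimated IQ of 127.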
However, this method of estimating the IQ from a test result is not consistent, since a sequence of such estimators (say, the adjustment applied to the average of more and more repeated test results) would converge in probability to

$$\frac{\sigma_B^2 \cdot 100 + \sigma_A^2\, Q}{\sigma_A^2 + \sigma_B^2}$$

and not to Q. (Note that for real IQ tests, B is most often many times bigger than A, i.e. the test noise σ_B is much smaller than the population spread σ_A, so the adjustment is small.)
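A small simulation sketch of this (same assumed numbers as above: σ_A = 15, σ_B = 5, and a person whose true IQ is fixed at Q = 130) shows the adjusted estimate settling near 127 rather than 130 as the number of averaged test results grows:

```python
import random

# Simulation sketch of the consistency problem.  A person with a fixed true
# IQ retakes the test many times; the single-test adjustment is applied to
# the running average of the results.  All numbers are assumed example values.

SIGMA_A, SIGMA_B, PRIOR_MEAN = 15.0, 5.0, 100.0  # population spread, test noise
TRUE_Q = 130.0                                    # the person's actual IQ

def adjusted_iq(t):
    w = SIGMA_A**2 / (SIGMA_A**2 + SIGMA_B**2)
    return w * t + (1.0 - w) * PRIOR_MEAN

random.seed(0)
results = []
for n in (10, 1_000, 100_000):
    while len(results) < n:
        results.append(random.gauss(TRUE_Q, SIGMA_B))  # one noisy test result
    avg = sum(results) / len(results)
    print(f"n = {n:>6}: adjusted estimate = {adjusted_iq(avg):.2f}")

# The estimates approach 127 = (SIGMA_B**2*100 + SIGMA_A**2*TRUE_Q) / (SIGMA_A**2 + SIGMA_B**2),
# not the true value 130, which is the inconsistency described above.
```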
Now, what I'm wondering is: would you say that this inconsistency is due to a bias, or how would you describe its cause? And when tests like these are carried out, what is most often used to estimate the measured parameter: the actual test value, or the "adjusted" test value?