Originally Posted by **Trido**

Hello,

I have come across some computer code that attempts what looks like a Pearson's Chi-Square Test for normality.

The program has good knowledge of the mean and variance of the quantity being tested (it maintains them over time through some fancy converging filter). However, rather than computing the statistic from expectations, it simply substitutes these estimates and evaluates the statistic for each incoming measurement.

That is, measurements are processed one by one, and any measurement that is not consistent with the estimated mean and variance gets chucked away.
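To make this concrete, here is a rough Python sketch of what the code appears to be doing; the names, the example numbers, and the 95% cutoff are my guesses rather than the actual program:

```python
# Sketch of the gating logic as I understand it. The estimated mean and
# variance would come from the program's tracking filter; 3.84 is the
# 95th percentile of chi-square with 1 df (my assumed significance level).
CHI2_CUTOFF_1DF_95 = 3.84

def accept(measurement, estimated_mean, estimated_var):
    chi2_value = (measurement - estimated_mean) ** 2 / estimated_var
    return chi2_value <= CHI2_CUTOFF_1DF_95  # otherwise it is chucked away

measurements = [9.8, 10.1, 14.7, 10.0]
kept = [m for m in measurements if accept(m, 10.0, 0.25)]
# kept == [9.8, 10.1, 10.0]; 14.7 gives a statistic of 88.36 and is rejected
```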

The statistic is programmed as:

$\displaystyle \chi^2 = \frac{(\text{Measurement} - \text{EstimatedMean})^2}{\text{EstimatedVariance}}$

And the significance test is performed using one degree of freedom.
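If I am reading the code right, the test amounts to comparing that value against the chi-square distribution with one degree of freedom, along the lines of this sketch (the `alpha = 0.05` level and the numbers are my assumptions, not taken from the actual program):

```python
from scipy.stats import chi2

# Compare the statistic against chi-square with 1 degree of freedom.
# alpha = 0.05 is my guess at the significance level the program uses.
alpha = 0.05
stat = (14.7 - 10.0) ** 2 / 0.25   # (measurement - est. mean)^2 / est. variance
p_value = chi2.sf(stat, df=1)      # survival function, i.e. 1 - CDF
reject = p_value < alpha           # True here, so 14.7 would be chucked away
```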

My question is: how is this derived from the basic Pearson's Chi-Square formula, in which expected frequencies appear? And is this a good test for normality?

Thank you.