Hello,
I have many samples normally distributed with mean=0.
When I take the absolute values of all the samples and calculate the average A, I find that the StdDev is always 1.254A.
Can you explain why?
Thanks,
Shai
Please attend to precision when asking questions of this nature. A "sample" is a set of data collected from a statistical population. A sample consists of elements, typically called observations. You claim to have "many samples normally distributed with mean=0." I am guessing that you meant to say you have many observations sampled from a population that is normally distributed with mean 0? Or does the data in your sample appear normal and the sample mean is zero?
Clear this up and we can proceed.
Thanks for your reply and explanation!
Your guess is correct: the population is normally distributed with mean = 0, and I have many observations.
I should add that the constant 1.254 may be somewhat inaccurate due to the low resolution of my measurements.
Thanks!
Hey, again. Sorry I have been inundated with work. Let's get back to this problem.
Under the null hypothesis of normality, Geary's Ratio, the average absolute deviation from the mean (the mean in our case is 0) divided by the standard deviation of the sample, approaches sqrt(2/pi) ≈ 0.7979 as N → ∞. Rigorous proof/justification of Geary's Ratio, also called Geary's Kurtosis, is available on the internet. The reading is pretty thick, though.
We rearrange the terms in Geary's Ratio so that, given a normal population, mean zero, and large N, the standard deviation equals the average absolute deviation from the mean divided by (roughly) 0.7979. Note that the reciprocal of 0.7979 is approximately 1.254 (exactly, sqrt(pi/2) ≈ 1.2533). So SD is always about 1.254A for big N, mean 0, and a normal population.
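A quick simulation makes this concrete. The sketch below (illustrative only; the sample size and sigma are arbitrary choices, not from your data) draws observations from a normal population with mean 0, computes the average absolute value A, and checks that SD/A lands near sqrt(pi/2) ≈ 1.2533:

```python
import math
import random

random.seed(42)
n = 100_000
sigma = 3.0  # arbitrary true SD; the ratio is the same for any sigma
xs = [random.gauss(0.0, sigma) for _ in range(n)]

# A: the average of the absolute values (absolute deviation from the mean 0)
A = sum(abs(x) for x in xs) / n

# SD computed about the known mean 0
sd = math.sqrt(sum(x * x for x in xs) / n)

ratio = sd / A
print(ratio)                     # close to 1.2533 for large n
print(math.sqrt(math.pi / 2))   # the theoretical constant
```

Rerunning with a different sigma or a larger n shows the ratio staying near 1.2533, while non-normal data (say, uniform or heavy-tailed) would give a noticeably different constant.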