The problem is that there are two entities out there calling themselves "standard deviation":

$s_n = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}$,

which is the square root of the sample variance if $x_1, \dots, x_n$ is a sample of size $n$ from some population, or equivalently the square root of the variance of a random variable which takes the values $x_1, \dots, x_n$ with equal probability.

$s_{n-1} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$,

which is the square root of the bias-corrected estimate of the variance of the population that $x_1, \dots, x_n$ is a sample of size $n$ from.

Now which you use is up to you and could depend on what you wish to do with it. But apparently $s_{n-1}$ is now commonly used in software packages as the definition of standard deviation - with poor justification.

However, for large samples the two are very close, so the difference does not matter much; but if I were you I would stick to $s_n$.
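As a quick illustration of the two definitions, here is a minimal sketch in Python (the function names are my own) computing both for a small data set; the only difference is whether the sum of squared deviations is divided by $n$ or by $n-1$:

```python
import math

def sd_n(xs):
    # Square root of the sample variance: divide by n.
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def sd_n_minus_1(xs):
    # Square root of the bias-corrected variance estimate: divide by n - 1.
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sd_n(data))          # 2.0
print(sd_n_minus_1(data))  # about 2.138
```

For this sample of size 8 the two values already differ by only about 7%, and the gap shrinks as $n$ grows, since $\sqrt{n/(n-1)} \to 1$.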

RonL