I am performing a small experiment and I have some problems with calculating the standard deviation.
My experiment consists of a number of measurements (N), and the result of each measurement is either 1 or 0.
Now for instance, N=100 and the result is 1 in 50% of all measurements.
The standard deviation is then:
u=0.5 (mean)
1/100*(50*(1-u)^2+50*(0-u)^2)= (50/4+50/4)/100 = 1/4.
But I want to decrease my SD, so I need to increase N. However, this does not have any effect. When N = 500:
1/500*(250/4+250/4)=1/4.
Obviously I am doing something wrong. Can someone tell me how to calculate the SD in the correct way?
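In code, the calculation above looks like this (a minimal sketch; the function name and the way the sample is built are just for illustration). It reproduces the same result for both sample sizes:

# Reproduce the calculation in the post: half the measurements are 1, half are 0.
def variance(values):
    mean = sum(values) / len(values)
    return sum((x - mean) ** 2 for x in values) / len(values)

for n in (100, 500):
    sample = [1] * (n // 2) + [0] * (n // 2)
    print(n, variance(sample))  # 0.25 for both N = 100 and N = 500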
Small correction: What you've been calculating is the variance, not the standard deviation. The standard deviation is the square root of the variance; in your example it is sqrt(1/4) = 1/2.
Are you saying that 1/2 the measurements are 1 and the other half are 0, regardless of sample size? Why then should the variance of the sample change?
What exactly are you trying to do with this sample?
dof,
I think you are confusing the standard deviation of the sample and the standard deviation of the mean.
The standard deviation of the sample is an estimate of the spread in the distribution. Getting more data doesn't usually make it smaller; you just get a better estimate of the "true" standard deviation.
The standard deviation of the mean is related to the standard deviation of the distribution by
σ_mean = σ / sqrt(N).
It decreases with larger sample sizes.
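To make the distinction concrete, here is a small sketch (the names and the sample construction are illustrative, not from the thread): the sample standard deviation stays at 0.5 no matter how large N gets, while the standard deviation of the mean shrinks like 1/sqrt(N).

import math

# Sample standard deviation of a list of 0/1 measurements.
def sample_sd(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((x - mean) ** 2 for x in values) / len(values))

for n in (100, 500, 10000):
    sample = [1] * (n // 2) + [0] * (n // 2)
    sd = sample_sd(sample)          # stays at 0.5 regardless of n
    sd_of_mean = sd / math.sqrt(n)  # shrinks as n grows: 0.05, ~0.022, 0.005
    print(n, sd, sd_of_mean)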