How do you figure out sample size given a confidence interval and range of deviation?

Say you want to test a banner advertising campaign. Let's say it has a 2% click-through rate and the ± range of deviation from that is 0.25%. What sample size would you need at 90% confidence? What about at 95%?

Could someone explain how to work this out?

Re: How do you figure out sample size given a confidence interval and range of deviat

Hey fstep.

The first thing you need to know is what kind of distribution is used to generate the interval. You also need to know the mean of the estimator and its standard deviation, which will be in terms of your sample size.

Since this is a proportion, if you have a big enough sample size you can use the asymptotic result, which means the standardized statistic follows a Normal distribution with mean 0 and variance 1.

For proportions, using the asymptotic distribution (just think Normal distribution result), we take the variance to be [p_hat*(1 - p_hat)]/n, where p_hat is our 0.02 (the 2% figure), and the interval will be [0.02 - SQRT(var)*a, 0.02 + SQRT(var)*a].
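To make that concrete, here is a sketch of how such an interval is built; the n = 10,000 is just an illustrative value I picked, not a number from the question:

```python
import math

p_hat = 0.02   # observed click-through rate
n = 10_000     # illustrative sample size (an assumption for this example)
a = 1.96       # approximate 95% standard Normal quantile

var = p_hat * (1 - p_hat) / n
margin = a * math.sqrt(var)
lower, upper = p_hat - margin, p_hat + margin
print(f"95% interval: [{lower:.5f}, {upper:.5f}]")
```

Notice the margin shrinks like 1/SQRT(n), which is why solving for n is possible once you fix the margin you want.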

Now the value of a depends on the level of confidence: for 95% it's roughly 1.96. More confidence means a wider interval, less confidence means a narrower interval. We can get the value of a from a computer program or statistical tables, but then we want to get n from our interval.
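If you don't have tables handy, the a value can be pulled from, for example, Python's standard library (NormalDist has been in the stdlib since Python 3.8):

```python
from statistics import NormalDist

# For a two-sided interval at confidence level c, a is the (1 + c)/2
# quantile of the standard Normal distribution.
z90 = NormalDist().inv_cdf(0.95)   # 90% confidence -> 0.95 quantile
z95 = NormalDist().inv_cdf(0.975)  # 95% confidence -> 0.975 quantile
print(round(z90, 3), round(z95, 3))  # roughly 1.645 and 1.96
```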

So the lower interval point is L = 0.02 - SQRT(var)*a and the upper point is U = 0.02 + SQRT(var)*a, where var = [0.02*(1 - 0.02)]/n. You want the margin SQRT(var)*a to be at most 0.0025, so rearranging gives n = a^2 * 0.02*(1 - 0.02) / 0.0025^2. You can use either L or U (it doesn't matter which under the asymptotic result), and rounding up gives the minimum sample size.
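Putting that rearrangement together for the numbers in the question (p_hat = 0.02, margin 0.0025) — a sketch, with a taken from the standard Normal as above:

```python
import math
from statistics import NormalDist

def min_sample_size(p_hat: float, margin: float, confidence: float) -> int:
    """Smallest n such that a * sqrt(p_hat*(1 - p_hat)/n) <= margin."""
    a = NormalDist().inv_cdf((1 + confidence) / 2)  # two-sided quantile
    return math.ceil(a**2 * p_hat * (1 - p_hat) / margin**2)

n90 = min_sample_size(0.02, 0.0025, 0.90)
n95 = min_sample_size(0.02, 0.0025, 0.95)
print(n90, n95)  # roughly 8,485 and 12,047
```

As expected, the 95% level needs a larger sample than the 90% level, since a bigger a must be offset by a bigger n to keep the same margin.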

If another distribution or statistic is used, then you need to know what it is and do roughly the same thing: write down the interval and solve it for n.