I'd like to ask you folks about sampling, and about the relationship between sample size and the accuracy of a generalization about a population. What I most want to know is: under what circumstances does the accuracy of a generalization depend on the ratio of the sample size to the total population size, as opposed to simply the sample size alone?
Suppose 1,000 squirrels populate a fenced-in park, and you want to determine what percentage of that population has been infected with Yersinia pestis. To estimate this, you have available a random sample of $n_1$ squirrels.
Now suppose that 10,000 squirrels populate a second fenced-in park, and you also want to determine what percentage of that population is infected. For this population, you have available a random sample of $n_2$ squirrels.
If the two samples are the same size ($n_1 = n_2$), what can we say about the comparative accuracy of the two estimates? And how is accuracy even measured here?
Also, how large would the second sample, $n_2$, have to be relative to the first, $n_1$, in order for the accuracy of the two estimates to be about the same?
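To make my question more concrete, here's a rough simulation sketch of what I have in mind (the 10% infection rate is a made-up assumption just for illustration, and `estimate_se` is my own hypothetical helper; I'm sampling without replacement since each park's population is finite):

```python
import random

def estimate_se(pop_size, infected, sample_size, trials=5000, seed=0):
    """Empirical standard error of the sample proportion when
    drawing without replacement from a finite population."""
    rng = random.Random(seed)
    population = [1] * infected + [0] * (pop_size - infected)
    estimates = []
    for _ in range(trials):
        sample = rng.sample(population, sample_size)  # no replacement
        estimates.append(sum(sample) / sample_size)
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return var ** 0.5

# Same sample size (100) drawn from the 1,000-squirrel park and the
# 10,000-squirrel park, each assumed to be 10% infected.
se_small_park = estimate_se(1000, 100, 100)
se_large_park = estimate_se(10000, 1000, 100)
print(se_small_park, se_large_park)
```

When I run something like this, the two standard errors come out quite close, which is part of why I'm confused about when the population size matters at all.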
I hope I've been clear enough about what I'm asking, because to be honest I'm unsure how to articulate it. Anyway, your help would be much appreciated!