Hello. I'm not sure about the complexity of the question I'm asking, but it's nothing like what I did in high school, so I hope this is the correct forum.
I have sets of 300 integers whose values range between 330 and 370. The mean typically comes out in the range 350-360, with a CV of about 1%.
Because these values are obtained through destructive testing, a small sample size is preferred. I want to know how to quantify the relationship between the sample size and the expected deviation of the sample mean/CV from those of the entire population. Any pointers in the right direction, or a note on what additional parameters are required, would be great.
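To make the question concrete, here is a minimal simulation sketch. The population values below are synthetic stand-ins (roughly matching the mean and CV I described), not real data. It compares the textbook standard error of the sample mean, sigma/sqrt(n), with a finite-population correction (since each set has only N = 300 values and sampling is without replacement), against an empirical check from repeated sampling:

```python
import random
import statistics

# Synthetic stand-in population: 300 integers near mean 355, CV ~1%
random.seed(42)
population = [round(random.gauss(355, 3.5)) for _ in range(300)]
pop_sd = statistics.pstdev(population)  # population standard deviation
N = len(population)

for n in (5, 10, 20, 50):
    # Theoretical standard error of the sample mean, with the
    # finite-population correction for sampling without replacement
    se = (pop_sd / n ** 0.5) * ((N - n) / (N - 1)) ** 0.5
    # Empirical check: draw many samples of size n, look at the
    # spread of the resulting sample means
    means = [statistics.mean(random.sample(population, n))
             for _ in range(2000)]
    emp = statistics.stdev(means)
    print(f"n={n:3d}  theoretical SE={se:.3f}  empirical SE={emp:.3f}")
```

Is this standard-error framing the right way to think about "expected deviation from the population mean", and is there an analogous result for the CV?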