This isn't a homework question. I'm a PhD student, currently working in an area that would be lengthy and unnecessary to explain here, but which I have summarised in the following analogy. Stats is not my strong point, so I would really appreciate it if someone could point me in the right direction:
Imagine that you have a whole bunch of red balls and green balls. You suspect that the red balls may have a very slightly different diameter from the green balls.
If you measure one of each, you would get a value for the diameter of each colour ball and an associated uncertainty with each measurement. For example:
D(red) = 10.5 +/- 0.5
D(green) = 10.6 +/- 0.5
With these values, you can't really decide whether the balls DO have a different diameter or if they are the same.
Now, if you measure more and more balls of each colour, the uncertainty in the mean diameter will become smaller and smaller. After, say, 100 measurements of each colour of ball, we might have:
mean D(red) = 10.51 +/- 0.05
mean D(green) = 10.59 +/- 0.05
My question is: how many measurements would we need to take before we can say, with some degree of confidence, that the red balls do (or don't) have a different diameter from the green balls? Obviously it depends to some extent on how far apart the means of the diameters are. If you had:
mean D(red) = 10.5 +/- 0.5
mean D(green) = 20.6 +/- 0.5
Then you can be pretty sure they have different diameters!
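(For what it's worth, here is my rough attempt at sketching the "how many measurements" part. I believe this is what statisticians call a sample-size or power calculation; the formula and all numbers below are my own illustration, assuming each single measurement has a scatter of sigma and that I want to detect a true difference delta with the conventional 5% significance level and 80% power.)

```python
import math

def measurements_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate measurements needed per colour for a two-sample comparison.

    delta   : true difference in mean diameter you want to detect
    sigma   : standard deviation of a single diameter measurement
    z_alpha : normal quantile for the two-sided significance level
              (1.96 corresponds to alpha = 0.05)
    z_beta  : normal quantile for the desired power
              (0.84 corresponds to power of about 0.80)
    """
    n = 2.0 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical numbers: single-measurement scatter of 0.5,
# hoping to detect a 0.08 difference in mean diameter.
n = measurements_per_group(delta=0.08, sigma=0.5)
print(n)  # -> 613 measurements of each colour
```

If that's roughly right, it at least matches my intuition that a smaller true difference (or a noisier measurement) demands many more balls.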
I guess in summary, I want to say if:
mean D(red) = X(R) +/- Y(R)
mean D(green) = X(G) +/- Y(G)
then with what confidence can I say that the diameters are the same or different?
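To make that concrete, here is my attempt at how I think the comparison would go, treating Y(R) and Y(G) as standard errors of the means: a two-sample test on the summary values, using the normal approximation (which I gather is fine for large numbers of measurements; for small samples one would presumably use a t distribution, e.g. scipy.stats.ttest_ind_from_stats). Variable names are my own.

```python
import math

def compare_means(x_r, se_r, x_g, se_g):
    """Two-sample comparison from summary statistics.

    x_r, x_g   : mean diameters of the red and green balls
    se_r, se_g : standard errors of those means (sigma / sqrt(n))

    Returns (z, p): the test statistic and a two-sided p-value
    under the normal approximation (adequate for large n).
    """
    z = (x_r - x_g) / math.sqrt(se_r ** 2 + se_g ** 2)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
    return z, p

# The 100-measurement example from above:
z, p = compare_means(10.51, 0.05, 10.59, 0.05)
print(round(z, 3), round(p, 3))  # -> -1.131 0.258
```

If I've done this right, p is about 0.26 for that example, i.e. still not convincing evidence of a real difference, whereas the 10.5 vs 20.6 case gives a vanishingly small p. Is this the correct framework?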
Any help at all with this would be gratefully received! Thank you in advance.