I have what is probably a simple question:

I'm doing a pilot study to estimate the measurement error of some of our laboratory equipment. At one point in the dynamic range we have a set of measurements of a standard solution, where an analysis of 109 data points gives an average u = 30.23 and a standard deviation s = 0.296.

What I need to know is what separation (D) of two independent single samples (e.g. two blood tests taken at different times) is needed to claim a significant difference at the 95% level, assuming the measurement precision is the same as when measuring the standard solution.

1) Is the Z-test applicable here, treating the single samples as means and assigning them the standard deviation from the standard-solution measurements, in which case the separation needs to be D = sqrt(2) x 1.96 x s? Or is it as simple as requiring a minimum separation of D = 1.96 x s?
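For concreteness, here is a small Python sketch computing both candidate separations from my numbers above (s = 0.296 from the standard-solution run; 1.96 is the two-sided 95% critical value of the standard normal):

```python
import math

s = 0.296   # SD estimated from the 109 standard-solution measurements
z = 1.96    # two-sided 95% critical value of the standard normal

# Option A: the difference of two independent single measurements has
# standard deviation s * sqrt(2), so the threshold scales by sqrt(2)
d_two_sample = math.sqrt(2) * z * s

# Option B: simply require the gap to exceed 1.96 SDs of one measurement
d_one_sample = z * s

print(d_two_sample)  # roughly 0.82
print(d_one_sample)  # roughly 0.58
```

So the two candidate thresholds differ by a factor of sqrt(2), which is why I'd like to know which reasoning is correct.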

2) When making a normal-distribution plot, it seems I have a slightly skewed distribution with too many high values. Does anyone have a suggestion for a better-fitting distribution that does not require heavy computation?
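To quantify the "too many high values" impression, I computed the sample skewness; here is a sketch with made-up placeholder values (my real 109 measurements are not shown here):

```python
import math

# placeholder values standing in for the real measurements
data = [29.9, 30.0, 30.1, 30.1, 30.2, 30.2, 30.3, 30.4, 30.6, 30.9]

n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# adjusted third standardized moment; > 0 means a right-skewed tail
skew = sum(((x - mean) / sd) ** 3 for x in data) * n / ((n - 1) * (n - 2))

print(skew)  # positive for this placeholder data, i.e. right-skewed
```

A clearly positive skewness would confirm that the excess of high readings is a real feature of the data and not just an artifact of the plot.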

Many thanks in advance, and sorry for my non-scientific notation in the formulas.