I don't know if this is basic enough to be in the pre-university section, but I guess it can be moved if not.
Imagine you are watching a pendulum and noting the time each time it reaches the right-hand side. There will be an uncertainty in each of these time readings: Delta[t].
Following the rules of uncertainty propagation, the uncertainty in a single period (P = t2 - t1, the difference of two readings) will be sqrt(2) x Delta[t].
Now if I imagine that the period P is constant, I can lower the uncertainty in P (Delta[P]) by timing, say, 100 periods in one go and dividing the total time by 100. Delta[P] will then be sqrt(2) x Delta[t] / 100, right?
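To convince myself of that scaling, here is a quick Monte Carlo sketch (my own check, with made-up numbers for the period and the clock noise): each clock reading gets Gaussian noise of width Delta[t], so a single-period measurement (two readings) should scatter by sqrt(2) x Delta[t], while an N-period measurement divided by N should scatter by sqrt(2) x Delta[t] / N.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 2.0        # assumed true period (s)
dt = 0.05      # assumed uncertainty of one clock reading (s)
N = 100        # number of periods timed in one go
trials = 200_000

# One-period measurement: t2 - t1, each reading independently noisy
one = (P + rng.normal(0, dt, trials)) - rng.normal(0, dt, trials)

# N-period measurement: (t_end - t_start) / N, still only two noisy readings
many = (N * P + rng.normal(0, dt, trials) - rng.normal(0, dt, trials)) / N

print(one.std())   # scatter of a single-period measurement, ~ sqrt(2)*dt
print(many.std())  # scatter of the N-period average, ~ sqrt(2)*dt / N
```

The key point the simulation makes explicit: however many periods you span, you only take two noisy readings, so the absolute timing error does not grow with N while the measured interval does.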
My question is this:
In the real world, the period P will not be constant: it will change slightly from swing to swing as the pendulum loses energy. Is it still okay to use the method above? Isn't dividing by N only valid when you are measuring the same thing each time? I can't justify to myself that it is still okay to divide by N when P may have a different value each time you measure it.
My immediate thought is that if you were measuring objects of very different lengths, you could not divide by the number of measurements N to reduce the uncertainty on any one object. But going back to the pendulum, the change in period per swing is very small, so perhaps it is still reasonable to divide by N in this case?
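Extending my sketch above to a drifting period (again with invented numbers, including a hypothetical drift per swing) shows what the N-period method actually gives you in that case: the mean of the N individual periods, with the same sqrt(2) x Delta[t] / N scatter, rather than any one swing's period.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.05                      # assumed uncertainty of one clock reading (s)
N = 100
P0, drift = 2.0, 1e-4          # hypothetical: period grows 0.1 ms per swing
periods = P0 + drift * np.arange(N)   # the N slightly different true periods

trials = 200_000
start = rng.normal(0, dt, trials)               # noisy start reading
end = periods.sum() + rng.normal(0, dt, trials)  # noisy end reading
measured = (end - start) / N

print(measured.mean())  # lands on periods.mean(), the average period
print(measured.std())   # still ~ sqrt(2)*dt / N
```

So dividing by N still shrinks the random timing error; the subtlety is only in interpreting what the small-error number refers to, i.e. the average period over those N swings.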
I hope this makes sense!
Any help on this would be great!