Hi there!

I've got a problem in statistics, which isn't my strongest subject.
I have a set of N (about 10^6) measured values of a variable r. Their distribution is supposed to be quite close to an exponential distribution with mean r_d.
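(Explicitly, the expected density would be f(r) = (1/r_d) * exp(-r/r_d), if I have that right.)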

To investigate whether the measured values agree with this exponential distribution, I have made a graph of log(N(r)/(delta r)) against r, where N(r) is the number of r values falling in the interval [r, r + delta r). The bin width delta r is chosen so that the graph contains around 50 points. The points lie approximately on a straight line, as expected.
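In case it helps, here is roughly how the graph is built (a minimal sketch in Python/NumPy; the synthetic exponential draws, r_d = 2.0, and the seed are placeholders standing in for my real data):

    import numpy as np

    # Stand-in for my real data: ~10^6 draws from an exponential with mean r_d.
    rng = np.random.default_rng(0)   # placeholder seed
    r_d = 2.0                        # placeholder mean
    r = rng.exponential(r_d, 10**6)

    # Count the r values in ~50 equal-width intervals [r, r + delta r).
    counts, edges = np.histogram(r, bins=50)
    delta_r = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])

    # The quantity I plot against r; empty tail bins are skipped to avoid log(0).
    mask = counts > 0
    log_density = np.log(counts[mask] / delta_r)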

But I now want to add error bars to these ~50 data points, and thus I want to know the standard deviation of log(N(r)/(delta r)) for each point. I guess that I need to use propagation of errors, but how do I find the standard deviation of N(r)? There is of course an expression for a 'perfect' N(r), namely N(r) = N*exp(-r/r_d) * (1 - exp(-delta r/r_d)). But this is not the formula for generating my N(r), as this is simply generated by counting r values on some intervals. Is there any intelligent way to find the standard deviation of N(r)?