It's actually OK - I've spotted the problem.
I should have been taking the standard deviation of the means across the simulation runs, whereas I was taking the average of the per-run standard deviations. The first measures how much the run averages vary from each other (which is what the confidence interval needs); the second only measures the spread within a single run.
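For anyone hitting the same mix-up, here is a minimal sketch of the difference, using the per-run figures from my original post below. The two quantities differ by two orders of magnitude:

```python
import statistics

# Per-run sample means and standard deviations from the five simulations
means  = [2.993521, 2.974082, 2.967603, 2.974082, 2.974082]
stdevs = [1.196116, 1.173009, 1.162653, 1.173009, 1.173009]

# What I was doing: average of the per-run stdevs (within-run spread, ~1.18)
mean_of_stdevs = statistics.mean(stdevs)

# What I should have done: stdev of the per-run means (between-run spread, ~0.01)
stdev_of_means = statistics.stdev(means)

print(mean_of_stdevs)   # ~1.175559 (the "average stdev" in my post)
print(stdev_of_means)   # ~0.0098, much smaller
```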
Hi,
I am running simulations to test an algorithm. According to the analysis, the maximum hop count should be O(log2 N) and the typical (average) hop count should be (1/2)log2 N, where N is the number of nodes. So for a 50-node network I would expect a maximum hop count of log2(50) = 5.643856 and an average hop count of 2.821928.
So I ran five 50-node simulations with different seeds; each simulation produced a data set of 467 hop-count values. I took the mean and standard deviation of each simulation (listed below), then computed the batch mean (mean of the means) and the average standard deviation across the five runs (also below). I then calculated a 95% confidence interval using a sample size of 5 (one per simulation run). My problem is that the resulting interval is 1.94627 - 4.007078. My understanding of a CI is that it brackets, with 95% confidence, where the true average should lie, so I expected something around 2.8 - 3. I can get an interval like that (2.87 - 3.08), but only if I set the sample size to 467 (the size of one run's data set). Surely my sample size should be 5, since the interval is based on 5 simulation runs?
sim 1: mean: 2.993521, stdev: 1.196116
sim 2: mean: 2.974082, stdev: 1.173009
sim 3: mean: 2.967603, stdev: 1.162653
sim 4: mean: 2.974082, stdev: 1.173009
sim 5: mean: 2.974082, stdev: 1.173009
batch mean (mean of means): 2.976674
average stdev: 1.175559
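For reference, a minimal sketch reproducing both intervals from these figures. I am assuming the normal critical value z = 1.96 was used, since that is what reproduces the 1.946 - 4.007 interval; a t value (2.776 for 4 degrees of freedom) would be more appropriate for a sample of 5 and gives a wider interval. The corrected version uses the standard deviation of the batch means rather than the average within-run stdev:

```python
import math

means  = [2.993521, 2.974082, 2.967603, 2.974082, 2.974082]
stdevs = [1.196116, 1.173009, 1.162653, 1.173009, 1.173009]

n = len(means)                  # 5 simulation runs
batch_mean = sum(means) / n     # 2.976674
avg_stdev  = sum(stdevs) / n    # 1.175559

z = 1.96  # normal critical value for 95% (assumed; t = 2.776 for 4 d.f. is stricter)

# Original (wrong) interval: average within-run stdev with n = 5
hw = z * avg_stdev / math.sqrt(n)
print(batch_mean - hw, batch_mean + hw)   # ~1.946 .. 4.007

# Corrected interval: sample stdev of the batch means themselves
s_means = math.sqrt(sum((m - batch_mean) ** 2 for m in means) / (n - 1))
hw = z * s_means / math.sqrt(n)
print(batch_mean - hw, batch_mean + hw)   # ~2.968 .. 2.985
```

The corrected interval is tight around the batch mean because the five run averages barely differ; the wide original interval was quoting within-run variability, not uncertainty about the mean.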
Many thanks in advance