Originally Posted by **danc_ie**

Greetings all,

I am testing a modification to an algorithm to see if significant performance gains can be observed when compared with the original algorithm. I have performed 30 trials of each on a benchmark problem, and I have two sets of performance results to compare (higher values being better).

Using 95% confidence intervals, I can observe (graphically) that there is no overlap between the error bars: the modified method's interval for mean performance sits entirely above the original method's. Using a paired t-test, I obtain a p-value less than 0.05.
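For concreteness, the paired-t-test side of this check can be sketched in a few lines of Python. The sketch below computes the paired t statistic and a 95% confidence interval on the per-trial mean *difference* (related to, but not the same thing as, the two separate error bars described above). The data arrays are hypothetical placeholders, and 2.045 is the two-sided 5% critical value of Student's t with 29 degrees of freedom, i.e. for n = 30 paired trials:

```python
import math
import statistics

def paired_t_and_ci(a, b, t_crit=2.045):
    """Paired t statistic for two result lists a, b (matched trial-by-trial),
    plus a confidence interval on the mean per-trial difference.

    t_crit = 2.045 is the two-sided 95% critical value of Student's t
    with 29 degrees of freedom (appropriate for n = 30 paired trials).
    """
    assert len(a) == len(b), "results must be paired trial-by-trial"
    n = len(a)
    d = [x - y for x, y in zip(a, b)]        # per-trial differences
    mean_d = statistics.fmean(d)
    se = statistics.stdev(d) / math.sqrt(n)  # std. error of the mean diff
    t = mean_d / se
    ci = (mean_d - t_crit * se, mean_d + t_crit * se)
    return t, ci

# Hypothetical scores for 30 trials of each method (higher is better).
modified = [0.80 + 0.01 * (i % 5) for i in range(30)]
original = [0.74 + 0.01 * (i % 7) for i in range(30)]

t, (lo, hi) = paired_t_and_ci(modified, original)
# Significant at the 5% level iff |t| > t_crit -- equivalently,
# iff the CI on the mean difference excludes zero.
print(t, (lo, hi))
```

One design note on the sketch: pairing assumes trial *i* of each method ran under matched conditions (e.g. the same random seed or problem instance); if the 30 trials of each method are unrelated, that assumption is worth checking before relying on a paired test.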

My question is: are these two methods (checking for non-overlapping confidence intervals, and the paired t-test) functionally equivalent? In a paper I'm writing I am using the confidence intervals to claim a statistically significant performance improvement. Is this sufficient, or am I potentially leaving myself open to criticism from statisticians more experienced than I am?

Any comments would be most welcome. Many thanks,

dan