This ought to be an easy question but I am struggling. I have a table of data (let's say x and y) calculated to high precision. I have another table of the same data calculated differently using lower-precision maths (actually on an 8-bit processor). Significant rounding errors are present in the latter, as might be expected. x is always the independent variable.
To compare the two data sets I take the difference of the dependent variables at each x and plot these differences in Excel against the independent variable. The errors (differences) appear from the fifth or sixth decimal place onwards.
If the errors in the second calculation were truly random, the mean of the differences should be zero. Sometimes the mean is close to zero; sometimes there is a definite positive or negative bias. Occasionally there is a trend in the differences of the form f(x) = mx + c, yielding an intercept and a gradient.
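When that trend appears, what I am doing amounts to a least-squares line fit of the differences against x. A minimal sketch in Python rather than Excel (the arrays, seed, and error magnitudes here are invented purely for illustration):

```python
import numpy as np

# Hypothetical data: x is the independent variable; y_hi is the high-precision
# "standard", y_lo stands in for the 8-bit result at the same x values.
x = np.linspace(0.0, 1.0, 101)
y_hi = np.sin(x)
rng = np.random.default_rng(0)
# Invented error model: a small offset, a small trend in x, plus noise.
y_lo = y_hi + 1e-6 + 2e-6 * x + rng.normal(0.0, 1e-6, x.size)

diff = y_lo - y_hi

# Least-squares fit of diff = m*x + c: m quantifies the trend, c the offset.
m, c = np.polyfit(x, diff, 1)
print(m, c)
```

With a fit like this, m and c (with their standard errors) are what I would quote when the differences show a trend rather than pure scatter.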
What I need to know is how to present the results. If the first calculation is the "standard", then the accuracy of the second (lower-precision) method needs to be quoted relative to the first. So far I have been content with quoting the mean of the differences (which should be zero but seldom is) and the correlation coefficient of the differences against x (which should also be zero, indicating no relationship whatsoever, but seldom is).
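In Python terms, the two figures I currently quote amount to something like this (the arrays and error model are made up for illustration; in practice diff comes from subtracting the two tables):

```python
import numpy as np

# Hypothetical differences: a small constant bias plus random noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
diff = 1e-6 + rng.normal(0.0, 2e-6, x.size)

bias = diff.mean()               # mean difference: systematic offset, ideally 0
r = np.corrcoef(x, diff)[0, 1]   # correlation of differences with x, ideally 0
print(bias, r)
```

The mean catches a constant bias and the correlation catches an x-dependent trend, but neither says how large the scatter of the differences is, which is where I am unsure what figure to quote.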
This is not about reducing those differences but about how best to quote the accuracy with which one set of data matches a "standard". Any suggestions?