Sorry if I am not being clear; I don't have much background in this sort of thing. Here is what I hope is a more detailed explanation of what I did and what I want to know:
I used two programs to count certain objects present in digital photographs. The objects had specific characteristics that the programs had to identify in order to count them, so when I ran each of the 10 photographs through the programs, each program gave me different values. I know that one program is more accurate than the other.

I then calculated the percentage error of the values I got. For each photo, I subtracted the value from the "accurate" program from the value of the "inaccurate" program, divided that difference by the value of the "accurate" program, and multiplied by 100. I believe this is called the percentage error (am I right that that is the name?).

So now I have a list of 10 "percentage errors" which range from -5% to 6%. If I have to describe my results, do I say the error obtained from the "inaccurate" program was -5% to 6%? Or is there a way to convert that to say +/-4%?
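In case it helps, here is a small Python sketch of the calculation I described; the counts below are made-up placeholders, not my actual data:

```python
# Hypothetical object counts for the 10 photographs.
accurate = [100, 95, 110, 105, 98, 102, 99, 101, 97, 103]    # "accurate" program
inaccurate = [95, 97, 112, 103, 99, 108, 98, 102, 96, 104]   # "inaccurate" program

# Percentage error for each photo:
# (inaccurate - accurate) / accurate * 100
percent_errors = [
    (inacc - acc) / acc * 100
    for acc, inacc in zip(accurate, inaccurate)
]

print([round(e, 1) for e in percent_errors])
```

So each photo gets its own signed percentage error, and the 10 results form the list I am asking how to summarize.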