I have written a C function that computes the best rational approximation to a given real value, using continued fraction convergents and semi-convergents. Both the numerator and the denominator are capped at maximum values. I also have a test harness that steps a real value from a start value to an end value by a fixed increment; for each rational produced, it converts the rational back to a real value and compares this with the original value to measure the error.
My question is: given the maximum values for the numerator and denominator, and given that the real values to be approximated range from zero up to the maximum numerator value, what is the worst-case error I should expect?