Hello,

I'm having some trouble getting the same answer as my textbook.

It defines (ignoring units) x as 2.32 and y as 0.45, each given to the nearest 0.01.

For my minimum values I subtract 0.005 from both, giving me

2.315 and 0.445.

For my maximum values I add 0.005, giving me 2.325 and 0.455.
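The bounds above can be checked with a few lines of Python (just a sketch of the arithmetic; the rounding to 3 decimal places is only there to tidy up floating-point noise):

```python
x, y = 2.32, 0.45      # values rounded to the nearest 0.01
half_width = 0.005     # so each true value lies within +/- 0.005

# minimum and maximum possible values for each measurement
x_min, x_max = round(x - half_width, 3), round(x + half_width, 3)
y_min, y_max = round(y - half_width, 3), round(y + half_width, 3)

print(x_min, x_max)  # 2.315 2.325
print(y_min, y_max)  # 0.445 0.455
```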

The part that confuses me is calculating the actual percentage error: what am I dividing by what?

My textbook gives me

...but I don't get where the values are coming from.