Maximum error and percentage

Hello,

I'm having some problems getting the same answer as a text book.

It defines (ignoring units) x as 2.32 and y as 0.45, both correct to the nearest 0.01:

$\displaystyle x = 2.32\pm0.005$

$\displaystyle y = 0.45\pm0.005$

and

$\displaystyle T=(x-y)/y$

For my **minimum values** I subtract 0.005 from both, giving me 2.315 and 0.445.

For my **maximum values** I add 0.005 to both, giving me 2.325 and 0.455.
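To show my working so far, here is a quick Python sketch of the bounds I computed. The variable names (`dx`, `dy`, `T_min`, `T_max`) are just my own labels, and I'm assuming T is largest when the numerator is largest and the denominator smallest:

```python
x, y = 2.32, 0.45
dx = dy = 0.005  # both values are correct to the nearest 0.01

# Nominal value of T = (x - y) / y
T = (x - y) / y
print(T)  # approximately 4.1556

# T is biggest with maximum x and minimum y,
# and smallest with minimum x and maximum y.
T_max = ((x + dx) - (y - dy)) / (y - dy)
T_min = ((x - dx) - (y + dy)) / (y + dy)
print(T_min, T_max)  # approximately 4.0879 and 4.2247
```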

The part that confuses me is calculating the actual percentage error: what am I dividing by what?

My textbook gives me

$\displaystyle \text{error} = \frac{4.22472 - 4.1556}{4.22472}$

...but I don't get where the values are coming from.