Accuracy of a linear approximation
I have some problems that ask you to verify a linear approximation (easy enough) and then to find the range of values of x for which it is accurate to within 0.1. The latter part has me somewhat stumped.
The problem is:

[formula missing], a = 0

which is

[formula missing]

and

[formula missing]

so

[formula missing]
... so that's verified. To get a tolerance of +/- 0.1, you set up the inequality [formula missing].
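(The formula images above did not survive, but the generic form of this step is standard: if $L(x) = f(0) + f'(0)\,x$ is the linearization at $a = 0$, then "accurate to within 0.1" means

$$|f(x) - L(x)| \le 0.1 \quad\Longleftrightarrow\quad L(x) - 0.1 \le f(x) \le L(x) + 0.1,$$

and the task is to solve this double inequality for $x$. For most textbook functions this is done graphically or numerically rather than by exact algebra.)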
Here I'm stuck. Can someone point me in the right direction? At this point I'm probably supposed to be doing algebra, but I'm hitting a wall.
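Since the original function didn't come through, here is a numerical sketch using a common textbook stand-in, $f(x) = \sqrt[3]{1-x}$ with linearization $L(x) = 1 - x/3$ at $a = 0$ (an assumption, not necessarily the poster's actual problem). The idea: scan outward from $x = 0$ in both directions until the error $|f(x) - L(x)|$ first exceeds 0.1.

```python
import math

# Stand-in example (assumption): f(x) = cube root of (1 - x),
# whose linearization at a = 0 is L(x) = 1 - x/3.
def f(x):
    # copysign handles the cube root of a negative argument for x > 1
    return math.copysign(abs(1 - x) ** (1 / 3), 1 - x)

def L(x):
    return 1 - x / 3

def error(x):
    return abs(f(x) - L(x))

def bound(direction, step=1e-4, limit=10.0):
    """Walk away from 0 in the given direction (+1 or -1) until the
    approximation error first exceeds 0.1, and return that x."""
    x = 0.0
    while abs(x) < limit and error(x) <= 0.1:
        x += direction * step
    return x

lo, hi = bound(-1), bound(+1)
print(f"|f(x) - L(x)| <= 0.1 roughly for {lo:.3f} <= x <= {hi:.3f}")
```

A step scan is crude but transparent; a root-finder applied to $|f(x) - L(x)| - 0.1$ near each endpoint would give sharper bounds. The same pattern works for whatever function the actual problem uses: just swap in `f` and `L`.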