Basically, yes, $\epsilon$ is an "error" (more correctly a "relative error": the actual error divided by the change in the variable). And there *is* error because we are approximating the, possibly very complex, function $f$ by a linear function. We can approximate the function $f(x)$ by the linear function $f'(a)(x - a) + f(a)$. The "error" is the true value, $y = f(x)$, minus that: $\text{error}(x) = y - [f'(a)(x - a) + f(a)] = (y - f(a)) - f'(a)(x - a)$. Now, taking $\Delta y = y - f(a)$ and $\Delta x = x - a$, that says $\text{error}(x) = \Delta y - f'(a)\,\Delta x$.

Since we *know* $f(a)$, we would certainly want our error to be 0 there. And, of course, the further from $x = a$ we are (that is, the further we are from the point where we have exact information), the larger we would expect our error to get. That is, we would expect our error to be some "relative error" function times $\Delta x$. That "relative error" is what they are calling $\epsilon$. The fact that the error, $\epsilon\,\Delta x$, goes to 0 as $\Delta x$ goes to 0 simply means this is an approximation to $f$ around $x = a$. The fact that $\epsilon$ *itself* goes to 0 means this is the *best* possible linear approximation.
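To see this numerically, here is a small sketch (my own illustration, not from the original discussion) using $f(x) = e^x$ at $a = 0$, where the linear approximation is $1 + x$. The quantity $\epsilon(x) = \big(f(x) - f(a) - f'(a)(x - a)\big)/(x - a)$ should itself shrink to 0 as $x \to a$:

```python
import math

def eps(f, fprime, a, x):
    """Relative error of the linear approximation of f at a, evaluated at x:
    (f(x) - f(a) - f'(a)(x - a)) / (x - a)."""
    return (f(x) - f(a) - fprime(a) * (x - a)) / (x - a)

# For f(x) = e^x, f'(x) = e^x; near a = 0 we expect eps ~ dx/2.
for dx in [0.1, 0.01, 0.001]:
    print(dx, eps(math.exp, math.exp, 0.0, dx))
```

The printed values shrink roughly in proportion to the step size, which is exactly the statement that $\epsilon \to 0$ as $\Delta x \to 0$.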

In the rest of what you write, they are approximating a function of two variables by a linear function of two variables, which you can think of as giving a tangent *plane* rather than a tangent line. The two $\epsilon$'s are the relative errors in the directions of the coordinate axes. The errors in all other directions can be calculated as a vector sum of those.
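The same check works in two variables. As a sketch (again my own example, not from the original post), take $f(x, y) = x^2 + y^2$ near $(a, b)$: the tangent-plane approximation is $f(a,b) + 2a\,\Delta x + 2b\,\Delta y$, and the leftover error is $\Delta x^2 + \Delta y^2$, which can be written as $\epsilon_1\,\Delta x + \epsilon_2\,\Delta y$ with $\epsilon_1 = \Delta x$ and $\epsilon_2 = \Delta y$, both going to 0 as the step does:

```python
import math

def plane_error(a, b, dx, dy):
    """Error of the tangent-plane approximation of f(x,y) = x^2 + y^2 at (a, b)."""
    f = lambda x, y: x**2 + y**2
    approx = f(a, b) + 2*a*dx + 2*b*dy  # tangent plane: f + f_x dx + f_y dy
    return f(a + dx, b + dy) - approx

for dx, dy in [(0.1, 0.1), (0.01, 0.01)]:
    err = plane_error(1.0, 2.0, dx, dy)
    # Dividing by the step length shows the error shrinks *faster* than the step.
    print(dx, dy, err, err / math.hypot(dx, dy))
```

The ratio of error to step length goes to 0, which is the two-variable version of the "best possible linear approximation" statement above.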