Tensor Calculus: total differential

I am having a problem understanding a particular result in tensor calculus. It is quite a simple result. I shall set the scene.

It is possible to define a double contraction of two second order tensors [tex]\mathbf{A}[/tex] and [tex]\mathbf{B}[/tex], which is given by

[tex]\mathbf{A} : \mathbf{B} = A_{ij} B_{ij}[/tex]

(summing over the repeated indices [tex]i[/tex] and [tex]j[/tex]).
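If it helps to see it concretely, the index definition [tex]\mathbf{A} : \mathbf{B} = A_{ij} B_{ij}[/tex] is just the sum of element-wise products, which also equals [tex]\mathrm{tr}(\mathbf{A}^T \mathbf{B})[/tex]. A minimal NumPy sketch (the matrices here are arbitrary illustrations):

```python
import numpy as np

# Double contraction of two second-order tensors:
# A : B = A_ij B_ij, i.e. the sum of element-wise products.
def double_contraction(A, B):
    return np.einsum('ij,ij->', A, B)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

s = double_contraction(A, B)
# Equivalent formulation: A : B = tr(A^T B)
assert np.isclose(s, np.trace(A.T @ B))
print(s)  # 1*5 + 2*6 + 3*7 + 4*8 = 70.0
```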
Now we come onto the idea of taking the gradient of a scalar-valued function of a second order tensor. Consider [tex]f[/tex] to be a scalar-valued function of second order tensors, [tex]f = f(\mathbf{A})[/tex]. It is a non-linear function, and we aim to approximate [tex]f[/tex] at [tex]\mathbf{A}[/tex] by a linear function. Taylor expanding [tex]f[/tex] at [tex]\mathbf{A}[/tex] apparently yields

[tex]f(\mathbf{A} + d\mathbf{A}) = f(\mathbf{A}) + df + \text{higher order terms},[/tex]

where [tex]df[/tex] is given by the double contraction:

[tex]df = \frac{\partial f}{\partial \mathbf{A}} : d\mathbf{A} = \frac{\partial f}{\partial A_{ij}} \, dA_{ij}.[/tex]
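As a numerical sanity check, here is a sketch of the claim that the double contraction [tex]df = \frac{\partial f}{\partial \mathbf{A}} : d\mathbf{A}[/tex] captures the first-order change in [tex]f[/tex]. The particular choice [tex]f(\mathbf{A}) = \mathrm{tr}(\mathbf{A}^T \mathbf{A})[/tex], with gradient [tex]2\mathbf{A}[/tex], is my own illustrative assumption; any smooth [tex]f[/tex] would do:

```python
import numpy as np

# Illustrative choice (an assumption, not from the thread):
# f(A) = tr(A^T A) = A_ij A_ij, whose gradient is df/dA = 2A.
def f(A):
    return np.trace(A.T @ A)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
dA = 1e-6 * rng.standard_normal((3, 3))

grad_f = 2.0 * A                       # df/dA for this particular f
df = np.einsum('ij,ij->', grad_f, dA)  # double contraction (df/dA) : dA

# The first-order term matches the actual change up to O(|dA|^2):
actual = f(A + dA) - f(A)
print(abs(actual - df))  # should be O(|dA|^2), far smaller than df itself
```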
Now I have tried and tried to understand how this follows from some form of Taylor expansion, but I'm not getting anywhere. Can someone point me in the right direction?

Thanks,

Lind

Re: Tensor Calculus: total differential

The Taylor expansion of a function [tex]f(x)[/tex] around [tex]x = a[/tex] is [tex]f(a) + f'(a)(x - a)[/tex] plus terms of order [tex](x - a)^2[/tex] and higher. In particular, if we write [tex]x = \mathbf{A} + d\mathbf{A}[/tex] and [tex]a = \mathbf{A}[/tex], that becomes [tex]f(\mathbf{A} + d\mathbf{A}) = f(\mathbf{A}) + df[/tex] plus higher order terms in [tex]d\mathbf{A}[/tex].

That looks like just what you have.
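To spell out the step in components (treating [tex]f[/tex] as an ordinary multivariable function of the entries [tex]A_{ij}[/tex]), the first-order multivariable Taylor expansion reads:

```latex
f(\mathbf{A} + d\mathbf{A})
  = f(\mathbf{A})
  + \sum_{i,j} \frac{\partial f}{\partial A_{ij}} \, dA_{ij}
  + O(|d\mathbf{A}|^2)
```

The sum in the middle term is, by definition, the double contraction [tex]\frac{\partial f}{\partial \mathbf{A}} : d\mathbf{A}[/tex] of the gradient tensor with the increment.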

Re: Tensor Calculus: total differential

HallsofIvy, thank you for your reply.

Thanks for clearing up the step about the Taylor series expansion. My main question is: why does this term end up being the double contraction given? I don't understand how these two things are related.

Thanks,

Lind