EDIT: I'm sorry, I should have checked the textbooks first: this result has been known for ages. It's just the special case of matrix differentiation where the variable in the derivative is the whole matrix.
Hi,
my multivariate statistics course strikes again with unproven linear algebra results!
This time, there's a calculation in which I have to compute the derivatives of the inverse of a matrix and of its determinant, with respect to a parameter that varies the matrix itself.
Luckily, in my course we work only with symmetric, strictly positive definite matrices.
I would like to ask if the following results are correct.
Result 1: if $t \mapsto \Sigma(t)$ is differentiable and $\Sigma(t)$ is always invertible, then
$$(\Sigma^{-1})' = -\,\Sigma^{-1}\,\Sigma'\,\Sigma^{-1}.$$
Proof:
It can easily be shown that the derivative of a matrix product $AB$ follows the same rule as the derivative of a product of two scalars (by computing the limit of the product's variation and neglecting higher-order infinitesimals), provided of course that $A$ and $B$ are both differentiable:
$$(AB)' = A'B + AB'.$$
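Just to sanity-check the product rule numerically, here is a quick sketch with NumPy (the curves $A(t)$, $B(t)$ and the step size are my own choices, taken linear in $t$ for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 3, 1e-6

# Two smooth matrix-valued curves A(t), B(t), linear in t
A0, A1 = rng.standard_normal((2, n, n))
B0, B1 = rng.standard_normal((2, n, n))
A = lambda t: A0 + t * A1          # so A'(t) = A1
B = lambda t: B0 + t * B1          # so B'(t) = B1

# Central finite-difference derivative of the product at t = 0
fd = (A(h) @ B(h) - A(-h) @ B(-h)) / (2 * h)

# Product rule: (AB)' = A'B + AB'
exact = A1 @ B(0) + A(0) @ B1
print(np.allclose(fd, exact, atol=1e-6))  # → True
```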
Now, if our matrix variable is constrained to be always invertible, we know that
$$\Sigma\,\Sigma^{-1} = I,$$
and differentiating both sides with the product rule gives
$$\Sigma'\,\Sigma^{-1} + \Sigma\,(\Sigma^{-1})' = 0,$$
where $0$ is the null square matrix. Solving this equation for $(\Sigma^{-1})'$ gives the preceding result. Notice that Result 1 only requires the variable to be always invertible, not necessarily symmetric or positive definite.
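For what it's worth, Result 1 also checks out numerically. A sketch with NumPy (the positive definite $\Sigma$ and the variation direction `D` are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 4, 1e-6

# A symmetric positive definite Sigma and an arbitrary variation direction D
M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)
D = rng.standard_normal((n, n))

inv = np.linalg.inv
# Finite-difference derivative of Sigma^{-1} along direction D
fd = (inv(Sigma + h * D) - inv(Sigma - h * D)) / (2 * h)

# Result 1: derivative of the inverse is -Sigma^{-1} D Sigma^{-1}
exact = -inv(Sigma) @ D @ inv(Sigma)
print(np.allclose(fd, exact, atol=1e-5))  # → True
```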
Result 2 (this one requires the matrix to be symmetric and invertible):
$$(\det\Sigma)' = \det\Sigma\,\operatorname{tr}\!\left(\Sigma^{-1}\,\Sigma'\right).$$
Proof:
Let
$$\Sigma = R\,\Lambda\,R^T, \qquad \Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_n)$$
be the diagonal form of $\Sigma$, where $R$ is also a variable depending on $\Sigma$, but constrained to be orthogonal:
$$R\,R^T = I.$$
Since $\det\Sigma = \det\Lambda = \prod_i \lambda_i$, differentiating the product gives
$$(\det\Sigma)' = \sum_i \lambda_i' \prod_{j \neq i} \lambda_j = \det\Lambda\,\operatorname{tr}\!\left(\Lambda^{-1}\Lambda'\right).$$
Differentiating the constraint we get:
$$R'\,R^T + R\,(R^T)' = 0.$$
Combining these results and remembering that the trace is linear and invariant under cyclic permutations (so the terms involving $R'$ cancel), we finally get:
$$(\det\Sigma)' = \det\Sigma\,\operatorname{tr}\!\left(\Sigma^{-1}\,\Sigma'\right).$$
We assumed that $\Sigma$ varies over a set where it is always symmetric and invertible. Actually, we can get the same result by similar calculations for any $\Sigma$ constrained to be invertible, even if not diagonalizable, by using the triangular (Schur) form instead, which always exists if we extend to complex entries.
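And a quick numerical check of Result 2 as well, sketched with NumPy on a random symmetric positive definite $\Sigma$ with a symmetric variation direction (both made up for the test):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 4, 1e-6

M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)    # symmetric positive definite
D = rng.standard_normal((n, n))
D = D + D.T                        # keep the variation symmetric

det = np.linalg.det
# Finite-difference derivative of det(Sigma) along direction D
fd = (det(Sigma + h * D) - det(Sigma - h * D)) / (2 * h)

# Result 2: derivative of det is det(Sigma) * tr(Sigma^{-1} D)
exact = det(Sigma) * np.trace(np.linalg.inv(Sigma) @ D)
print(np.allclose(fd, exact, rtol=1e-4))  # → True
```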