How would I go about showing that the inverse of a self-adjoint linear transformation on a finite dimensional real vector space is also self-adjoint?
Last edited by laura9010; Jan 8th 2010 at 03:04 PM.
Originally Posted by laura9010 How would I go about showing that the inverse of a self-adjoint linear transformation on a finite dimensional real vector space is also self-adjoint? By definition, $T^*$ is the adjoint of $T$ if $\langle Tx, y \rangle = \langle x, T^*y \rangle$ for all $x, y$. Now, $T^{-1}$ is the transformation for which $TT^{-1} = T^{-1}T = I$, which means $\langle TT^{-1}x, y \rangle = \langle x, y \rangle$ for all $x, y$, and therefore $\langle T^{-1}x, T^*y \rangle = \langle x, y \rangle$. The trick is to use this result, together with $T^* = T$, to prove that $\langle T^{-1}x, y \rangle = \langle x, T^{-1}y \rangle$ for all $x, y$. It follows that $(T^{-1})^* = T^{-1}$, which is to say that the inverse of $T$ is self-adjoint.
Originally Posted by hatsoff By definition, $T^*$ is the adjoint of $T$ if $\langle Tx, y \rangle = \langle x, T^*y \rangle$ for all $x, y$. Now, $T^{-1}$ is the transformation for which $TT^{-1} = T^{-1}T = I$, which means $\langle TT^{-1}x, y \rangle = \langle x, y \rangle$ for all $x, y$, and therefore $\langle T^{-1}x, T^*y \rangle = \langle x, y \rangle$. The trick is to use this result, together with $T^* = T$, to prove that $\langle T^{-1}x, y \rangle = \langle x, T^{-1}y \rangle$ for all $x, y$. It follows that $(T^{-1})^* = T^{-1}$, which is to say that the inverse of $T$ is self-adjoint. Possible alternative: If $M$ is a hermitian matrix of $A$, then $M = M^*$, and so the inverse of $M$ is hermitian, and so the inverse of the linear transformation is self-adjoint. Is that correct?
Originally Posted by laura9010 Possible alternative: If $M$ is a hermitian matrix of $A$, then $M = M^*$, and so the inverse of $M$ is hermitian, and so the inverse of the linear transformation is self-adjoint. Is that correct? I don't see how $M = M^*$ implies $(M^{-1})^* = M^{-1}$, but then again I'm very rusty with this sort of thing. The missing step from my proof is as follows: $\langle T^{-1}x, y \rangle = \langle T^{-1}x, TT^{-1}y \rangle = \langle TT^{-1}x, T^{-1}y \rangle = \langle x, T^{-1}y \rangle$, where the middle equality uses $T^* = T$.
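(Editor's note, not part of the thread: the matrix version of the claim is easy to sanity-check numerically. The sketch below, using NumPy with an arbitrary test matrix, confirms that the inverse of a real symmetric matrix is again symmetric, since $(M^{-1})^T = (M^T)^{-1} = M^{-1}$.)

```python
import numpy as np

# Build an arbitrary symmetric, invertible real matrix:
# A + A^T is symmetric, and adding a large multiple of I
# keeps it safely away from singularity.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A + A.T + 8 * np.eye(4)

M_inv = np.linalg.inv(M)

print(np.allclose(M, M.T))          # M is symmetric
print(np.allclose(M_inv, M_inv.T))  # and so is its inverse
```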