
Math Help - matrix diagonalisation - real and complex matrices - some observations

  1. #1
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255

    matrix diagonalisation - real and complex matrices - some observations

    I am revising Linear Algebra and trying to put together a rule for deciding whether a matrix is diagonalisable (a short and simple rule that can be referred to immediately when solving a problem).

    This is what I came up with, based on my experience solving questions and comparing against model answers.

    Real matrices.

    Only symmetric matrices are orthogonally diagonalisable - by any orthogonal matrix whose column vectors are eigenvectors of the original (symmetric) matrix.

    No need for the Gram-Schmidt orthonormalisation process here, as these vectors are not required to be orthonormal.

    Complex matrices.


    Only normal matrices are unitarily diagonalisable - by any unitary matrix whose column vectors are eigenvectors of the original (normal) matrix.

    To unitarily diagonalise a normal matrix, we need to find an orthonormal basis (ie apply the Gram-Schmidt process if necessary, and normalise the basis vectors).
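    As a quick numerical check of the complex case (a sketch assuming numpy is available; the matrix below is a made-up example, not from any textbook):

```python
import numpy as np

# A real matrix that is normal (A A* = A* A) but not symmetric
A = np.array([[0., 1.], [-1., 0.]], dtype=complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)  # normality check

# eig returns unit-length eigenvectors; for a normal matrix with
# distinct eigenvalues they are automatically pairwise orthogonal
w, U = np.linalg.eig(A)
assert np.allclose(U.conj().T @ U, np.eye(2))       # U is unitary
assert np.allclose(U.conj().T @ A @ U, np.diag(w))  # unitary diagonalisation
```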

    Is this correct?

  2. #2
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by Volga View Post
    ... by any orthogonal matrix whose column vectors are eigenvectors of the original (symmetric) matrix.

    And the eigenvectors form an orthonormal basis.

    No need for the Gram-Schmidt orthonormalisation process here, as these vectors are not required to be orthonormal.

    If an eigenspace V has dimension at least 2, two eigenvectors in V are not necessarily orthogonal, so you'll need Gram-Schmidt or another tool.
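    To illustrate (a numerical sketch assuming numpy; the matrix is a made-up example): the symmetric matrix below has eigenvalue 1 with a two-dimensional eigenspace, and a natural basis of that eigenspace is not orthogonal until one Gram-Schmidt step is applied.

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 2., 1.],
              [1., 1., 2.]])   # symmetric; eigenvalues 4, 1, 1

# A natural basis for the eigenspace of eigenvalue 1 (not orthogonal):
u1 = np.array([1., -1., 0.])
u2 = np.array([1., 0., -1.])
assert not np.isclose(u1 @ u2, 0.0)   # eigenvectors, but not orthogonal

# One Gram-Schmidt step fixes that:
v1 = u1 / np.linalg.norm(u1)
v2 = u2 - (u2 @ v1) * v1
v2 /= np.linalg.norm(v2)
v3 = np.array([1., 1., 1.]) / np.sqrt(3)  # eigenvector for eigenvalue 4

P = np.column_stack([v3, v1, v2])
assert np.allclose(P.T @ P, np.eye(3))                  # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag([4., 1., 1.]))  # orthogonal diagonalisation
```

    (np.linalg.eigh would return an orthonormal set directly; the manual step just shows what Gram-Schmidt contributes.)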


    Only normal matrices are unitarily diagonalisable - by any unitary matrix whose column vectors are eigenvectors of the original (normal) matrix.

    And the eigenvectors form an orthonormal basis.


    Fernando Revilla

  3. #3
    MHF Contributor

    Joined
    Apr 2005
    Posts
    15,792
    Thanks
    1532
    Quote Originally Posted by FernandoRevilla View Post
    If an eigenspace V has dimension at least 2, two eigenvectors in V are not necessarily orthogonal, so you'll need Gram-Schmidt or another tool.

    For symmetric real matrices (self-adjoint), eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal, so you only need to worry about orthogonality for eigenvectors corresponding to the same eigenvalue.

  4. #4
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by HallsofIvy View Post
    For symmetric real matrices (self-adjoint), eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal, so you only need to worry about orthogonality for eigenvectors corresponding to the same eigenvalue.
    Right; for that reason I only commented on the case \dim V\geq 2.


    Fernando Revilla

  5. #5
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255
    Right. I understand that eigenvectors (corresponding to distinct eigenvalues) are always pairwise orthogonal, but are they always orthonormal (ie each one has a length of 1)?

    I am asking because sometimes I see the answer include a normalisation step, and sometimes it does not (again, assuming they are already orthogonal).

  6. #6
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by Volga View Post
    ... but are they always orthonormal (ie each one has a length of 1)?

    Not necessarily.


    I am asking because sometimes I see the answer include a normalisation step, and sometimes it does not (again, assuming they are already orthogonal).

    The normalisation step is necessary. Take into account that

    U\in\mathbb{K}^{n\times n}\;\;(\mathbb{K}=\mathbb{C}\textrm{ or }\mathbb{R})

    is unitary if and only if its columns form an orthonormal system with the usual inner product.


    Fernando Revilla

  7. #7
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255
    I understand that using normalised vectors is a must in unitary diagonalisation. But that is used with complex matrices, right?

    But, say, I have a real symmetric matrix that I need to diagonalise. I find the eigenvalues and they are all distinct -> the eigenvectors are pairwise orthogonal. Do I still need to normalise them, or can I skip that? What exactly does normalisation add in this situation?

  8. #8
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by Volga View Post
    I understand that using normalised vectors is a must in unitary diagonalisation. But that is used with complex matrices, right?

    Right, but also for real matrices if you want unitary diagonalization. Bear in mind that U\in\mathbb{R}^{n\times n} is unitary iff it is orthogonal. So, in \mathbb{R}, unitary diagonalization is equivalent to orthogonal diagonalization.

    But, say, I have a real symmetric matrix that I need to diagonalise. I find the eigenvalues and they are all distinct -> the eigenvectors are pairwise orthogonal. Do I still need to normalise them, or can I skip that? What exactly does normalisation add in this situation?
    Normalization ensures that the corresponding transition matrix P\in\mathbb{R}^{n\times n} is orthogonal, i.e. P^{t}=P^{-1}, and as a consequence P^{-1}AP=P^tAP=D (diagonal). By the way, this transition matrix allows us to diagonalize two things: (i) the endomorphism determined by A; (ii) the quadratic form determined by A.
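    A small numerical sketch of what normalisation buys (assuming numpy; the matrix is a made-up example): unnormalised orthogonal eigenvectors still diagonalise A, but only after normalising the columns does transposition replace matrix inversion.

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])   # symmetric, eigenvalues 3 and 1

# Unnormalised orthogonal eigenvectors: (1,1) for 3, (1,-1) for 1
Q = np.array([[1., 1.], [1., -1.]])
assert np.allclose(np.linalg.inv(Q) @ A @ Q, np.diag([3., 1.]))  # diagonalises...
assert not np.allclose(Q.T, np.linalg.inv(Q))                    # ...but Q is not orthogonal

# After normalising the columns, P^t = P^{-1}, so transposing replaces inversion
P = Q / np.sqrt(2)
assert np.allclose(P.T, np.linalg.inv(P))
assert np.allclose(P.T @ A @ P, np.diag([3., 1.]))
```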


    Fernando Revilla

  9. #9
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255
    Quote Originally Posted by FernandoRevilla View Post
    Normalization ensures that the corresponding transition matrix P\in\mathbb{R}^{n\times n} is orthogonal, i.e. P^{t}=P^{-1}, and as a consequence P^{-1}AP=P^tAP=D (diagonal). By the way, this transition matrix allows us to diagonalize two things: (i) the endomorphism determined by A; (ii) the quadratic form determined by A.
    Got it! The orthonormal diagonalisation (ie with normalisation) seems like a 'better' diagonalisation, since it allows us to achieve more with the orthogonal transition matrix P)))

    Since you mentioned the diagonalisation of quadratic forms, a follow-up question. I read somewhere (I cannot find it any more, but I made a note in my book and it kills me now) that, when I am trying to get rid of the cross-terms in a quadratic form by diagonalisation, I need to make sure that my transition matrix has determinant equal to 1, so that the transformation is merely a change of coordinates. I've been pondering this statement.

    Now I can link your statement (ii) - that this orthonormal transition matrix P allows us to diagonalise the quadratic form - to the fact that the determinant of my transition matrix (for diagonalising the quadratic form) should be 1. Would these two statements be connected? Sorry for a stupid question. This is my second month in the world of linear algebra.

  10. #10
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by Volga View Post
    Got it! The orthonormal diagonalisation (ie with normalisation) seems like a 'better' diagonalisation, since it allows us to achieve more with the orthogonal transition matrix P)))
    Right.

    Since you mentioned the diagonalisation of quadratic forms, a follow-up question. I read somewhere (I cannot find it any more, but I made a note in my book and it kills me now) that, when I am trying to get rid of the cross-terms in a quadratic form by diagonalisation, I need to make sure that my transition matrix has determinant equal to 1, so that the transformation is merely a change of coordinates. I've been pondering this statement.
    That is irrelevant. As P is orthogonal, \det (P)=\pm 1. Another thing is that (for example) if you are finding the canonical form of a conic: if you choose \det (P)=1, then P represents a rotation and preserves the orientation of the axes.

    Now I can link your statement (ii) - that this orthonormal transition matrix P allows us to diagonalise the quadratic form - to the fact that the determinant of my transition matrix (for diagonalising the quadratic form) should be 1. Would these two statements be connected? Sorry for a stupid question. This is my second month in the world of linear algebra.
    If I have understood correctly, I think my comment above solves your question.


    Fernando Revilla

  11. #11
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255
    Oh, I think it is all coming together - to make a 'rotation' one has to use orthonormal transition matrix - because linear transformation by an orthonormal matrix preserves the inner product which ensures that the shape of the curves of my form remains unchanged. And the orthonormal matrix always has determinant =/-1 (which is easy to prove using the property of determinant of the transpose matrix).

  12. #12
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by Volga View Post
    Oh, I think it is all coming together - to make a 'rotation' one has to use orthonormal transition matrix - because linear transformation by an orthonormal matrix preserves the inner product which ensures that the shape of the curves of my form remains unchanged.
    The usual name for a matrix P\in\mathbb{R}^{n\times n} such that P^t=P^{-1} is orthogonal matrix, not orthonormal matrix. Another thing is that P is orthogonal iff its columns form an orthonormal system with the usual inner product on \mathbb{R}^n. Certainly, the transformation y=Px preserves length and the usual inner product.


    And the orthonormal matrix always has determinant =/-1 (which is easy to prove using the property of determinant of the transpose matrix).

    Not true:

    P^t=P^{-1}\Rightarrow P^tP=I\Rightarrow|P^tP|=|P^t||P|=|P||P|=|P|^2=1\Rightarrow|P|=\pm 1

    For example

    P_1=\begin{bmatrix}\cos \alpha &-\sin \alpha \\ \sin \alpha &\cos \alpha \end{bmatrix},\;\;P_2=\begin{bmatrix}\cos \alpha &\sin \alpha \\ \sin \alpha &-\cos \alpha \end{bmatrix}

    are orthogonal matrices, |P_1|=1 (a rotation, preserves orientation) and |P_2|=-1 (a reflection, does not preserve orientation).
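    A numerical sketch of this example (assuming numpy; the angle is arbitrary):

```python
import numpy as np

a = 0.7  # arbitrary angle, in radians
P1 = np.array([[np.cos(a), -np.sin(a)],
               [np.sin(a),  np.cos(a)]])   # rotation
P2 = np.array([[np.cos(a),  np.sin(a)],
               [np.sin(a), -np.cos(a)]])   # reflection

for P in (P1, P2):
    assert np.allclose(P.T @ P, np.eye(2))  # both are orthogonal
assert np.isclose(np.linalg.det(P1),  1.0)  # preserves orientation
assert np.isclose(np.linalg.det(P2), -1.0)  # reverses orientation
```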


    Fernando Revilla

  13. #13
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255
    Right. I will not forget the definition of an orthogonal matrix now.
    I made a typo in |P|; I meant it to be \pm 1.
    So, to make a rotation, |P| should be 1 (and not -1).

  14. #14
    A Plied Mathematician
    Joined
    Jun 2010
    From
    CT, USA
    Posts
    6,318
    Thanks
    4
    Awards
    2
    Only symmetric matrices are orthogonally diagonalizable...
    Not true, actually, even in the real case. Normal matrices (in the real case, this just means that the matrix commutes with its transpose) are also orthogonally diagonalizable.

  15. #15
    MHF Contributor FernandoRevilla's Avatar
    Joined
    Nov 2010
    From
    Madrid, Spain
    Posts
    2,162
    Thanks
    45
    Quote Originally Posted by Ackbeet View Post
    Not true, actually, even in the real case. Normal matrices (in the real case, this just means that the matrix commutes with its transpose) are also orthogonally diagonalizable.

    It is true.

    If A\in\mathbb{R}^{n\times n} is orthogonally diagonalizable, there exists an orthogonal P\in\mathbb{R}^{n\times n} such that A=P^tDP with D\in\mathbb{R}^{n\times n} diagonal, so:

    A^t=(P^tDP)^t=P^tD^tP=P^tDP=A

    i.e. A is symmetric.
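    A numerical illustration of why the real case differs from the complex one (a sketch assuming numpy; the matrix is a made-up example): a rotation matrix is normal but not symmetric, and its eigenvalues are not real, so no real orthogonal P can diagonalise it.

```python
import numpy as np

# A real normal matrix that is not symmetric: a 90-degree rotation
A = np.array([[0., -1.], [1., 0.]])
assert np.allclose(A @ A.T, A.T @ A)   # normal
assert not np.allclose(A, A.T)         # not symmetric

# Its eigenvalues are purely imaginary, so it is unitarily diagonalizable
# over C but not orthogonally diagonalizable over R
w = np.linalg.eigvals(A)
assert np.all(np.abs(w.imag) > 0.5)    # eigenvalues are not real
```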


    Fernando Revilla


