# matrix diagonalisation - real and complex matrices - some observations

• January 9th 2011, 05:42 AM
Volga
matrix diagonalisation - real and complex matrices - some observations
I am revising Linear Algebra and trying to put together a rule for deciding whether a matrix is diagonalisable (a short, simple rule that can be consulted immediately when solving a problem).

This is what I came up with, based on my experience solving questions and comparing against model answers.

Real matrices.

Only symmetric matrices are orthogonally diagonalisable - by any orthogonal matrix whose column vectors are eigenvectors of the original (symmetric) matrix.

No need for the Gram-Schmidt orthonormalisation process here, as these vectors are not required to be orthonormal.

Complex matrices.

Only normal matrices are unitarily diagonalisable - by any unitary matrix whose column vectors are eigenvectors of the original (normal) matrix.

To unitarily diagonalise a normal matrix, one needs to find an orthonormal basis (i.e. apply the Gram-Schmidt process if necessary and normalise the basis vectors).
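These rules can be checked numerically. A minimal sketch using numpy (the matrix `A` here is just an illustrative choice, not one from the thread): `np.linalg.eigh` takes a real symmetric matrix and returns eigenvalues together with an orthogonal matrix of already-normalised eigenvector columns.

```python
import numpy as np

# Hypothetical example: a real symmetric matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues and a matrix P whose columns are
# normalised eigenvectors of A; for symmetric A, P is orthogonal.
eigvals, P = np.linalg.eigh(A)

# P orthogonal: P^T P = I, so P^T A P is the diagonal of eigenvalues.
print(np.allclose(P.T @ P, np.eye(2)))         # True
print(np.allclose(P.T @ A @ P, np.diag(eigvals)))  # True
```

Note that `eigh` does the normalisation for you; whether that step is needed by hand is exactly the question discussed below.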

Is this correct?
• January 9th 2011, 06:17 AM
FernandoRevilla
Quote:

Originally Posted by Volga
... by any orthogonal matrix whose column vectors are eigenvectors of the original (symmetric) matrix.

And the eigenvectors form an orthonormal basis.

Quote:

No need for the Gram-Schmidt orthonormalisation process here, as these vectors are not required to be orthonormal.

If an eigenspace $V$ has dimension at least 2, two vectors of $V$ are not necessarily orthogonal, so you'll need Gram-Schmidt or another tool.

Quote:

Only normal matrices are unitarily diagonalisable - by any unitary matrix whose column vectors are eigenvectors of the original (normal) matrix.

And the eigenvectors form an orthonormal basis.

Fernando Revilla
• January 9th 2011, 07:56 AM
HallsofIvy
Quote:

Originally Posted by FernandoRevilla
And the eigenvectors form an orthonormal basis.

If an eigenspace $V$ has dimension at least 2, two vectors of $V$ are not necessarily orthogonal, so you'll need Gram-Schmidt or another tool.

For real symmetric (self-adjoint) matrices, eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal, so you only need to worry about orthogonality for eigenvectors corresponding to the same eigenvalue.
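This is the situation where Gram-Schmidt earns its keep. A small sketch (the matrix and vectors are illustrative choices of mine): a symmetric matrix with a repeated eigenvalue can have perfectly valid eigenvectors, inside the same eigenspace, that are not orthogonal to each other, and Gram-Schmidt (done here via a QR factorisation) fixes that.

```python
import numpy as np

# Hypothetical symmetric matrix with repeated eigenvalue: spectrum (1, 1, 2).
A = np.diag([1.0, 1.0, 2.0])

# Two valid eigenvectors for the eigenvalue 1 that are NOT orthogonal:
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
assert np.allclose(A @ v1, v1) and np.allclose(A @ v2, v2)
print(v1 @ v2)   # 1.0 -- nonzero, so not orthogonal

# Gram-Schmidt within the eigenspace (QR does exactly this):
Q, _ = np.linalg.qr(np.column_stack([v1, v2]))
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns now orthonormal
```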

• January 9th 2011, 08:15 AM
FernandoRevilla
Quote:

Originally Posted by HallsofIvy
For real symmetric (self-adjoint) matrices, eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal, so you only need to worry about orthogonality for eigenvectors corresponding to the same eigenvalue.

Right, for that reason I only commented about $\dim V\geq 2$ .

Fernando Revilla
• January 9th 2011, 01:55 PM
Volga
Right. I understand that eigenvectors (corresponding to distinct eigenvalues) are always pairwise orthogonal, but are they always orthonormal (i.e. does each one have length 1)?

I am asking because sometimes I see the answer include a normalisation step and sometimes not (again, assuming the vectors are already orthogonal).
• January 9th 2011, 04:04 PM
FernandoRevilla
Quote:

Originally Posted by Volga
... but are they always orthonormal (i.e. does each one have length 1)?

Not necessarily.

Quote:

I am asking because sometimes I see the answer includes normalisation step, sometimes it does not (again, assuming they are orthogonal already).

The normalisation step is necessary. Take into account that

$U\in\mathbb{K}^{n\times n}\;\;(\mathbb{K}=\mathbb{C}\textrm{ or }\mathbb{R})$

is unitary if and only if its columns form an orthonormal system with the usual inner product.
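This criterion is easy to test numerically: $U$ is unitary iff $U^*U=I$, which is the same as saying the columns are orthonormal under the complex inner product. A sketch with a hypothetical unitary matrix of my own choosing:

```python
import numpy as np

# A hypothetical complex unitary matrix: columns chosen to be orthonormal.
t = 0.3
U = np.array([[np.cos(t),    -np.sin(t)],
              [1j*np.sin(t), 1j*np.cos(t)]])

# Unitary <=> columns form an orthonormal system <=> U* U = I,
# where U* is the conjugate transpose.
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
```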

Fernando Revilla
• January 9th 2011, 04:25 PM
Volga
I understand that using normalised vectors is a must in unitary diagonalisation. But that is used with complex matrices, right?

But say I have a real symmetric matrix I need to diagonalise. I find the eigenvalues and they are all distinct -> the eigenvectors are pairwise orthogonal. Do I still need to normalise them, or can I skip it? What exactly does normalisation add in this situation?
• January 9th 2011, 09:33 PM
FernandoRevilla
Quote:

Originally Posted by Volga
I understand that using normalised vectors is a must in unitary diagonalisation. But that is used with complex matrices, right?

Right, but also for real matrices if you want a unitary diagonalization. Bear in mind that $U\in\mathbb{R}^{n\times n}$ is unitary iff it is orthogonal. So, in $\mathbb{R}$, unitary diagonalization is the same as orthogonal diagonalization.

Quote:

But say I have a real symmetric matrix I need to diagonalise. I find the eigenvalues and they are all distinct -> the eigenvectors are pairwise orthogonal. Do I still need to normalise them, or can I skip it? What exactly does normalisation add in this situation?

Normalization adds that the corresponding transition matrix $P\in\mathbb{R}^{n\times n}$ is orthogonal, i.e. $P^{t}=P^{-1}$, and as a consequence $P^{-1}AP=P^tAP=D$ (diagonal). By the way, this transition matrix allows us to diagonalize two things: (i) the endomorphism determined by $A$; (ii) the quadratic form determined by $A$.
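This is worth seeing numerically. In the sketch below (my own illustrative symmetric matrix, eigenvalues 1 and 3), the unnormalised eigenvector matrix still diagonalises $A$ by similarity, but it is not orthogonal; dividing each column by its length is exactly what makes $P^t=P^{-1}$ hold.

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Unnormalised (though mutually orthogonal) eigenvectors as columns:
P = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

# Similarity still diagonalises A, but P is NOT orthogonal yet:
print(np.allclose(np.linalg.inv(P) @ A @ P, np.diag([1.0, 3.0])))  # True
print(np.allclose(P.T, np.linalg.inv(P)))                          # False

# Normalise the columns -> P becomes orthogonal, so P^T = P^{-1}:
P = P / np.linalg.norm(P, axis=0)
print(np.allclose(P.T, np.linalg.inv(P)))                          # True
print(np.allclose(P.T @ A @ P, np.diag([1.0, 3.0])))               # True
```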

Fernando Revilla
• January 9th 2011, 10:35 PM
Volga
Quote:

Originally Posted by FernandoRevilla
Normalization adds that the corresponding transition matrix $P\in\mathbb{R}^{n\times n}$ is orthogonal, i.e. $P^{t}=P^{-1}$, and as a consequence $P^{-1}AP=P^tAP=D$ (diagonal). By the way, this transition matrix allows us to diagonalize two things: (i) the endomorphism determined by $A$; (ii) the quadratic form determined by $A$.

Got it! The orthonormal diagonalisation (i.e. with normalisation) seems like a 'better' diagonalisation, since it allows us to achieve more with the orthogonal transition matrix P)))

Since you mentioned diagonalisation of quadratic forms, a follow-up question. I read somewhere (I cannot find it any more, but I made a note in my book and it kills me now) that, when I am trying to get rid of the cross-terms in a quadratic form by diagonalisation, I need to make sure that my transition matrix has determinant equal to 1, so that the transformation is merely a change of coordinates. I've been pondering this statement.

Now I can link your statement (ii) (this orthonormal transition matrix P allows us to diagonalise the quadratic form) to the fact that the determinant of my transition matrix (for the quadratic diagonalisation) should be 1. Are these two statements connected? Sorry for a stupid question; this is my second month in the world of linear algebra.
• January 10th 2011, 12:29 AM
FernandoRevilla
Quote:

Originally Posted by Volga
Got it! The orthonormal diagonalisation (i.e. with normalisation) seems like a 'better' diagonalisation, since it allows us to achieve more with the orthogonal transition matrix P)))

Right.

Quote:

Since you mentioned diagonalisation of quadratic forms, a follow-up question. I read somewhere (I cannot find it any more, but I made a note in my book and it kills me now) that, when I am trying to get rid of the cross-terms in a quadratic form by diagonalisation, I need to make sure that my transition matrix has determinant equal to 1, so that the transformation is merely a change of coordinates. I've been pondering this statement.

That is irrelevant: as $P$ is orthogonal, $\det (P)=\pm 1$ in any case. Another matter is that (for example) when finding the canonical form of a conic, choosing $\det (P)=1$ means $P$ represents a rotation and keeps the orientation of the axes.

Quote:

Now I can link your statement (ii) (this orthonormal transition matrix P allows us to diagonalise the quadratic form) to the fact that the determinant of my transition matrix (for the quadratic diagonalisation) should be 1. Are these two statements connected? Sorry for a stupid question; this is my second month in the world of linear algebra.

If I have understood you correctly, I think my comment above answers your question.

Fernando Revilla
• January 10th 2011, 05:34 AM
Volga
Oh, I think it is all coming together - to make a 'rotation' one has to use an orthonormal transition matrix, because a linear transformation by an orthonormal matrix preserves the inner product, which ensures that the shape of the curves of my form remains unchanged. And the orthonormal matrix always has determinant =/-1 (which is easy to prove using the property of the determinant of the transpose matrix).
• January 10th 2011, 08:19 AM
FernandoRevilla
Quote:

Originally Posted by Volga
Oh, I think it is all coming together - to make a 'rotation' one has to use an orthonormal transition matrix, because a linear transformation by an orthonormal matrix preserves the inner product, which ensures that the shape of the curves of my form remains unchanged.

The usual name for a matrix $P\in \mathbb{R}^{n\times n}$ such that $P^t=P^{-1}$ is orthogonal matrix, not orthonormal matrix. Another thing is that $P$ is orthogonal iff its columns form an orthonormal system with the usual inner product on $\mathbb{R}^n$. Certainly, the transformation $y=Px$ preserves length and the usual inner product.

Quote:

And the orthonormal matrix always has determinant =/-1 (which is easy to prove using the property of the determinant of the transpose matrix).

Not true:

$P^{-1}=P^t\Rightarrow P^tP=I\Rightarrow|P^tP|=|P^t||P|=|P||P|=|P|^2=1\Rightarrow|P|=\pm 1$

For example

$P_1=\begin{bmatrix}\cos \alpha &-\sin \alpha \\ \sin \alpha &\cos \alpha \end{bmatrix},\;\;P_2=\begin{bmatrix}\cos \alpha &\sin \alpha \\ \sin \alpha &-\cos \alpha \end{bmatrix}$

are orthogonal matrices, $|P_1|=1$ (rotation, preserves orientation) and $|P_2|=-1$ (symmetry, does not preserve orientation).
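A quick numerical check of exactly these two matrices (any angle works; 0.7 below is an arbitrary choice of mine): both are orthogonal, but only the rotation has determinant $+1$.

```python
import numpy as np

a = 0.7  # arbitrary angle
P1 = np.array([[np.cos(a), -np.sin(a)],
               [np.sin(a),  np.cos(a)]])   # rotation
P2 = np.array([[np.cos(a),  np.sin(a)],
               [np.sin(a), -np.cos(a)]])   # reflection (symmetry)

# Both satisfy P^T P = I, i.e. both are orthogonal...
print(np.allclose(P1.T @ P1, np.eye(2)))   # True
print(np.allclose(P2.T @ P2, np.eye(2)))   # True
# ...but only the rotation preserves orientation:
print(round(np.linalg.det(P1)))   # 1
print(round(np.linalg.det(P2)))   # -1
```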

Fernando Revilla
• January 10th 2011, 06:45 PM
Volga
Right. I will not forget the definition of the orthogonal matrix now.
I made a typo in |P|, I meant it to be $\pm1$.
So, to make a rotation, |P| should be 1 (and not -1).
• January 11th 2011, 07:18 AM
Ackbeet
Quote:

Only symmetric matrices are orthogonally diagonalizable...

Not true, actually, even in the real case. Normal matrices (in the real case, this just means that the matrix commutes with its transpose) are also orthogonally diagonalizable.
• January 11th 2011, 08:19 AM
FernandoRevilla
Quote:

Originally Posted by Ackbeet
Not true, actually, even in the real case. Normal matrices (in the real case, this just means that the matrix commutes with its transpose) are also orthogonally diagonalizable.

It is true.

If $A\in\mathbb{R}^{n\times n}$ is orthogonally diagonalizable, there exists $P\in\mathbb{R}^{n\times n}$ orthogonal such that $A=P^tDP$ with $D\in\mathbb{R}^{n\times n}$ diagonal, so:

$A^t=(P^tDP)^t=P^tD^tP=P^tDP=A$

i.e. $A$ is symmetric.
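A numerical sanity check of this implication (a sketch; the random orthogonal $P$ and diagonal $D$ are generated from an arbitrary seed): any matrix of the form $P^tDP$ with $P$ orthogonal and $D$ diagonal comes out symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random orthogonal P via QR of a random matrix,
# and a random diagonal D.
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))
D = np.diag(rng.standard_normal(4))

# A = P^T D P must be symmetric, matching the proof above:
A = P.T @ D @ P
print(np.allclose(A, A.T))   # True
```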

Fernando Revilla