# Math Help - Little exercise about eigenvalues

1. ## Little exercise about eigenvalues

Hi,

I'm taking a course that often uses linear algebra, and at a certain point it uses the fact that, given two symmetric real matrices A and B, both positive (semi)definite, AB has only real and non-negative eigenvalues.

I proved it, but my proof is too long; there should be a more direct one.

Here's my proof.

AB is a real matrix, so its eigenvalues and eigenvectors come in complex conjugate pairs; it is enough to show that every non-zero (possibly complex) eigenvalue is in fact real and positive.

* is for conjugate, ' is for transpose.

So let c be a complex non-zero eigenvalue, and the corresponding complex eigenvector v:

ABv=cv

Let z=Bv; z is non-zero, since Bv=0 would give ABv=0 and hence c=0. Then Az=cv and, taking conjugates, Az*=c*v*.

z*'Az is real and non-negative because A is positive semidefinite.

Since A'=A, we get z*'Az=cz*'v and also z*'Az=(Az*)'z=c*v*'z.

Moreover z*'v=v*'Bv=v*'z, which is real and non-negative because B is positive semidefinite. It is also non-zero: if z*'v=0 then z*'Az=cz*'v=0, which for positive semidefinite A forces Az=0, i.e. cv=0, contradicting the assumption that c is non-zero.

Therefore cz*'v=c*v*'z with z*'v=v*'z real and non-zero, so c=c* is real, and c=z*'Az/z*'v>=0; being non-zero, c is positive.
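The claim itself is easy to sanity-check numerically. Here is a short NumPy sketch (the random test matrices are my own, not from the course material):

```python
import numpy as np

def random_psd(n, rng):
    # M M' is symmetric positive semidefinite for any real M.
    M = rng.standard_normal((n, n))
    return M @ M.T

rng = np.random.default_rng(0)
for _ in range(100):
    A = random_psd(5, rng)
    B = random_psd(5, rng)
    eig = np.linalg.eigvals(A @ B)
    # Eigenvalues of AB should be real and non-negative (up to round-off).
    assert np.all(np.abs(eig.imag) < 1e-6)
    assert np.all(eig.real > -1e-6)
```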

2. Here's a proof that looks short and snappy. But it has to use a certain amount of machinery. I don't think you'll find a really elementary proof of this result – it certainly isn't obvious.

Fact 1. For any two nxn matrices P and Q, PQ and QP always have the same spectrum (spectrum = set of eigenvalues). There's a short proof of that here.
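Fact 1 can be spot-checked numerically; in this sketch P and Q are arbitrary random matrices, not assumed symmetric (equal characteristic polynomials means equal spectra, with multiplicities):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))
Q = rng.standard_normal((4, 4))

# np.poly returns the characteristic polynomial coefficients of a matrix.
cp_PQ = np.poly(P @ Q)
cp_QP = np.poly(Q @ P)

# Same characteristic polynomial, even though PQ != QP in general.
assert np.allclose(cp_PQ, cp_QP)
assert not np.allclose(P @ Q, Q @ P)
```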

Fact 2. A positive semidefinite matrix $A$ has a positive semidefinite square root $A^{1/2}$. Proof: A can be orthogonally diagonalised, $A = UDU'$ with $D$ diagonal and non-negative. Then $A^{1/2} = UD^{1/2}U'$, where $D^{1/2}$ is the diagonal matrix whose entries are the square roots of those of $D$.
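The square-root construction of Fact 2 translates directly into code; a sketch via NumPy's symmetric eigendecomposition:

```python
import numpy as np

def psd_sqrt(A):
    # Orthogonally diagonalise A = U diag(d) U', then take entrywise roots.
    d, U = np.linalg.eigh(A)
    d = np.clip(d, 0.0, None)        # guard against tiny negative round-off
    return U @ np.diag(np.sqrt(d)) @ U.T

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T                          # a random psd matrix
R = psd_sqrt(A)

assert np.allclose(R @ R, A)         # R really is a square root of A
assert np.allclose(R, R.T)           # and it is symmetric
assert np.all(np.linalg.eigvalsh(R) > -1e-10)   # and psd
```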

If A and B are positive semidefinite then the spectrum of $AB=A^{1/2}(A^{1/2}B)$ is the same as the spectrum of $A^{1/2}BA^{1/2}$ (by Fact 1). But $A^{1/2}BA^{1/2}$ is positive semidefinite, so its spectrum is real and non-negative.

3. Thanks!

Thank you also for giving me this very useful result I didn't know (that PQ and QP have the same eigenvalues).

4. Just another little question: if the spectrum of PQ is the same as the spectrum of QP (including algebraic and geometric multiplicities), does your proof also imply that AB can be diagonalized, since A^(1/2)BA^(1/2) can?

This would make the result even more powerful.

5. Originally Posted by rargh
Just another little question: so if the spectrum of PQ is the same of the spectrum of QP (including algebraic and geometric multiplicity)

That is not true, choose:

$P=\begin{bmatrix}{1}&{0}\\{0}&{0}\end{bmatrix}\quad Q=\begin{bmatrix}{0}&{1}\\{0}&{0}\end{bmatrix}$

Fernando Revilla

6. Ok, but in this counterexample by professor Revilla, PQ=Q, which has the single eigenvalue 0 with algebraic multiplicity 2, and QP=0, which is diagonal and also has eigenvalue 0 with algebraic multiplicity 2. However, the geometric multiplicity of 0 in PQ is 1, while in QP it is 2.
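The counterexample can be verified in a few lines (a NumPy sketch):

```python
import numpy as np

P = np.array([[1.0, 0.0], [0.0, 0.0]])
Q = np.array([[0.0, 1.0], [0.0, 0.0]])
PQ, QP = P @ Q, Q @ P

# Both products have only the eigenvalue 0 (algebraic multiplicity 2) ...
assert np.allclose(np.linalg.eigvals(PQ), 0.0)
assert np.allclose(np.linalg.eigvals(QP), 0.0)

# ... but the geometric multiplicities of 0 differ:
# dim Ker(PQ) = 2 - rank(PQ) = 1, while QP = 0 gives dim Ker(QP) = 2.
assert np.linalg.matrix_rank(PQ) == 1
assert np.linalg.matrix_rank(QP) == 0
```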

But I'd like to point out that if PQ has at least one non-zero eigenvalue, then:

let c be a nonzero eigenvalue of PQ and v a corresponding eigenvector.

Then, as in Opalg's reference, we see that:

PQv=cv, and Qv is not 0: otherwise PQv=0 would force c=0.

Then Q(PQ)v=cQv=(QP)(Qv),

therefore Qv is an eigenvector of QP with eigenvalue c.

Similarly, if w is an eigenvector of QP with non-zero eigenvalue, then Pw is an eigenvector of PQ with the same eigenvalue.

Let v_1,...,v_n be a basis of the eigenspace E(c,PQ).

Suppose Qv_1,...,Qv_n were linearly dependent. Then some non-trivial combination z=sum(a_i v_i) would satisfy Qz=0. But z is a non-zero vector of E(c,PQ), so PQz=cz is non-zero, while PQz=P(Qz)=0, which is absurd. So Qv_1,...,Qv_n are linearly independent vectors of E(c,QP), and we can state that:

dim[E(c,QP)]>=dim[E(c,PQ)]

We can similarly state, applying P to the eigenspaces of QP, that:

dim[E(c,PQ)]>=dim[E(c,QP)]

therefore dim[E(c,PQ)]=dim[E(c,QP)]

This doesn't apply if c=0: in the counterexample, dim[Ker(PQ)]=1 while dim[Ker(QP)]=2.
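The equality of geometric multiplicities at non-zero eigenvalues can be spot-checked numerically (a sketch; dim E(c, M) is computed as n - rank(M - cI)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))

def geom_mult(M, c):
    # dim E(c, M) = dim Ker(M - cI) = n - rank(M - cI)
    return n - np.linalg.matrix_rank(M - c * np.eye(n), tol=1e-8)

for c in np.linalg.eigvals(P @ Q):
    if abs(c) > 1e-8:   # the argument only covers non-zero eigenvalues
        assert geom_mult(P @ Q, c) == geom_mult(Q @ P, c)
```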

Now, using this result, we can state: if PQ is diagonalizable and rank(PQ)=rank(QP) (equivalently, dim[Ker(PQ)]=dim[Ker(QP)]), then QP is diagonalizable too.

7. It is true that the eigenspaces of the nonzero eigenvalues have the same dimension in PQ as in QP. The trouble in general is that the null spaces of PQ and QP (the eigenspaces corresponding to the zero eigenvalue) need not have the same dimension, as FernandoRevilla's example shows.

Originally Posted by rargh
if PQ is diagonalizable and it has at least one nonzero eigenvalue, then also QP is diagonalizable
Unfortunately that is not true either, as you can see by taking FernandoRevilla's example and tacking on a nonzero eigenvalue:

$P = \begin{bmatrix}1&0&0\\ 0&0&0\\ 0&0&1\end{bmatrix}, \quad Q = \begin{bmatrix}0&1&0\\ 0&0&0\\ 0&0&1\end{bmatrix}.$
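This 3x3 example can also be checked numerically. Note that with the matrices as labelled it is QP that comes out diagonalizable and PQ that does not, so read the roles of P and Q swapped to match the claim being refuted (a NumPy sketch):

```python
import numpy as np

P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
PQ, QP = P @ Q, Q @ P

# Both products have spectrum {0, 0, 1}. QP is diagonal, hence
# diagonalizable, but PQ is not: the eigenvalue 0 has geometric
# multiplicity 3 - rank(PQ) = 1, below its algebraic multiplicity 2.
assert np.allclose(QP, np.diag([0.0, 0.0, 1.0]))
assert np.linalg.matrix_rank(PQ) == 2
assert sorted(np.round(np.linalg.eigvals(PQ).real, 8)) == [0.0, 0.0, 1.0]
```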

However, it does look as though in the particular case where $P = AB$ and $Q = A^{1/2}BA^{1/2}$, it may be true that the two matrices have the same rank. At any rate, I can't find any counterexample to it and I have a feeling that it's true. If so, then it does follow that $AB$ is diagonalisable.

8. Ok, thank you for all the useful corrections; I think I can finally state it properly.

So we have seen that, given that PQ is diagonalizable, the necessary and sufficient condition for QP to be diagonalizable is dim[Ker(PQ)]=dim[Ker(QP)] (which failed in all the counterexamples), or equivalently rank(PQ)=rank(QP).

Proof: If PQ is diagonalizable, and V indicates the whole finite dimensional vector space on which it operates, then, indicating with c_i the distinct non zero eigenvalues and Ker(PQ) the null space of PQ,

sum_i dim[E(c_i,PQ)] + dim[Ker(PQ)] = dim(V)

This equation is the necessary and sufficient condition for PQ to be diagonalizable.

Since the eigenspaces for non-zero eigenvalues have the same dimension for PQ and QP, and dim[Ker(PQ)]=dim[Ker(QP)], the equation is also satisfied for QP, which is therefore diagonalizable.

(The assumption of equality of the dimensions of the null spaces is independent of the assumption that PQ is diagonalizable.)

Now, in our special case, let's check if they have the same rank.

PQ=AB

QP=A^(1/2)BA^(1/2)

We can see that, since (AB)'=BA (both A and B being symmetric), rank(AB)=rank(BA).

Then we know that rank(BA)=rank(BA^(1/2)), since Ker(A^(1/2))=Ker(A) and Image(A)=Image(A^(1/2)).

Now we have to check that rank(A^(1/2)BA^(1/2))=rank(BA^(1/2)).

Suppose BA^(1/2)v=w, where w is non-zero, but A^(1/2)BA^(1/2)v=0.

Multiplying on the left by v' gives v'A^(1/2)BA^(1/2)v=(A^(1/2)v)'B(A^(1/2)v)=0; since B is positive semidefinite, this forces BA^(1/2)v=0, against the assumption. Therefore rank(A^(1/2)BA^(1/2))=rank(BA^(1/2))=...=rank(AB).
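The whole rank chain can be spot-checked numerically; here is a sketch using rank-deficient random psd matrices (so the kernels are non-trivial), with the square root built as in Fact 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def psd_with_kernel(n, r):
    # random psd matrix of rank r < n, so Ker is non-trivial
    M = rng.standard_normal((n, r))
    return M @ M.T

def psd_sqrt(A):
    d, U = np.linalg.eigh(A)
    return U @ np.diag(np.sqrt(np.clip(d, 0.0, None))) @ U.T

n = 6
A = psd_with_kernel(n, 4)
B = psd_with_kernel(n, 4)
R = psd_sqrt(A)                      # R = A^(1/2)

rank = lambda M: np.linalg.matrix_rank(M, tol=1e-8)

# rank(AB) = rank(BA) = rank(B A^(1/2)) = rank(A^(1/2) B A^(1/2))
assert rank(A @ B) == rank(B @ A) == rank(B @ R) == rank(R @ B @ R)
```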

I hope I finally got it right! Thank you for all your assistance and patience.

9. Yes, that looks good to me. The crucial point is that Ker(A^(1/2))=Ker(A) for a positive semidefinite matrix A.