# Math Help - Similar matrices

1. ## Similar matrices

Are the following pairs of matrices similar:

$\begin{pmatrix}
{-2}&{1}&{0}\\
{0}&{-2}&{1}\\
{0}&{0}&{-1}
\end{pmatrix}$
and $\begin{pmatrix}
{-2}&{0}&{0}\\
{0}&{-2}&{0}\\
{0}&{-1}&{-1}
\end{pmatrix}$
?
I found the eigenvalues of both matrices. The characteristic polynomial of both matrices is $(\lambda+2)^2(-1-\lambda)$.

Hence the eigenvalues are -2 and -1 for both matrices.
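
This is easy to check numerically; here is a quick sketch using NumPy (not part of the original question):

```python
import numpy as np

A = np.array([[-2, 1, 0],
              [0, -2, 1],
              [0, 0, -1]])
B = np.array([[-2, 0, 0],
              [0, -2, 0],
              [0, -1, -1]])

# Both matrices are triangular, so the eigenvalues are just the
# diagonal entries: -2 (with multiplicity 2) and -1 in each case.
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.linalg.eigvals(B)))
```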

Since the eigenvalues are the same for both matrices, I'm fairly sure they're similar. However, the presence of the repeated eigenvalue $-2$ makes me think they might not be.

Can anyone clarify this situation for me?

2. You are right to be hesitant. Having the same spectrum is a necessary but not sufficient condition for similarity.

To show these two aren't similar, let $A$ be the first matrix and $B$ be the second, and suppose $A=SBS^{-1}$ for some invertible $S$.
Then we can shift both sides by a scalar matrix and preserve similarity: for any scalar $\alpha$, $S(B-\alpha I)S^{-1}=SBS^{-1}-S\alpha IS^{-1}=A-\alpha I$.
In particular, no matter how we shift the diagonal, the ranks on both sides of the equation must agree, since similarity preserves rank.
But if we take $\alpha=-2$, then $A+2I$ has rank 2 while $B+2I$ has rank 1, so we reach a contradiction and the two matrices are not similar.
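
These two ranks are easy to verify numerically; a sketch using NumPy, with $A$ and $B$ as in the original question:

```python
import numpy as np

A = np.array([[-2, 1, 0],
              [0, -2, 1],
              [0, 0, -1]])
B = np.array([[-2, 0, 0],
              [0, -2, 0],
              [0, -1, -1]])
I = np.eye(3)

# Shifting by the repeated eigenvalue -2 exposes the difference:
# similarity would force these ranks to agree, but they do not.
print(np.linalg.matrix_rank(A + 2 * I))  # 2
print(np.linalg.matrix_rank(B + 2 * I))  # 1
```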

If you are curious where this type of argument comes from, look up the Jordan canonical form. It is essentially the "best" representative of each similarity equivalence class: its structure makes eigenvalue and eigenvector degeneracies obvious, and rank arguments like the one above are essential to understanding that structure.
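
For instance, SymPy can compute the Jordan forms of both matrices directly (a sketch assuming SymPy is available; this computation is not part of the original discussion):

```python
from sympy import Matrix

A = Matrix([[-2, 1, 0], [0, -2, 1], [0, 0, -1]])
B = Matrix([[-2, 0, 0], [0, -2, 0], [0, -1, -1]])

# jordan_form() returns (P, J) with the matrix equal to P*J*P**-1.
_, JA = A.jordan_form()
_, JB = B.jordan_form()

# A has a single 2x2 Jordan block for -2, so JA is not diagonal;
# B is diagonalizable, so JB is diag(-2, -2, -1) up to block order.
print(JA.is_diagonal(), JB.is_diagonal())
```

Two matrices are similar exactly when their Jordan forms agree up to reordering of the blocks; here they do not.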

3. I understand what you've done for the first part; it's just that I get a little hazy about what's going on towards the end =S

Why does the rank of $B+2I_3$ remain unchanged when it is multiplied by $S^{-1}$ and $S$?

4. This is because $S$, being invertible, has full rank.
Qualitatively speaking, multiplying a matrix by a full-rank matrix corresponds to a sequence of row or column operations.
More explicitly, the rank of an $n \times n$ matrix is $n-k$, where $k$ is the dimension of the kernel, right?
Now let $M$ be an invertible $n \times n$ matrix and let $C$ be any $n \times n$ matrix. The kernel of $C$ lies in the kernel of $MC$, right? ($Cx=0$ implies $MCx=M(0)=0$.)
Furthermore, no additional vectors can map to zero: $MCx=0$ implies $Cx=M^{-1}(0)=0$, because $M$ is invertible.
Thus the two kernels are in fact the same, and the rank of $MC$ is the rank of $C$.
A similar argument yields the same thing for multiplication on the right by a full-rank matrix.
Therefore what we are using is that the rank of $MCN$ equals the rank of $C$ for any choice of full-rank matrices $M, N$; in our case, the rank of $S(B+2I)S^{-1}$ equals the rank of $B+2I$.
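
This invariance is easy to sanity-check numerically; here is a sketch using NumPy with a rank-1 matrix and two hand-picked invertible factors (none of these matrices are from the thread):

```python
import numpy as np

# A rank-1 matrix, and two invertible matrices (det 7 and det 3).
C = np.outer([1, 2, 3], [4, 5, 6])
M = np.array([[1, 2, 0],
              [0, 1, 3],
              [1, 0, 1]])
N = np.array([[2, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])

# Multiplying by full-rank matrices leaves the rank unchanged.
print(np.linalg.matrix_rank(C))          # 1
print(np.linalg.matrix_rank(M @ C @ N))  # 1
```
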
Does that clarify the argument?