# Thread: Test for diagonalizability of matrix and linear operator

1. ## Test for diagonalizability of matrix and linear operator

I am using the book "Linear Algebra 4th Edition" by Stephen H. Friedberg, Arnold J. Insel, and Lawrence E. Spence. It gives this corollary: "Let T be a linear operator on an n-dimensional vector space V. If T has n distinct eigenvalues, then T is diagonalizable."

I am attaching a scan of my work, and I am wondering whether a linear operator can be interpreted the same way a matrix can. If so, does that mean each eigenvalue has to be a different number (i.e., the eigenvalues must be "distinct")? I got 3, 3, and -1 as the eigenvalues for the first matrix I tried, then computed the eigenvectors, which in turn became the column vectors of the matrix Q. I found Q to be non-invertible and got stuck there. Thanks for the help.

2. In finite-dimensional vector spaces, which is what you have, linear operators ARE matrices (at least, there's a bijection that preserves all the necessary mathematical properties). The same cannot be said about infinite-dimensional spaces.

Having n distinct eigenvalues is a sufficient, but not necessary, condition for diagonalizability. The necessary and sufficient condition is that, for every distinct eigenvalue, the corresponding eigenspace has the same dimension as the algebraic multiplicity of the eigenvalue. You can look up all those terms on the English Wikipedia.
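The sufficient-but-not-necessary point can be checked numerically. A minimal sketch (the two 2x2 matrices below are illustrative examples, not from the attached work): both have the single eigenvalue 1 with algebraic multiplicity 2, but only one of them is diagonalizable, because only one has an eigenspace of matching dimension.

```python
import numpy as np

# Two 2x2 matrices, each with the single eigenvalue 1 (algebraic multiplicity 2):
I2 = np.array([[1.0, 0.0],
               [0.0, 1.0]])   # diagonalizable (it is already diagonal)
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # a Jordan block: NOT diagonalizable

def geometric_multiplicity(A, lam):
    """Dimension of the eigenspace for lam: n - rank(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Diagonalizable iff geometric multiplicity equals algebraic multiplicity (here, 2).
print(geometric_multiplicity(I2, 1.0))  # 2 -> diagonalizable
print(geometric_multiplicity(J, 1.0))   # 1 < 2 -> not diagonalizable
```

Neither matrix has two distinct eigenvalues, so the corollary from Friedberg/Insel/Spence says nothing about them; the eigenspace-dimension test settles both cases.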

3. A linear transformation from an n-dimensional vector space to itself is "diagonalizable" if and only if there exist n linearly independent eigenvectors, no matter how many distinct eigenvalues there are. Ackbeet said that "In finite-dimensional vector spaces, linear operators ARE matrices (at least, there's a bijection that preserves all the necessary mathematical properties)". That is certainly true, but I would say, rather, that any linear transformation from an n-dimensional vector space to an m-dimensional vector space can be represented by an m by n matrix.
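The matrix-representation point above can be illustrated with a quick sketch (a hypothetical example, not from the thread): once bases are fixed, a linear map from R^3 to R^2 is a 2 by 3 matrix, and applying the map is a matrix-vector product.

```python
import numpy as np

# T: R^3 -> R^2 defined by T(x, y, z) = (x, y) (projection onto the first
# two coordinates). With respect to the standard bases, T is this 2x3 matrix:
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

v = np.array([2.0, 5.0, 7.0])
print(M @ v)  # [2. 5.]
```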

4. Originally Posted by Ackbeet
In finite-dimensional vector spaces, which is what you have, linear operators ARE matrices (at least, there's a bijection that preserves all the necessary mathematical properties). The same cannot be said about infinite-dimensional spaces.

Having n distinct eigenvalues is a sufficient, but not necessary, condition for diagonalizability. The necessary and sufficient condition is that, for every distinct eigenvalue, the corresponding eigenspace has the same dimension as the algebraic multiplicity of the eigenvalue. You can look up all those terms on the English Wikipedia.

I know these terms because I passed the first and second Linear Algebra courses I took, but they aren't fresh in my mind; I would have to do a lot of review. All I wanted was for someone to inspect the images I attached and tell me whether I am using the right approach to arrive at an equation of the form Q^-1 A Q = D, where D is a diagonal matrix. I got as far as trying to invert Q, but couldn't.

5. In looking at your work, there's nothing wrong except one thing: you can find another eigenvector associated with the t = 3 eigenvalue. And you'll need that in order to diagonalize the matrix.

6. Originally Posted by Ackbeet
In looking at your work, there's nothing wrong except one thing: you can find another eigenvector associated with the t = 3 eigenvalue. And you'll need that in order to diagonalize the matrix.
I don't see the other eigenvector associated with the t = 3 eigenvalue. If you can tell me what that eigenvector is, then I'll be able to take it from there. I do have a bit of confusion about how to derive eigenvectors, which I will see my professor about (i.e. x = y, y = y, z = 0, and you remove the y to get the eigenvector (1, 1, 0)). Where's the other eigenvector? Thanks.

7. Originally Posted by Undefdisfigure
I don't see the other eigenvector associated with the t= 3 eigenvalue. If you can tell me what that eigenvector is, then I'll be able to take it from there. I do have a bit of confusion about how to derive eigenvectors which I will see my professor about (i.e. x = y, y = y, z = 0 and you remove the y to get the eigenvector ( 1 1 0)). Where's the other eigenvector? Thanks.
We have:

$E_3 \equiv \begin{cases}4x-4y=0\\ 8x-8y=0\\ 6x-6y=0\end{cases} \sim \begin{cases}x-y=0\end{cases}$

A basis of $E_3$ is:

$B=\{(1,1,0),(0,0,1)\}$

Fernando Revilla
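The full diagonalization can be verified numerically. A sketch, under an assumption: the matrix A below is inferred from the system defining $E_3$ above (its rows say A - 3I has rows (4, -4, 0), (8, -8, 0), (6, -6, 0)), which should match the matrix in the attachment if that inference is right. The third column of Q is an eigenvector for the eigenvalue -1, worked out from (A + I)v = 0.

```python
import numpy as np

# Inferred from the E_3 system: A - 3I = [[4,-4,0],[8,-8,0],[6,-6,0]],
# so (assuming this matches the attached work):
A = np.array([[7.0, -4.0, 0.0],
              [8.0, -5.0, 0.0],
              [6.0, -6.0, 3.0]])

# Columns of Q: the basis {(1,1,0), (0,1,1)-style} vectors of E_3 given above,
# plus (2, 4, 3), which solves (A + I)v = 0, i.e. 8x - 4y = 0 and 6x - 6y + 4z = 0.
Q = np.array([[1.0, 0.0, 2.0],
              [1.0, 0.0, 4.0],
              [0.0, 1.0, 3.0]])

D = np.linalg.inv(Q) @ A @ Q
print(np.round(D, 10))  # approximately diag(3, 3, -1)
```

This Q is invertible precisely because the eigenvalue 3 contributes a two-dimensional eigenspace; using (1, 1, 0) alone, as in the original attempt, leaves Q one column short of independence.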