# find nondiagonal matrix whose eigenvalues and eigenvectors are given

• Apr 23rd 2010, 04:33 PM
superdude
[RESOLVED] find nondiagonal matrix whose eigenvalues and eigenvectors are given
Find a 2x2 nondiagonal matrix whose eigenvalues are 2 and -3 and associated eigenvectors are
$\displaystyle \left[ {\begin{array}{c} -1 \\ 2 \\ \end{array} } \right]$ and $\displaystyle \left[ {\begin{array}{c} 1 \\ 1 \\ \end{array} } \right]$ respectively.

Let $\displaystyle D=\left[ {\begin{array}{cc} 2 & 0 \\ 0 & -3 \\ \end{array} } \right]$ and $\displaystyle P=\left[ {\begin{array}{cc} -1 & 1\\ 2 & 1 \\ \end{array} } \right]$
Then $\displaystyle P^{-1}AP=D$ so $\displaystyle A =PDP^{-1}=\frac{1}{3} \left[ {\begin{array}{cc} -4 & -5 \\ -10 & 1 \\ \end{array} } \right]$

I don't see the intuition behind the first part. How do they know to come up with D and P just by putting together the eigenvectors and eigenvalues?
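The arithmetic in the worked solution can be verified numerically; a minimal sketch using NumPy (not part of the standard method, just a check):

```python
import numpy as np

# Eigenvalues on the diagonal of D, eigenvectors as the columns of P
D = np.diag([2.0, -3.0])
P = np.array([[-1.0, 1.0],
              [ 2.0, 1.0]])

# Reconstruct A = P D P^{-1}
A = P @ D @ np.linalg.inv(P)
print(A)  # (1/3) * [[-4, -5], [-10, 1]]

# Check: A sends each eigenvector to eigenvalue * eigenvector
v1, v2 = P[:, 0], P[:, 1]
print(np.allclose(A @ v1, 2 * v1))   # True
print(np.allclose(A @ v2, -3 * v2))  # True
```

Note that A is nondiagonal, as the problem requires, even though it is built from the diagonal D.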
• Apr 24th 2010, 01:37 AM
tonio
Quote:

Originally Posted by superdude
Find a 2x2 nondiagonal matrix whose eigenvalues are 2 and -3 and associated eigenvectors are
$\displaystyle \left[ {\begin{array}{c} -1 \\ 2 \\ \end{array} } \right]$ and $\displaystyle \left[ {\begin{array}{c} 1 \\ 1 \\ \end{array} } \right]$ respectively.

Let $\displaystyle D=\left[ {\begin{array}{cc} 2 & 0 \\ 0 & -3 \\ \end{array} } \right]$ and $\displaystyle P=\left[ {\begin{array}{cc} -1 & 1\\ 2 & 1 \\ \end{array} } \right]$
Then $\displaystyle P^{-1}AP=D$ so $\displaystyle A =PDP^{-1}=\frac{1}{3} \left[ {\begin{array}{cc} -4 & -5 \\ -10 & 1 \\ \end{array} } \right]$

I don't see the intuition behind the first part. How do they know to come up with D and P just by putting together the eigenvectors and eigenvalues?

We know that if $\displaystyle A$ is a square matrix whose eigenvectors form a basis of the given vector space, then there exists an invertible matrix $\displaystyle P$, whose columns are eigenvectors of $\displaystyle A$, s.t. $\displaystyle P^{-1}AP=D$ is a diagonal matrix whose diagonal elements are precisely the eigenvalues of $\displaystyle A$ (all this is classic theory, not something new). Well, they just did the whole process backwards...(Wink)

Tonio
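Tonio's "backwards process" can be illustrated with a minimal NumPy sketch: start from the matrix A found above, let `np.linalg.eig` recover the eigenvalues, and conjugate by the eigenvector matrix to reproduce a diagonal D (the classic forward direction):

```python
import numpy as np

A = np.array([[-4.0, -5.0],
              [-10.0, 1.0]]) / 3

# np.linalg.eig returns the eigenvalues and unit eigenvectors (as columns)
vals, vecs = np.linalg.eig(A)
print(sorted(vals))  # eigenvalues -3 and 2

# The classic diagonalization direction: P^{-1} A P = D
P = vecs
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diagonal, with the eigenvalues on the diagonal
```

The eigenvectors returned by `np.linalg.eig` are normalized to unit length, so they differ from the ones in the problem by a scalar factor; scaling an eigenvector does not change the eigenvalue, so any such P works.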
• Apr 24th 2010, 03:39 PM
superdude
Quote:

Originally Posted by tonio
We know that if $\displaystyle A$ is a square matrix which eigenvectors form a basis of the given vector space, then there exists an invertible matrix $\displaystyle P$ , which its

columns are eigenvectors of $\displaystyle A$

Tonio

How do you know which eigenvector goes in which column? Does it matter? So in my example, if I found the eigenvectors, how would I know it wouldn't be $\displaystyle D=\left[ {\begin{array}{cc} 0 & 2 \\ -3 & 0 \\ \end{array} } \right]$
Is it because we start with the smallest value of $\displaystyle \lambda$ and work our way up, and the eigenvectors found get put into the matrix D left to right? If this is true, then what happens if 2 eigenvectors are associated with one eigenvalue (is this even possible)?
• Apr 24th 2010, 06:22 PM
tonio
Quote:

Originally Posted by superdude
How do you know which eigenvector goes in which column? Does it matter? So in my example, if I found the eigenvectors, how would I know it wouldn't be $\displaystyle D=\left[ {\begin{array}{cc} 0 & 2 \\ -3 & 0 \\ \end{array} } \right]$

This matrix isn't diagonal, so it cannot be our D in this case, and if the first eigenvalue appears first in D, then the first column of the invertible matrix P will be an eigenvector corresponding to this eigenvalue...

Tonio

Is it because we start with the smallest value of $\displaystyle \lambda$ and work our way up, and the eigenvectors found get put into the matrix D left to right? If this is true, then what happens if 2 eigenvectors are associated with one eigenvalue (is this even possible)?

• Apr 25th 2010, 03:49 AM
HallsofIvy
Quote:

Originally Posted by superdude
How do you know which eigenvector goes in which column? Does it matter? So in my example, if I found the eigenvectors, how would I know it wouldn't be $\displaystyle D=\left[ {\begin{array}{cc} 0 & 2 \\ -3 & 0 \\ \end{array} } \right]$
Is it because we start with the smallest value of $\displaystyle \lambda$ and work our way up, and the eigenvectors found get put into the matrix D left to right? If this is true, then what happens if 2 eigenvectors are associated with one eigenvalue (is this even possible)?

You are confusing "eigenvalues" with "eigenvectors". D is the diagonal matrix having the eigenvalues on the diagonal. Swapping the eigenvectors would give you $\displaystyle P= \begin{bmatrix}1 & -1 \\ 1 & 2\end{bmatrix}$. If you kept the same D matrix, you would get a matrix having the same eigenvalues and eigenvectors, but with each eigenvector now corresponding to the other eigenvalue.

If you swap both, say $\displaystyle P= \begin{bmatrix}1 & -1 \\ 1 & 2\end{bmatrix}$ and $\displaystyle D= \begin{bmatrix}-3 & 0 \\ 0 & 2\end{bmatrix}$, then you would get the same matrix back as the first one, since each eigenvector is still paired with its original eigenvalue.

Choose D to be the diagonal matrix having the eigenvalues on the diagonal and P to be the matrix having the corresponding eigenvectors as columns: the first column of P is an eigenvector for the eigenvalue in the first column of D, the second column of P corresponds to the eigenvalue in the second column of D, etc.
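The pairing rule can be checked numerically; a minimal NumPy sketch (the names `P2`, `D2`, `P3` are just illustrative): swapping the columns of P together with the diagonal entries of D preserves the pairing and reproduces the same matrix, while swapping only the columns of P breaks the pairing and gives a different matrix.

```python
import numpy as np

D = np.diag([2.0, -3.0])
P = np.array([[-1.0, 1.0],
              [ 2.0, 1.0]])
A = P @ D @ np.linalg.inv(P)

# Swap BOTH the columns of P and the diagonal of D:
# each eigenvector still pairs with its own eigenvalue, so A is unchanged.
P2 = P[:, [1, 0]]
D2 = np.diag([-3.0, 2.0])
print(np.allclose(P2 @ D2 @ np.linalg.inv(P2), A))  # True

# Swap only the columns of P: the pairing breaks, and the result is a
# different matrix (each eigenvector is now attached to the wrong eigenvalue).
P3 = P[:, [1, 0]]
print(np.allclose(P3 @ D @ np.linalg.inv(P3), A))  # False
```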