1. ## Basis and Eigenvalues

V is an n dimensional vector space over F. A is a linear transformation from V to itself.

Prove that if V has a basis of eigenvectors for A, then the matrix representing A with respect to this basis is diagonal, with the eigenvalues as diagonal entries.

Prove that the matrix representing A with respect to an arbitrary basis for V is similar to a diagonal matrix if and only if V has a basis of eigenvectors for A.

2. Originally Posted by robeuler
V is an n dimensional vector space over F. A is a linear transformation from V to itself.

Prove that if V has a basis of eigenvectors for A, then the matrix representing A with respect to this basis is diagonal, with the eigenvalues as diagonal entries.

Prove that the matrix representing A with respect to an arbitrary basis for V is similar to a diagonal matrix if and only if V has a basis of eigenvectors for A.
Let $\displaystyle B=\{v_1,v_2,\ldots,v_n\}$ be a basis for $\displaystyle V$ consisting of eigenvectors of $\displaystyle A$. By definition, $\displaystyle Av_j = \ell_j v_j$ for some $\displaystyle \ell_j \in F$. Therefore, $\displaystyle [Av_j]_B = (0,0,\ldots,\ell_j,\ldots,0)$, where $\displaystyle \ell_j$ appears in the $\displaystyle j$-th coordinate. Therefore, the matrix representing $\displaystyle A$ with respect to this basis is:
$\displaystyle \begin{bmatrix} \ell_1 & 0 & \cdots & 0 \\ 0 & \ell_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ell_n \end{bmatrix}$
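A quick numerical check of this (a minimal sketch: the matrix $\displaystyle A$, its eigenvectors, and eigenvalues below are made-up illustrative choices, not part of the problem):

```python
# Illustrative 2x2 example: A has eigenbasis B = {(1,1), (1,-1)}
# with eigenvalues 3 and 1 (these values are assumptions for the sketch).
A = [[2, 1],
     [1, 2]]

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

B = [[1, 1], [1, -1]]   # eigenbasis
eigvals = [3, 1]        # corresponding eigenvalues

# Check A v_j = l_j v_j, so [A v_j]_B = (0,...,l_j,...,0):
# the j-th column of the representing matrix is l_j * e_j.
D = []
for j, (v, l) in enumerate(zip(B, eigvals)):
    assert mat_vec(A, v) == [l * x for x in v]
    D.append([l if i == j else 0 for i in range(2)])

print(D)  # diag(3, 1), as the proposition predicts
```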

3. Originally Posted by ThePerfectHacker
Let $\displaystyle B=\{v_1,v_2,\ldots,v_n\}$ be a basis for $\displaystyle V$ consisting of eigenvectors of $\displaystyle A$. By definition, $\displaystyle Av_j = \ell_j v_j$ for some $\displaystyle \ell_j \in F$. Therefore, $\displaystyle [Av_j]_B = (0,0,\ldots,\ell_j,\ldots,0)$, where $\displaystyle \ell_j$ appears in the $\displaystyle j$-th coordinate. Therefore, the matrix representing $\displaystyle A$ with respect to this basis is:
$\displaystyle \begin{bmatrix} \ell_1 & 0 & \cdots & 0 \\ 0 & \ell_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ell_n \end{bmatrix}$
Thank you! For the second part...
for (=>) I know that there exists a $\displaystyle P$ such that $\displaystyle PAP^{-1} = D$, where $\displaystyle D$ is a diagonal matrix. I feel that if $\displaystyle V$ did not have a basis of eigenvectors of $\displaystyle A$, then such a $\displaystyle P$ could not exist, but I know I'm missing something.

for (<=) can I explicitly construct a P using the eigenvalue-diagonal matrix from the part you solved?

4. Originally Posted by robeuler
Thank you! For the second part...
for (=>) I know that there exists a $\displaystyle P$ such that $\displaystyle PAP^{-1} = D$, where $\displaystyle D$ is a diagonal matrix. I feel that if $\displaystyle V$ did not have a basis of eigenvectors of $\displaystyle A$, then such a $\displaystyle P$ could not exist, but I know I'm missing something.
I think the following observation will help you. Let $\displaystyle A$ be an $\displaystyle n\times n$ matrix and $\displaystyle P$ an $\displaystyle n\times n$ invertible matrix so that $\displaystyle PAP^{-1} = D$, where $\displaystyle D$ is a diagonal matrix with $\displaystyle ii$-th entry $\displaystyle \ell_i$. The observation is that the $\displaystyle i$-th row of $\displaystyle P$ is an eigenvector of $\displaystyle A^T$ with eigenvalue $\displaystyle \ell_i$; equivalently, the $\displaystyle i$-th column of $\displaystyle P^{-1}$ is an eigenvector of $\displaystyle A$ with eigenvalue $\displaystyle \ell_i$.

Before we prove this, let $\displaystyle A = (a_{ij})$, $\displaystyle P=(p_{ij})$, and $\displaystyle D = (d_{ij})$; note that $\displaystyle d_{ij} = 0$ if $\displaystyle i\not = j$ and $\displaystyle d_{ii} = \ell_i$. Since $\displaystyle PAP^{-1} = D$, we have $\displaystyle PA = DP$. By the definition of the matrix product, $\displaystyle PA = \left( \Sigma_k p_{ik}a_{kj} \right)$ and $\displaystyle DP = \left( \Sigma_k d_{ik}p_{kj} \right)$. However, $\displaystyle \Sigma_k d_{ik}p_{kj} = \ell_ip_{ij}$ because $\displaystyle D$ is diagonal. Therefore, $\displaystyle \left( \Sigma_k p_{ik}a_{kj} \right) = (\ell_ip_{ij})$, which means that for any particular $\displaystyle 1\leq i\leq n$ we have $\displaystyle \Sigma_k p_{ik}a_{kj} = \ell_i p_{ij}$ for every $\displaystyle j$. Thus, we can form the following matrix equality (just set $\displaystyle j=1,2,\ldots,n$):
$\displaystyle \begin{bmatrix} \Sigma_k p_{ik}a_{k1} \\ \Sigma_k p_{ik}a_{k2}\\ \vdots \\ \Sigma_k p_{ik}a_{kn} \end{bmatrix} = \begin{bmatrix} \ell_i p_{i1} \\ \ell_i p_{i2} \\ \vdots \\ \ell_i p_{in} \end{bmatrix}\implies \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1}\\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{bmatrix}\begin{bmatrix}p_{i1}\\p_{i2} \\ \vdots \\ p_{in} \end{bmatrix} = \ell_i \begin{bmatrix}p_{i1} \\ p_{i2} \\ \vdots \\ p_{in}\end{bmatrix}$

Thus, $\displaystyle A^T\bold{p}_{(i)} = \ell_i \bold{p}_{(i)}$, where $\displaystyle \bold{p}_{(i)}$ is the $\displaystyle i$-th row of $\displaystyle P$ (written as a column vector): the rows of $\displaystyle P$ are eigenvectors of $\displaystyle A^T$.
To get eigenvectors of $\displaystyle A$ itself, rewrite $\displaystyle PAP^{-1} = D$ as $\displaystyle AP^{-1} = P^{-1}D$. Comparing columns gives $\displaystyle A\bold{q}_i = \ell_i \bold{q}_i$, where $\displaystyle \bold{q}_i$ is the $\displaystyle i$-th column of $\displaystyle P^{-1}$. Since $\displaystyle P^{-1}$ is invertible, its columns form a basis of eigenvectors of $\displaystyle A$, which settles (=>); reversing the argument, with $\displaystyle P^{-1}$ built by placing an eigenbasis in its columns, gives (<=).
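Here is the same argument checked numerically on a small example (a minimal sketch: the matrix $\displaystyle A$, its eigenvalues, and eigenvectors below are made-up illustrative values). An eigenbasis goes into the columns of $\displaystyle Q = P^{-1}$, and we verify $\displaystyle PAP^{-1} = D$:

```python
# Illustrative 2x2 example (these values are assumptions for the sketch):
# A has eigenvalues 5, 2 with eigenvectors (1,1) and (1,-2).
A = [[4, 1],
     [2, 3]]
eigvals = [5, 2]
eigvecs = [[1, 1], [1, -2]]

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Q = P^{-1} has the eigenvectors as its COLUMNS.
Q = [[eigvecs[j][i] for j in range(2)] for i in range(2)]

# Invert the 2x2 matrix Q by the cofactor formula to get P.
det = Q[0][0]*Q[1][1] - Q[0][1]*Q[1][0]
P = [[ Q[1][1]/det, -Q[0][1]/det],
     [-Q[1][0]/det,  Q[0][0]/det]]

# P A P^{-1} = P A Q should come out as diag(5, 2).
D = mat_mul(mat_mul(P, A), Q)
print(D)
```

This is exactly the explicit construction asked about in post 3: the diagonalizing $\displaystyle P$ is the inverse of the matrix whose columns are the eigenbasis, and its diagonal form $\displaystyle D$ is the eigenvalue matrix from the first part.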