1. ## eigenvector

Can someone explain to me why, if v is in V with coordinate vector x in ℝ^n, then v is an
eigenvector for f with eigenvalue λ iff x is an eigenvector for A_f with eigenvalue λ, where
A_f is the matrix of f with respect to an arbitrary basis?

2. Originally Posted by alexandrabel90
Can someone explain to me why, if v is in V with coordinate vector x in ℝ^n, then v is an
eigenvector for f with eigenvalue λ iff x is an eigenvector for A_f with eigenvalue λ, where
A_f is the matrix of f with respect to an arbitrary basis?

What does "v is in V with coordinate vector x in ℝ^n" mean??

Tonio

3. Originally Posted by alexandrabel90
Can someone explain to me why, if v is in V with coordinate vector x in ℝ^n, then v is an
eigenvector for f with eigenvalue λ iff x is an eigenvector for A_f with eigenvalue λ, where
A_f is the matrix of f with respect to an arbitrary basis?
I guess you meant:

(i) $\displaystyle V$ a real vector space, (ii) $\displaystyle f:V\rightarrow V$ an endomorphism, (iii) $\displaystyle B=\{e_1,\ldots,e_n\}$ a basis of $\displaystyle V$, (iv) $\displaystyle v\in V$ such that

$\displaystyle x=\begin{pmatrix}x_1\\ \vdots\\{x_n}\end{pmatrix} \quad (v=x_1e_1+\ldots+x_ne_n)$

(v) $\displaystyle A_f$ the matrix of $\displaystyle f$ with respect to $\displaystyle B$

In that case, you have to prove $\displaystyle f(v)=\lambda v \Leftrightarrow A_{f}x=\lambda x$, which follows immediately from the meaning of the matrix of a linear map with respect to a given basis.
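The equivalence can be checked numerically on a small example. The matrix $\displaystyle A$, the basis $\displaystyle B$ (columns of $\displaystyle P$), and the eigenvalue $\displaystyle \lambda=3$ below are hypothetical choices for illustration, not anything from the thread:

```python
import numpy as np

# Hypothetical example: f acts on R^2 as v -> A v in standard coordinates,
# and B = {(1,1), (1,-1)} is a non-standard basis, stored as columns of P.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])
A_f = np.linalg.inv(P) @ A @ P      # matrix of f with respect to B

v = np.array([1.0, 1.0])            # eigenvector of f: A v = 3 v
lam = 3.0
assert np.allclose(A @ v, lam * v)  # f(v) = λ v

x = np.linalg.solve(P, v)           # coordinate vector of v in the basis B
assert np.allclose(A_f @ x, lam * x)  # A_f x = λ x, same eigenvalue
```

Both assertions pass: the same vector is an eigenvector of $\displaystyle f$ and, through its coordinates, of $\displaystyle A_f$, with the same $\displaystyle \lambda$.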

Fernando Revilla

4. The point of this is, of course, that the concept of an "eigenvalue" is independent of the particular matrix representation you choose for a linear transformation: "eigenvalue" is really a linear-algebra concept, not just a matrix-algebra concept.

The simplest way to prove this is to use the fact that the matrices representing a linear transformation in different bases are "similar". That is, A and B represent the same linear transformation, in different bases, if and only if there exists an invertible matrix C such that $\displaystyle A= CBC^{-1}$. (C is the "change of basis" matrix, the matrix that maps the vectors of one basis to the vectors of the other basis.)

Recall that the eigenvalues of A are the solutions of the "characteristic equation", $\displaystyle \left|A- \lambda I\right|= 0$. Since, for any invertible C, $\displaystyle I= CIC^{-1}$, we can write that as $\displaystyle \left|A- \lambda I\right|= \left|CBC^{-1}- \lambda CIC^{-1}\right|= \left|C(B- \lambda I)C^{-1}\right|= \left| C\right|\left|B- \lambda I\right|\left|C^{-1}\right|= \left|B- \lambda I\right|= 0$.
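The determinant argument above can be sketched numerically: take any B and any invertible C, form $\displaystyle A=CBC^{-1}$, and compare spectra. The random matrices below are hypothetical, just to exercise the claim:

```python
import numpy as np

# Similar matrices share eigenvalues, since
# |A - λI| = |C(B - λI)C^{-1}| = |B - λI|.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))     # generic, hence invertible almost surely
A = C @ B @ np.linalg.inv(C)        # same linear map in another basis

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eig_A, eig_B)    # identical spectrum
```

Note the eigen*vectors* do change between A and B (they get multiplied by C); only the eigen*values* are basis-independent.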

5. We can prove the statement using a result that precedes eigenvalue theory:

If $\displaystyle V$ is a vector space over $\displaystyle \mathbb{K}$ , $\displaystyle \dim V=n$ finite, $\displaystyle B$ is a fixed basis of $\displaystyle V$ and $\displaystyle A_f$ is the matrix of $\displaystyle f$ with respect to $\displaystyle B$ then,

$\displaystyle Y=A_f X$

where,

$\displaystyle X=\begin{bmatrix}x_1\\ \vdots\\{x_n}\end{bmatrix}\; ,\quad Y=\begin{bmatrix}y_1\\ \vdots\\{y_n}\end{bmatrix}$

are the coordinates of $\displaystyle x\in V$ and $\displaystyle f(x)\in V$ respectively, on $\displaystyle B$ .
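The relation $\displaystyle Y=A_fX$ comes from the fact that the j-th column of $\displaystyle A_f$ holds the $\displaystyle B$-coordinates of $\displaystyle f(e_j)$. A small sketch, with a hypothetical map and basis (A and P below are illustrative choices, not from the thread):

```python
import numpy as np

# f in standard coordinates, and a basis B stored as the columns of P.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
P = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Build A_f column by column: column j = B-coordinates of f(e_j).
A_f = np.column_stack([np.linalg.solve(P, A @ P[:, j]) for j in range(2)])

x = np.array([5.0, -1.0])           # arbitrary vector, given in B-coordinates
v = P @ x                           # the same vector in standard coordinates
Y = np.linalg.solve(P, A @ v)       # B-coordinates of f(v)
assert np.allclose(Y, A_f @ x)      # Y = A_f X
```

Building $\displaystyle A_f$ columnwise this way is the same as computing $\displaystyle P^{-1}AP$, which ties this post back to the similarity argument in post 4.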

Fernando Revilla