1. ## Proof about eigenvalues/vectors of a linear transformation

I had to say whether the following statement was true or false and explain why. I fell in love with the proof I gave (I hope there are no flaws in it):
Let $\displaystyle T$ be a linear transformation and let $\displaystyle v_1$ and $\displaystyle v_2$ be two eigenvectors of $\displaystyle T$ with eigenvalues $\displaystyle \lambda _1$ and $\displaystyle \lambda _2$ respectively. If $\displaystyle \lambda _1 \neq \lambda _2$, then $\displaystyle \{ v_1,v_2 \}$ is linearly independent.
I say it's true. Proof:
I have to prove $\displaystyle A\Rightarrow B$, so I'll prove the equivalent contrapositive $\displaystyle \neg B \Rightarrow \neg A$. In other words, I have to prove that $\displaystyle \{ v_1, v_2 \}$ linearly dependent implies $\displaystyle \lambda _1 = \lambda _2$.
So suppose $\displaystyle \{ v_1,v_2 \}$ is linearly dependent. Since eigenvectors are nonzero, this means $\displaystyle \exists c \neq 0$ such that $\displaystyle v_1=cv_2$.
We have that $\displaystyle Tv_1=\lambda _1 v_1 \Leftrightarrow Tcv_2=\lambda _1 v_1 \Leftrightarrow Tv_2= \frac{\lambda _1 v_1}{c}=\lambda _2 v_2 \Leftrightarrow \lambda _1 v_2 = \lambda _2 v_2 \Leftrightarrow \lambda _1 = \lambda _2 \square$.
Is there any flaw?
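If it helps to see the claim concretely, here is a quick numerical sanity check (illustrative only, not a proof; the matrix $\displaystyle T$ and the eigenpairs below are made up for the example):

```python
# quick sanity check of the claim (not a proof): for a sample 2x2 matrix
# with distinct eigenvalues, the corresponding eigenvectors come out
# linearly independent. T, v1, v2 below are hypothetical example data.

def mat_vec(T, v):
    """Multiply a 2x2 matrix (given as a list of rows) by a 2-vector."""
    return [T[0][0]*v[0] + T[0][1]*v[1],
            T[1][0]*v[0] + T[1][1]*v[1]]

def det2(u, v):
    """Determinant of the 2x2 matrix with columns u, v;
    nonzero exactly when {u, v} is linearly independent."""
    return u[0]*v[1] - u[1]*v[0]

T = [[2, 1],
     [0, 3]]            # upper triangular, so its eigenvalues are 2 and 3
v1, lam1 = [1, 0], 2    # T v1 = 2 v1
v2, lam2 = [1, 1], 3    # T v2 = 3 v2

assert mat_vec(T, v1) == [lam1 * x for x in v1]   # v1 is an eigenvector
assert mat_vec(T, v2) == [lam2 * x for x in v2]   # v2 is an eigenvector
assert lam1 != lam2 and det2(v1, v2) != 0         # distinct eigenvalues, independent
```

(The determinant criterion for independence used here is special to dimension 2; the proof itself works for any linear transformation.)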

2. Originally Posted by arbolis
I had to say whether the following statement was true or false and explain why. I fell in love with the proof I gave (I hope there are no flaws in it):
Let $\displaystyle T$ be a linear transformation and let $\displaystyle v_1$ and $\displaystyle v_2$ be two eigenvectors of $\displaystyle T$ with eigenvalues $\displaystyle \lambda _1$ and $\displaystyle \lambda _2$ respectively. If $\displaystyle \lambda _1 \neq \lambda _2$, then $\displaystyle \{ v_1,v_2 \}$ is linearly independent.
I say it's true. Proof:
I have to prove $\displaystyle A\Rightarrow B$, so I'll prove the equivalent contrapositive $\displaystyle \neg B \Rightarrow \neg A$. In other words, I have to prove that $\displaystyle \{ v_1, v_2 \}$ linearly dependent implies $\displaystyle \lambda _1 = \lambda _2$.
So suppose $\displaystyle \{ v_1,v_2 \}$ is linearly dependent. Since eigenvectors are nonzero, this means $\displaystyle \exists c \neq 0$ such that $\displaystyle v_1=cv_2$.
We have that $\displaystyle Tv_1=\lambda _1 v_1 \Leftrightarrow Tcv_2=\lambda _1 v_1 \Leftrightarrow Tv_2= \frac{\lambda _1 v_1}{c}=\lambda _2 v_2 \Leftrightarrow \lambda _1 v_2 = \lambda _2 v_2 \Leftrightarrow \lambda _1 = \lambda _2 \square$.
Is there any flaw?
it's correct! at the end of your proof you should mention that $\displaystyle \lambda _1 v_2 = \lambda _2 v_2$ implies $\displaystyle \lambda_1=\lambda_2,$ because $\displaystyle v_2 \neq 0$ (eigenvectors are always non-zero).

3. Originally Posted by NonCommAlg
it's correct! at the end of your proof you should mention that $\displaystyle \lambda _1 v_2 = \lambda _2 v_2$ implies $\displaystyle \lambda_1=\lambda_2,$ because $\displaystyle v_2 \neq 0$ (eigenvectors are always non-zero).
Ah yes, I assumed it. Glad it works.
It's worth noting that here we have $\displaystyle A \Rightarrow B$ and not $\displaystyle A \Leftrightarrow B$. As a counterexample to the converse: taking $\displaystyle v_1=(1,0)$ and $\displaystyle v_2=(0,1)$, they are clearly linearly independent; however, if $\displaystyle T$ is the identity transformation, then $\displaystyle \lambda _1= \lambda _2$, so linear independence does not force the eigenvalues to be distinct.
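A quick check of this counterexample in code (illustrative only; it just restates the identity example above):

```python
# the converse fails: {v1, v2} independent does NOT force distinct
# eigenvalues. with T the identity transformation, every nonzero
# vector is an eigenvector with eigenvalue 1.
v1, v2 = [1, 0], [0, 1]
lam1 = lam2 = 1          # identity: both eigenvalues equal 1

# determinant of the matrix with columns v1, v2; nonzero => independent
det = v1[0]*v2[1] - v1[1]*v2[0]
assert det != 0          # {v1, v2} is linearly independent
assert lam1 == lam2      # yet the eigenvalues coincide
```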

4. I attempted to prove it directly by showing that if $\displaystyle \alpha v_{1} + \beta v_{2} = 0$, then $\displaystyle \alpha$ and $\displaystyle \beta$ must both be zero. I didn't get very far. Proof by contrapositive seems like the way to go.

5. Originally Posted by Random Variable
I attempted to prove it directly by showing that if $\displaystyle \alpha v_{1} + \beta v_{2} = 0$, then $\displaystyle \alpha$ and $\displaystyle \beta$ must both be zero. I didn't get very far. Proof by contrapositive seems like the way to go.
Apply $\displaystyle T$ to your equation; this gives $\displaystyle \alpha \lambda_1 v_1+\beta\lambda_2v_2=0$. Substituting $\displaystyle \beta v_2=-\alpha v_1$ (from the first equation), we deduce $\displaystyle \alpha (\lambda_1-\lambda_2) v_1=0$. Since $\displaystyle v_1\neq 0$ and $\displaystyle \lambda_1\neq\lambda_2$, we must have $\displaystyle \alpha=0$. Then $\displaystyle \beta=0$ from the initial equation and the fact that $\displaystyle v_2\neq 0$.
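For what it's worth, the conclusion of this direct argument can be checked by brute force on a small example (the eigenvectors below are sample data, not from the thread):

```python
# brute-force check (illustrative only): alpha*v1 + beta*v2 = 0 has no
# solution other than alpha = beta = 0 when the eigenvalues differ.
# v1, v2 are sample eigenvectors of [[2, 1], [0, 3]] (eigenvalues 2, 3).
v1, v2 = [1, 0], [1, 1]

solutions = [(a, b)
             for a in range(-3, 4)
             for b in range(-3, 4)
             if a*v1[0] + b*v2[0] == 0 and a*v1[1] + b*v2[1] == 0]
assert solutions == [(0, 0)]   # only the trivial combination vanishes
```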

6. Originally Posted by arbolis
I had to say whether the following statement was true or false and explain why. I fell in love with the proof I gave (I hope there are no flaws in it):
Let $\displaystyle T$ be a linear transformation and let $\displaystyle v_1$ and $\displaystyle v_2$ be two eigenvectors of $\displaystyle T$ with eigenvalues $\displaystyle \lambda _1$ and $\displaystyle \lambda _2$ respectively. If $\displaystyle \lambda _1 \neq \lambda _2$, then $\displaystyle \{ v_1,v_2 \}$ is linearly independent.
I say it's true. Proof:
I have to prove $\displaystyle A\Rightarrow B$, so I'll prove the equivalent contrapositive $\displaystyle \neg B \Rightarrow \neg A$. In other words, I have to prove that $\displaystyle \{ v_1, v_2 \}$ linearly dependent implies $\displaystyle \lambda _1 = \lambda _2$.
So suppose $\displaystyle \{ v_1,v_2 \}$ is linearly dependent. Since eigenvectors are nonzero, this means $\displaystyle \exists c \neq 0$ such that $\displaystyle v_1=cv_2$.
We have that $\displaystyle Tv_1=\lambda _1 v_1 \Leftrightarrow Tcv_2=\lambda _1 v_1 \Leftrightarrow Tv_2= \frac{\lambda _1 v_1}{c}=\lambda _2 v_2 \Leftrightarrow \lambda _1 v_2 = \lambda _2 v_2 \Leftrightarrow \lambda _1 = \lambda _2 \square$.
Is there any flaw?
the general case is much more interesting: suppose $\displaystyle v_1, \cdots , v_n$ are $\displaystyle n$ eigenvectors with corresponding pairwise distinct eigenvalues $\displaystyle \lambda_1, \cdots , \lambda_n.$ then $\displaystyle v_1, \cdots , v_n$ are linearly independent:

the proof is by induction on $\displaystyle n$: there's nothing to prove for $\displaystyle n = 1.$ so suppose the claim is true for $\displaystyle n-1$ and let $\displaystyle c_1v_1 + \cdots + c_nv_n=0,$ for some scalars $\displaystyle c_j.$ call this (1). we want to prove that $\displaystyle c_1= \cdots = c_n = 0.$

from (1) we have $\displaystyle c_1\lambda_1v_1 + \cdots + c_n \lambda_n v_n=c_1T(v_1) + \cdots + c_nT(v_n)=T(c_1v_1 + \cdots + c_nv_n)=T(0)=0.$ call this (2). now multiply (1) by $\displaystyle \lambda_1$ and subtract the result from (2) to get $\displaystyle c_2(\lambda_2-\lambda_1)v_2 + \cdots + c_n(\lambda_n - \lambda_1)v_n=0.$

hence by the induction hypothesis (applied to the $\displaystyle n-1$ eigenvectors $\displaystyle v_2, \cdots , v_n$) we must have $\displaystyle c_2(\lambda_2 - \lambda_1) = \cdots = c_n(\lambda_n-\lambda_1)=0,$ which gives us $\displaystyle c_2= \cdots = c_n = 0,$ because the $\displaystyle \lambda_j$ are pairwise distinct. then (1) reduces to $\displaystyle c_1v_1=0,$ and since $\displaystyle v_1 \neq 0,$ we get $\displaystyle c_1=0.$ this completes the proof.
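As a sanity check of the general statement, here is the $\displaystyle n = 3$ case on a concrete example (the matrix and eigenvectors below are made up for illustration, not part of the proof):

```python
# sketch of the general statement for n = 3 (hypothetical example data):
# three eigenvectors with pairwise distinct eigenvalues are linearly
# independent, detected here by a nonzero 3x3 determinant.

def det3(u, v, w):
    """Determinant of the 3x3 matrix whose columns are u, v, w;
    nonzero exactly when {u, v, w} is linearly independent."""
    return (u[0] * (v[1]*w[2] - v[2]*w[1])
          - v[0] * (u[1]*w[2] - u[2]*w[1])
          + w[0] * (u[1]*v[2] - u[2]*v[1]))

# eigenvectors of the upper-triangular matrix [[1,1,0],[0,2,1],[0,0,3]],
# whose eigenvalues 1, 2, 3 are pairwise distinct
v1, v2, v3 = [1, 0, 0], [1, 1, 0], [1, 2, 2]

assert det3(v1, v2, v3) != 0  # linearly independent, as the theorem predicts
```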