# Thread: eigenvalues and eigenvectors theory

1. ## eigenvalues and eigenvectors theory

Thanks to all in advance for looking at this problem.
Given that A is an unspecified 3 x 3 matrix that has eigenvalues 2, -2, square root of 3, and the corresponding eigenvectors are X1, X2, X3.

a) Find the characteristic polynomial of A.
b) Find a set of three linearly independent eigenvectors of A^2
c) Find the characteristic polynomial of A^2

2. ## Re: eigenvalues and eigenvectors theory

Originally Posted by nivek0078
Thanks to all in advance for looking at this problem.
Given that A is an unspecified 3 x 3 matrix that has eigenvalues 2, -2, square root of 3, and the corresponding eigenvectors are X1, X2, X3. a) Find the characteristic polynomial of A.
$\chi_A(\lambda)=(\lambda-2)(\lambda+2)(\lambda-\sqrt{3})$

b) Find a set of three linearly independent eigenvectors of A^2
$A^2X_1=A(AX_1)=A(2X_1)=2(AX_1)=2(2X_1)=4X_1.$ In a similar way, $A^2X_2=4X_2$ and $A^2X_3=3X_3.$ Besides, $\{X_1,X_2,X_3\}$ are linearly independent (Why?).

c) Find the characteristic polynomial of A^2
Using b), $\chi_{A^2}(\lambda)=(\lambda-4)^2(\lambda-3).$
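These answers can be sanity-checked numerically. The sketch below builds a 3 x 3 matrix with the stated eigenvalues by conjugating a diagonal matrix with a random invertible P (the seed and P are my own illustrative choices, not anything from the problem), then checks the eigenvalues and characteristic polynomial of A^2. Note $(\lambda-4)^2(\lambda-3) = \lambda^3 - 11\lambda^2 + 40\lambda - 48$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix with eigenvalues 2, -2, sqrt(3): conjugate a diagonal matrix
# by a random invertible P (hypothetical example data).
D = np.diag([2.0, -2.0, np.sqrt(3.0)])
P = rng.standard_normal((3, 3))          # almost surely invertible
A = P @ D @ np.linalg.inv(P)

# Eigenvalues of A^2 should be 4, 4, 3 (the squares of A's eigenvalues).
eigs = np.sort(np.linalg.eigvals(A @ A).real)
print(np.round(eigs, 6))                 # ≈ [3. 4. 4.]

# Characteristic polynomial of A^2: (x-4)^2 (x-3) = x^3 - 11x^2 + 40x - 48.
coeffs = np.poly(A @ A)
print(np.round(coeffs.real, 6))          # ≈ [1. -11. 40. -48.]
```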

3. ## Re: eigenvalues and eigenvectors theory

FernandoRevilla has provided the answers - and that's the way you should understand this problem. But I want to point out a kind of "cheating" hint that applies to problems phrased this way - at least for parts a and c.

Observe that the problem, when given on a test, presumably has an answer. So you know that the characteristic polynomial of A will be the same **regardless of** the specifics of A, so long as A is 3x3 with those 3 eigenvalues. You know it not for a math reason (though there of course is one), but for an "I trust my teacher not to have given me a test that has errors" reason.

Thus choose a super simple A that has those three eigenvalues, and read off its characteristic polynomial.

How about $A = \begin{pmatrix} 2& 0& 0 \\ 0& -2& 0 \\ 0& 0& \sqrt{3} \end{pmatrix}$.

That satisfies the problem's assumptions about A. Its characteristic polynomial is trivial to determine. It's also trivial to compute A-squared and find its characteristic polynomial.
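Here is that "cheating" computation carried out in a few lines of NumPy (a sketch, using the diagonal A above; `np.poly` of a square matrix returns the coefficients of its characteristic polynomial):

```python
import numpy as np

# The "super simple" diagonal choice of A from the post.
A = np.diag([2.0, -2.0, np.sqrt(3.0)])

# Characteristic polynomial of A: (x-2)(x+2)(x-sqrt(3))
#   = x^3 - sqrt(3) x^2 - 4x + 4 sqrt(3)
print(np.round(np.poly(A), 4))

# A squared is also diagonal, so its characteristic polynomial
# can be read off the same way: (x-4)^2 (x-3).
print(np.round(np.poly(A @ A), 4))
```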

This is just a trick. This isn't how you learn math, which requires understanding, not trickery. But this can come in handy on a test.

----

Algebra I example of the same trick:

Test Question: "A product is marked down 20%. By what percentage must the product's newly reduced price be increased to return the product to its original value?"

The nature of the question implies that the actual value of the product's original price doesn't matter - the answer will be the same whether the product originally cost $24.99 or $500.00 - or any other value. That's true, and there's a mathematical reason for it, but on the test it's possible to assume it's true without knowing the reason, simply because the problem itself only makes sense if it's true. That's not ideal - you should know why it's true - but it is a useful trick when confused and pressured on an exam.

Since the problem gives you the same answer regardless of the product's original price, do the problem with a fixed price for the product - something easy given the nature of the problem. Since this problem is about percentages, choose Price = $100.00, and then work the problem more concretely using that price. $100 marked down 20% = new price of $80. To increase from $80 back to the original $100 is a $20 increase. Then $20/$80 = 1/4 = 25%. Thus the reduced price must be increased 25% to return to the product's original value.
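The arithmetic above can be checked in a couple of lines (plain Python; the $100 and $500 prices are just the example values from the discussion):

```python
# Quick check of the markdown arithmetic with a concrete price.
original = 100.00
reduced = original * (1 - 0.20)          # 20% off
increase_needed = (original - reduced) / reduced
print(reduced)                            # 80.0
print(increase_needed)                    # 0.25, i.e. a 25% increase

# The price cancels out, so any starting price gives the same 25%:
assert abs((500.00 * 0.8) * 1.25 - 500.00) < 1e-9
```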

4. ## Re: eigenvalues and eigenvectors theory

this is true....but it obscures one of the "neat things about eigenvalues".

fact: if we know the eigenvalues of A, we know the eigenvalues of $A^k$, for any natural number k (and, if A is invertible, for any integer k).

why?

let v be an eigenvector (which we don't even need to know) for the eigenvalue $\lambda$:

then $A^2v = A(Av) = A(\lambda v) = \lambda(Av) = \lambda(\lambda v) = \lambda^2v.$

by induction, we have that $\lambda^k$ is an eigenvalue for $A^k$.

if A is invertible, 0 is not an eigenvalue (can you see why?).

thus $v = Iv = A^{-1}(Av) = A^{-1}(\lambda v) = \lambda(A^{-1}v)$, so $A^{-1}v = \frac{1}{\lambda}v.$

what eigenvectors DO, for a matrix A, is "make it act like a scalar" on the eigenvector-axis. IF we make the eigenvectors our "new axes" (and IF we have enough linearly independent ones), then in that basis, A is diagonal (we don't get a "scalar matrix" that is: cI, for some c, because we might have to use "different scales on different axes").

one of the reasons why the identity matrix IS the identity matrix, is because any basis is an eigenbasis. there's no "distortion" (no re-scaling in any direction). if a matrix is diagonalizable, the eigenvalues let us "see" what A "does to the space" (it stretches the first eigenbasis vector by the first eigenvalue, the second eigenbasis vector by the second eigenvalue, and so on). as far as the vector space is concerned, one basis is as good as another. it's only us poor mortals, who need to turn "vectors into coordinates" (numericize them) who need "simple forms" for our linear transformations, because we can't do field operations as fast as the gods can.

in other words, eigenvalues/eigenvectors do not exist "to make life hard for linear algebra students", but to make things EASIER. learn to love them.
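To illustrate the $\lambda^k$ fact numerically, here's a small NumPy sketch (the eigenvalues 2, -2, 3 and the random P are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))          # almost surely invertible
D = np.diag([2.0, -2.0, 3.0])            # invertible: no zero eigenvalue
A = P @ D @ np.linalg.inv(P)

# lambda^k is an eigenvalue of A^k: here k = 3 gives -8, 8, 27 ...
eigs_A3 = np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, 3)).real)
print(np.round(eigs_A3, 6))              # ≈ [-8. 8. 27.]

# ... and 1/lambda is an eigenvalue of A^{-1}: -1/2, 1/3, 1/2.
eigs_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)).real)
print(np.round(eigs_inv, 6))             # ≈ [-0.5 0.333333 0.5]
```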

5. ## Re: eigenvalues and eigenvectors theory

Thanks to all of you - those were very in-depth answers that make logical sense when presented that way!

My next question is along the same topic. The question states: Can R^3 have a basis of eigenvectors all of which have 0 in their first components, why or why not?

6. ## Re: eigenvalues and eigenvectors theory

Originally Posted by nivek0078
Can R^3 have a basis of eigenvectors all of which have 0 in their first components, why or why not?
"in their first components" implicitly refers to some unstated basis.

No doubt the standard basis, $\{\hat{i}, \hat{j}, \hat{k} \}$, is what's intended, though that's not actually stated.

My suggestion is to forget about eigenvectors, and ask yourself if any basis can have that property.

Is there a basis $\{\vec{b_1}, \vec{b_2}, \vec{b_3} \}$ of $\mathbb{R}^3$ such that

$<\vec{b_1}, \hat{i}> = <\vec{b_2}, \hat{i}> = <\vec{b_3}, \hat{i}> = 0$ ?

-------

Since the topic is linear algebra, reconsider "basis of eigenvectors" by maybe thinking about these:

Let $\mathcal{B} = \{\vec{b_1}, \vec{b_2}, \vec{b_3} \}$ be any basis for $\mathbb{R}^3$:

1) Does there always exist a linear map, that isn't a multiple of the identity, $L: \mathbb{R}^3 \rightarrow \mathbb{R}^3$,

such that $\mathcal{B} \subset Eigenvectors(L)?$

To make life more interesting, one where L has distinct eigenvalues?

To make life more interesting, one where L has exactly 2 distinct eigenvalues?

(How many distinct eigenvalues do multiples of the identity have (i.e. linear maps of the form cI for some real c)? What are they?)

2) When such an L exists as in #1, can you specify one concretely (meaning concretely in terms of the $\mathcal{B}$)?

3) When such an L exists as in #1, is it sometimes, always, or never, invertible - and why/what decides that?
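For readers who want to check their answers to the questions above, one possible construction is sketched below (the basis vectors and eigenvalues are made-up example data, not from the thread): take the basis vectors as the columns of P, pick any diagonal D, and form $L = PDP^{-1}$.

```python
import numpy as np

# Hypothetical example: the columns of P are the chosen basis B (det = 2,
# so they really are a basis), and D holds the desired eigenvalues.
P = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
D = np.diag([5.0, 5.0, 7.0])             # exactly two distinct eigenvalues
L = P @ D @ np.linalg.inv(P)

# Each basis vector is an eigenvector of L with the matching eigenvalue.
for i, lam in enumerate(np.diag(D)):
    b = P[:, i]
    assert np.allclose(L @ b, lam * b)

# Invertibility is decided by the eigenvalues: L is invertible
# exactly when 0 is not among the diagonal entries of D.
print(abs(np.linalg.det(L)) > 1e-9)      # True here: 5, 5, 7 are all nonzero
```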

7. ## Re: eigenvalues and eigenvectors theory

Still confused on the answer to the question: can R^3 have a basis of eigenvectors all of which have 0 in their first components? After reading the statements above I'm saying no, but I'm still very unsure. Can someone please give me a direct answer as to why or why not.

8. ## Re: eigenvalues and eigenvectors theory

By definition of "basis", any basis spans the whole space. If no element in a basis has a non-zero component in some (non-zero) vector's direction, then that vector won't be in the span of the basis vectors, which would mean that you actually didn't have a basis to begin with.

If $\mathcal{B} = \{\vec{b_1}, \vec{b_2}, \vec{b_3} \}$ is a basis for $\mathbb{R}^3$,

and $\vec{v} \in \mathbb{R}^3, \vec{v} \ne \vec{0}$, then (notation: < , > = dot product)

$<\vec{b_1}, \vec{v}> = <\vec{b_2}, \vec{v}> = <\vec{b_3}, \vec{v}> = 0$ is impossible, because

then $\vec{v}$ would not be in the span of $\mathcal{B}$, contrary to $\mathcal{B}$ being a basis.

----
To see that, imagine that your basis vectors, written in coordinate form, looked something like: (0, 3, -1), (0, 1, -2), (0, 5, 5).

Then when you looked at vectors in their linear span (i.e. added multiples of them together), they would all look like (0, something, something).

Thus those 3 vectors could never span all of $\mathbb{R}^3$, because their span could never include (1,0,0).

(Nor would it include (-3.608, 52.3, -98.774), nor, well, (anything non-zero, anything, anything)).

Therefore, it actually couldn't have been a basis in the first place.
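This informal argument can be confirmed numerically (a NumPy sketch using exactly the three example vectors above as columns):

```python
import numpy as np

# The three candidate "basis" vectors from the example, as columns.
V = np.array([[ 0.0,  0.0,  0.0],
              [ 3.0,  1.0,  5.0],
              [-1.0, -2.0,  5.0]])

# Rank < 3 means they cannot span R^3, hence cannot be a basis.
print(np.linalg.matrix_rank(V))          # 2 -- the first row is all zeros

# Least squares confirms e1 = (1, 0, 0) is not in their span:
# the best V @ c can do is the zero vector, never e1.
e1 = np.array([1.0, 0.0, 0.0])
c, _, _, _ = np.linalg.lstsq(V, e1, rcond=None)
print(np.allclose(V @ c, e1))            # False
```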

----
To prove that, suppose $\vec{v} \in \mathbb{R}^3, \vec{v} \ne \vec{0}$, and $<\vec{b_1}, \vec{v}> = <\vec{b_2}, \vec{v}> = <\vec{b_3}, \vec{v}> = 0$.

Since $\mathcal{B}$ is a basis, it spans $\mathbb{R}^3$, and so there exist real numbers $c_1, c_2, c_3$ such that

$\vec{v} = c_1\vec{b_1} + c_2\vec{b_2} + c_3\vec{b_3}$.

Therefore we have:

$\lVert \vec{v} \rVert^2 = <\vec{v}, \vec{v}> = <(c_1\vec{b_1} + c_2\vec{b_2} + c_3\vec{b_3}), \vec{v}>$

$= c_1<\vec{b_1}, \vec{v}> + c_2<\vec{b_2}, \vec{v}> + c_3<\vec{b_3}, \vec{v}> = c_1(0) + c_2(0) + c_3(0) = 0$.

But $\lVert \vec{v} \rVert^2 = 0$ contradicts the choice of $\vec{v} \ne \vec{0}$.

----
Remember, a set of vectors is a *basis* EXACTLY when both these two conditions hold: 1) they're linearly independent and 2) they span the space.