Self-adjoint linear transformations and eigenvalues.

Hi all, I was studying for a Math test when this question stumped me:

Let V be an inner product space and T : V → V be a self-adjoint linear transformation such that T^{2} = T.

a) Show that all eigenvalues of T are either 0 or 1.

b) Describe the eigenspaces of T in terms of the kernel of T, the range of T, and V.

So for question a) I know that a self-adjoint linear transformation means ⟨T(u), v⟩ = ⟨u, T(v)⟩, and that an eigenvector satisfies T(v) = λv where λ is a scalar, but I don't know how to use these to solve the question...

As for b) I know that Nullity T + Rank T = Dim V which is equivalent to dim(ker T) + dim(im T) = dim V ...but I guess I can't solve this until I know how to do question a)...

Any help would be greatly appreciated.

Re: Self-adjoint linear transformations and eigenvalues.

if T^{2} = T, then T satisfies the equation x^{2} - x = 0, which factors as x(x - 1) = 0.

thus the minimal polynomial for T divides x^{2} - x, so is either:

a) x (in which case T is the 0-matrix), and thus T has 0 as its only eigenvalue,

b) x - 1 (in which case T is the identity matrix), and thus T has 1 as its only eigenvalue,

c) x(x - 1), in which case T has both 0 and 1 as eigenvalues.
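(the eigenvalue claim in a) can also be checked directly, without the minimal polynomial: if v is an eigenvector of T with eigenvalue λ, then

λ^{2}v = λT(v) = T(λv) = T(T(v)) = T^{2}(v) = T(v) = λv

so (λ^{2} - λ)v = 0, and since v ≠ 0, we get λ^{2} - λ = 0, i.e. λ = 0 or λ = 1.)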

the eigenspace corresponding to the eigenvalue 0 is called the null space (or kernel) of T. (for what is an eigenvector in this space? it is a non-zero vector v such that T(v) = 0v = 0).

the eigenspace corresponding to the eigenvalue 1 must (in this case) be the range of T (for if w = T(v) is a non-zero vector in the range, then T(w) = T(T(v)) = T^{2}(v) = T(v) = w, so w is an eigenvector with eigenvalue 1).
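in fact, a one-line computation shows every vector splits across these two eigenspaces: for any v in V,

v = (v - T(v)) + T(v)

and T(v - T(v)) = T(v) - T^{2}(v) = 0, so v - T(v) lies in ker T, while T(v) lies in the range of T. moreover, self-adjointness makes the two pieces orthogonal: for any v, w,

⟨v - T(v), T(w)⟩ = ⟨T(v - T(v)), w⟩ = ⟨0, w⟩ = 0

so V is the orthogonal direct sum of the 0-eigenspace (the kernel of T) and the 1-eigenspace (the range of T).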

Re: Self-adjoint linear transformations and eigenvalues.

Quote:

Originally Posted by **Deveno**

if T^{2} = T, then T satisfies the equation x^{2} - x = 0, which factors as x(x - 1) = 0.

So does this mean T^{2}(x) = T(x^{2})? Does this follow from the self-adjoint property?

Quote:

Originally Posted by **Deveno**

thus the minimal polynomial for T divides x^{2} - x

Hmm.. I'm not too sure what you mean by this. Would you mind enlightening me?

Thank you for your explanation though, I've got a better idea of what I'm supposed to be doing now.

Re: Self-adjoint linear transformations and eigenvalues.

if p(x) is a polynomial, with p(x) = a_{0} + a_{1}x + ....+ a_{n}x^{n},

and T : V → V is a linear transformation (in particular, if T is an n×n matrix that takes the n×1 matrix v to the n×1 matrix Tv)

then if (a_{0}I + a_{1}T +....+ a_{n}T^{n})(v) = 0, for all v in V

we say that T satisfies p(x), or that p(T) = 0.

the monic polynomial of least degree m(x) with m(T) = 0 is called the minimal polynomial for T. it is not hard to show that if p(T) = 0, then m(x) is a factor of p(x).
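a quick example: take T to be the identity on R^{2}. then det(T - xI) = (1 - x)^{2}, but T already satisfies x - 1 (since T - I = 0), so the minimal polynomial is x - 1, a proper factor of the characteristic polynomial.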

the Cayley-Hamilton theorem says that T satisfies the polynomial det(T - xI) (or, in some texts, det(xI - T), which differs from the first by a factor of (-1)^{n}).

so the minimal polynomial for T, m(x), divides det(T - xI). in particular, every root of m(x) must be an eigenvalue of T.

the converse is also true: any eigenvalue of T is also a root of the minimal polynomial.

for suppose v is an eigenvector of T corresponding to the eigenvalue λ.

then m(T)(v) = m(λ)v, by the same reasoning we developed in the posts above:

(if m(x) = c_{0} + c_{1}x +....+ c_{k-1}x^{k-1} + x^{k},

then m(T)(v) = (c_{0}I + c_{1}T +....+ c_{k-1}T^{k-1} + T^{k})(v)

= c_{0}I(v) + c_{1}T(v) +....+ c_{k-1}T^{k-1}(v) + T^{k}(v)

= c_{0}v + c_{1}(λv) +....+ c_{k-1}(λ^{k-1}v) + λ^{k}v

= (c_{0} + c_{1}λ +....+ c_{k-1}λ^{k-1} + λ^{k})(v) = m(λ)v, as claimed)

but m(T) is the 0-map, by definition of the minimal polynomial, so m(λ)v = 0. since v is a non-zero vector (being an eigenvector),

m(λ) must be 0, that is, λ is a root of m(x).

thus the minimal polynomial tells us what all of the eigenvalues are (but perhaps not their multiplicities).
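(a numerical aside, not part of the proof: if you have python and numpy handy, you can sanity-check all of this on a concrete self-adjoint idempotent matrix, namely an orthogonal projection P = A(A^{T}A)^{-1}A^{T} onto the column space of a matrix A. this is just an illustration; the matrix A here is an arbitrary choice.)

```python
import numpy as np

# build the orthogonal projection onto the column space of a random 5x2 matrix A.
# P = A (A^T A)^{-1} A^T is symmetric (self-adjoint) and satisfies P^2 = P.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
P = A @ np.linalg.inv(A.T @ A) @ A.T

# check idempotence and self-adjointness (symmetry, since P is real)
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)

# eigenvalues of a real symmetric matrix, via eigh; each should be 0 or 1,
# with the 1-eigenspace having dimension equal to the rank of A
eigvals = np.linalg.eigvalsh(P)
print(np.round(eigvals, 6))
```

the multiplicity of the eigenvalue 1 equals the rank of A (here 2), matching the rank-nullity split of V into range and kernel.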

*******

your statement "does this mean T^{2}(x) = T(x^{2})?" is meaningless.

the "x" in the polynomial x^{2} - x isn't a "vector"; it's an indeterminate (a "placeholder symbol" so that we can give a name to a polynomial). if you like, you can think of x as standing for a real (or complex) variable (although this is not quite accurate). we can't "square" vectors: in general, vector multiplication is undefined (we have the scalar multiplication:

scalar times vector = vector

and the inner product:

vector times vector = scalar

but we do not, in an arbitrary vector space, have a product:

vector times vector = vector).