# Self-adjoint linear transformations and eigenvalues.

• Jun 9th 2012, 02:12 AM
anguished
Hi all, I was studying for a Math test until this question stumped me:

Let $V$ be an inner product space and $T : V \rightarrow V$ be a self-adjoint linear transformation such that $T^{2} = T$.

a) Show that all eigenvalues of $T$ are either 0 or 1.

b) Describe the eigenspaces of $T$ in terms of the kernel of $T$, the range of $T$ and $V$.

So for question a) I know that a self-adjoint linear transformation means $\langle T(u), v \rangle = \langle u, T(v) \rangle \ \forall u, v \in V$, and that an eigenvector satisfies $T(u) = \lambda u$ where $\lambda$ is a scalar, but I don't know how to use these to solve the question...

As for b), I know that Nullity T + Rank T = Dim V, which is equivalent to dim(ker T) + dim(im T) = dim V ... but I guess I can't solve this until I know how to do question a)...

Any help would be greatly appreciated.
• Jun 9th 2012, 03:02 AM
Deveno
Re: Self-adjoint linear transformations and eigenvalues.
if $T^2 = T$, then T satisfies the equation $x^2 - x = 0$, which factors as $x(x - 1) = 0$.

thus the minimal polynomial for T divides $x^2 - x$, so it is one of:

a) $x$ (in which case T is the zero transformation), and thus 0 is the only eigenvalue of T,

b) $x - 1$ (in which case T is the identity), and thus 1 is the only eigenvalue of T,

c) $x(x - 1)$, in which case T has both 0 and 1 as eigenvalues.
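(as an aside, part a) can also be done directly from the definitions, without the minimal polynomial - a sketch: if $T(v) = \lambda v$ for some $v \neq 0$, then

$\lambda v = T(v) = T^2(v) = T(T(v)) = T(\lambda v) = \lambda T(v) = \lambda^2 v$

so $(\lambda^2 - \lambda)v = 0$, and since $v \neq 0$, we get $\lambda(\lambda - 1) = 0$, i.e. $\lambda = 0$ or $\lambda = 1$.)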

the eigenspace corresponding to the eigenvalue 0 is called the null space (or kernel) of T. (for what is an eigenvector in this space? it is a non-zero vector v such that T(v) = 0v = 0).

the eigenspace corresponding to the eigenvalue 1 must (in this case) be the range of T (for if for a non-zero w, we have w = T(v), then $T(w) = T(T(v)) = T^2(v) = T(v) = w$, so w is an eigenvector with eigenvalue 1).
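(a quick numerical sanity check of the above, using NumPy; the 2x2 projection matrix here is my own example, not part of the problem):

```python
import numpy as np

# orthogonal projection onto span{(1, 1)} in R^2:
# symmetric (self-adjoint) and idempotent, so it satisfies T^2 = T
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

assert np.allclose(P @ P, P)   # T^2 = T
assert np.allclose(P, P.T)     # self-adjoint (real symmetric)

# eigh returns eigenvalues in ascending order, eigenvectors as columns
eigenvalues, eigenvectors = np.linalg.eigh(P)

v0 = eigenvectors[:, 0]   # eigenvector for eigenvalue 0: spans ker(T)
v1 = eigenvectors[:, 1]   # eigenvector for eigenvalue 1: spans range(T)
assert np.allclose(P @ v0, 0 * v0)   # killed by P, so it lies in the kernel
assert np.allclose(P @ v1, v1)       # fixed by P, so it lies in the range
```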
• Jun 9th 2012, 03:53 AM
anguished
Re: Self-adjoint linear transformations and eigenvalues.
Quote:

Originally Posted by Deveno
if $T^2 = T$, then T satisfies the equation $x^2 - x = 0$, which factors as $x(x - 1) = 0$.

So does this mean $T^{2}(x) = T(x^{2})$? Does this follow from the self-adjoint property?

Quote:

Originally Posted by Deveno
thus the minimal polynomial for T divides $x^2 - x$

Hmm.. I'm not too sure what you mean by this. Would you mind enlightening me?

Thank you for your explanation though, I've got a better idea of what I'm supposed to be doing now.
• Jun 9th 2012, 07:33 AM
Deveno
Re: Self-adjoint linear transformations and eigenvalues.
if p(x) is a polynomial, with $p(x) = a_0 + a_1x + \dots + a_nx^n$,

and T:V→V is a linear transformation (in particular, if T is an n×n matrix that takes the n×1 column vector v to the n×1 column vector Tv),

then if $(a_0I + a_1T + \dots + a_nT^n)(v) = 0$ for all v in V,

we say that T satisfies p(x), or that p(T) = 0.

the monic polynomial m(x) of least degree with m(T) = 0 is called the minimal polynomial for T. it is not hard to show that if p(T) = 0, then m(x) is a factor of p(x) (divide p by m: the remainder also annihilates T but has smaller degree than m, so it must be 0).

the Cayley-Hamilton theorem says that T satisfies its characteristic polynomial det(T - xI) (or, in some texts, det(xI - T), which differs from the first by a factor of $(-1)^n$).

so the minimal polynomial for T, m(x), divides det(T - xI). in particular, every root of m(x) must be an eigenvalue of T.

the converse is also true: any eigenvalue of T is also a root of the minimal polynomial.

for suppose v is an eigenvector of T corresponding to the eigenvalue λ.

then m(T)(v) = m(λ)v, by the same reasoning we developed in the posts above:

(if $m(x) = c_0 + c_1x + \dots + c_{k-1}x^{k-1} + x^k$,

then $m(T)(v) = (c_0I + c_1T + \dots + c_{k-1}T^{k-1} + T^k)(v)$

$= c_0I(v) + c_1T(v) + \dots + c_{k-1}T^{k-1}(v) + T^k(v)$

$= c_0v + c_1(\lambda v) + \dots + c_{k-1}(\lambda^{k-1}v) + \lambda^k v$

$= (c_0 + c_1\lambda + \dots + c_{k-1}\lambda^{k-1} + \lambda^k)v = m(\lambda)v$, as claimed)

but m(T) is the 0-map, by definition of the minimal polynomial, so m(λ)v = 0. since v is a non-zero vector (being an eigenvector),

m(λ) must be 0, that is, λ is a root of m(x).

thus the minimal polynomial tells us what all of the eigenvalues are (but perhaps not their multiplicities).
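(to illustrate the difference between the characteristic and minimal polynomials numerically, here is a small NumPy sketch; the diagonal matrix is my own example):

```python
import numpy as np

# diagonal matrix with eigenvalue 2 (twice) and eigenvalue 3
M = np.diag([2.0, 2.0, 3.0])
I = np.eye(3)

# the characteristic polynomial is (x-2)^2 (x-3), but the minimal
# polynomial is (x-2)(x-3): the lowest-degree monic polynomial killing M
assert np.allclose((M - 2*I) @ (M - 3*I), 0)   # m(M) = 0
assert not np.allclose(M - 2*I, 0)             # neither factor alone works
assert not np.allclose(M - 3*I, 0)

# the roots of m(x) = (x-2)(x-3) are exactly the eigenvalues of M,
# though m(x) does not record the multiplicity of the eigenvalue 2
assert set(np.round(np.linalg.eigvals(M), 10)) == {2.0, 3.0}
```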

*******

your statement "does this mean $T^2(x) = T(x^2)$?" is meaningless.

the "x" in the polynomial $x^2 - x$ isn't a "vector"; it's an indeterminate (a "placeholder symbol" so that we can give a name to a polynomial). if you like, you can think of x as standing for a real (or complex) variable (although this is not quite accurate). we can't "square" vectors: in general, vector multiplication is undefined (we have scalar multiplication:

scalar times vector = vector

and the inner product:

vector times vector = scalar

but we do not, in an arbitrary vector space, have a product:

vector times vector = vector).
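(to make "plugging T into a polynomial" concrete, here is a small NumPy sketch, with an example matrix of my own choosing):

```python
import numpy as np

# evaluating p(x) = x^2 - x at a matrix T: the indeterminate x is replaced
# by T, powers become matrix powers, and a constant term a0 would become a0*I
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # a projection onto the first axis, so T^2 = T

p_of_T = T @ T - T           # p(T) = T^2 - T, a matrix, not a number
assert np.allclose(p_of_T, 0)   # T satisfies p(x) = x^2 - x
```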