I have a square matrix A that is nilpotent. I want to show that every eigenvalue, say L, satisfies L = 0.
I'm not sure how to approach this problem. I have used the definition of an eigenvalue and eigenvector of A corresponding to L:
AX = LX
But I cannot use the fact that A is nilpotent because I don't know how to prove that (A^m)X = (L^m)X. Can anybody help with this?
AH! Thank you very much! I was scratching my head wondering how λ(AX) = λ^2 X, but it was the initial identity we had all along.
I'm not sure if I should make another thread for this, but can you help me on a follow-up question? I want to show that the characteristic polynomial is x^n as well, but I'm not sure where to go with det(xI - A) = 0.
Av = λv for some non-zero vector v, means that:
A^2 v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ^2 v.
proof by induction on n that for an eigenvector v of A with eigenvalue λ, A^n v = λ^n v:
for n = 1, we have Av = λv, which is true because λ is an eigenvalue belonging to the eigenvector v.
suppose that for n = k, A^k v = λ^k v.
then A^(k+1) v = A(A^k v) = A(λ^k v) (by our induction hypothesis)
= λ^k(Av) = λ^k(λv) = λ^(k+1) v, which is the desired result for n = k+1.
therefore, we have that A^n v = λ^n v for all positive natural numbers n.
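the identity above is easy to sanity-check numerically; here's a minimal sketch with numpy (the matrix A and the eigenpair λ = 3, v = (1, 1) are just made-up examples):

```python
import numpy as np

# hypothetical example: v is an eigenvector of A with eigenvalue 3
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0
v = np.array([1.0, 1.0])            # Av = [3, 3] = 3v

# check A^n v = λ^n v for several n
for n in range(1, 8):
    assert np.allclose(np.linalg.matrix_power(A, n) @ v, lam**n * v)
```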
now... if A is nilpotent, there is some m with A^m = 0.
so if λ is an eigenvalue for A, with eigenvector v, we have:
0 = 0v = A^m v = λ^m v, and since v is non-zero (being an eigenvector) λ^m = 0.
that is: λ is a root of x^m = (x - 0)^m, which has the sole root 0 of (algebraic) multiplicity m.
hence λ = 0: a nilpotent matrix can only have 0 as an eigenvalue.
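you can see this concretely with numpy on a hypothetical example, the 3×3 shift matrix (strictly upper triangular, so A^3 = 0):

```python
import numpy as np

# hypothetical 3x3 nilpotent matrix (the shift matrix): A^3 = 0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
assert not np.any(np.linalg.matrix_power(A, 3))   # A^3 = 0, so A is nilpotent

eigenvalues = np.linalg.eigvals(A)
assert np.allclose(eigenvalues, 0.0)              # every eigenvalue is 0
```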
one might ask: how do we know A has any eigenvectors at all?
it could be that A = 0, the 0-matrix. this is certainly nilpotent, and EVERY non-zero vector v is an eigenvector for A, with eigenvalue 0.
otherwise, A ≠ 0, so there is some positive integer r with:
A^r ≠ 0, A^(r+1) = 0.
since A^r ≠ 0, there exists some vector u ≠ 0, with A^r u ≠ 0.
let v = A^r u. this is a non-zero vector, and:
Av = A(A^r u) = A^(r+1) u = 0u = 0 = 0v, so we see v is an eigenvector (with eigenvalue 0).
thus we are fully justified in assuming A has (at least one) eigenvector(s), which then have associated eigenvalues (which we proved are all 0).
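the construction above (find r, pick a suitable u, set v = A^r u) can be sketched in numpy; the matrix A is again a hypothetical example:

```python
import numpy as np

# hypothetical nonzero nilpotent A; find r with A^r != 0 and A^(r+1) = 0
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
r = 1
while np.any(np.linalg.matrix_power(A, r + 1)):
    r += 1
Ar = np.linalg.matrix_power(A, r)

# pick u with A^r u != 0: a standard basis vector hitting a nonzero column of A^r
j = int(np.flatnonzero(Ar.any(axis=0))[0])
u = np.zeros(A.shape[0])
u[j] = 1.0

v = Ar @ u                          # v != 0 by construction ...
assert np.any(v)
assert not np.any(A @ v)            # ... and Av = A^(r+1) u = 0 = 0v
```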
EDIT: for your second question, note that if m is the SMALLEST positive integer with A^m = 0, then the minimal polynomial m(x) of A divides x^m.
thus m(x) = x^t, for some positive integer t ≤ m (note t ≠ 0, since the minimal polynomial of A is monic, and thus of degree ≥ 1).
but we have for all 0 < t < m, A^t ≠ 0 (since m is the smallest such positive integer with this property). hence t = m.
since by Cayley-Hamilton m(x) | p(x), the characteristic polynomial of A, we have p(x) = x^m q(x).
but the roots of q(x) are roots of p(x) as well, and every root of p(x) is 0 (from what we did above). hence q(x) = x^(n-m), where n is the dimension of dom(A), and so p(x) = x^m · x^(n-m) = x^n.
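to check the conclusion p(x) = x^n numerically, np.poly returns the coefficients of det(xI - A) (highest degree first); for the same hypothetical 3×3 shift matrix we expect [1, 0, 0, 0], i.e. x^3:

```python
import numpy as np

# characteristic polynomial of a hypothetical nilpotent shift matrix
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
coeffs = np.poly(A)                 # coefficients of det(xI - A), highest degree first
assert np.allclose(coeffs, [1.0, 0.0, 0.0, 0.0])   # i.e. p(x) = x^3 = x^n
```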