Now, for my work:

1.) I know that, in general, for k greater than or equal to 1, A^(k) = PD^(k)P^(-1), since the P^(-1)P factors in the middle cancel.
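Here's a quick numeric sanity check of that identity in numpy, using a made-up 2x2 P and D (all the values here are hypothetical, just for illustration):

```python
import numpy as np

# Made-up eigenvector matrix P (columns) and diagonal D of eigenvalues.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])
D = np.diag([3.0, 2.0])

# Build A = P D P^(-1), then compare A^k against P D^k P^(-1).
A = P @ D @ np.linalg.inv(P)
k = 5
lhs = np.linalg.matrix_power(A, k)
rhs = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))
```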

A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, that is, if A = PDP^(-1) for some invertible matrix P and some diagonal matrix D.

An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP^(-1), with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are the eigenvalues of A that correspond, respectively, to the eigenvectors in P.

In other words, A is diagonalizable if and only if there are enough eigenvectors to form a basis of R^(n); that is, we need the geometric multiplicity of each eigenvalue to equal its algebraic multiplicity. So, how do we diagonalize a matrix?

1.) Find the eigenvalues of A;

2.) Find n linearly independent eigenvectors of A;

3.) Construct P from the vectors in step 2;

4.) Construct D from the corresponding eigenvalues.
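The four steps above can be sketched in numpy; the 2x2 matrix here is made up just to show the mechanics:

```python
import numpy as np

# A made-up diagonalizable matrix, just for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eigenvalues and eigenvectors (eig returns eigenvectors as columns).
eigvals, eigvecs = np.linalg.eig(A)

# Step 3: P has the eigenvectors as its columns.
P = eigvecs

# Step 4: D has the matching eigenvalues on its diagonal, in the same order.
D = np.diag(eigvals)

# Check the factorization; this only works if P is invertible,
# i.e. if the n eigenvectors really are linearly independent.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))
```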

So I found the eigenvalues of the matrix A to be 1,1,0,0.

And the eigenvectors to be:

[0, 2, {Vector[Row](1..4,[]), Vector[Row](1..4,[])}], [1, 2, {Vector[Row](1..4,[]), Vector[Row](1..4,[])}] (Maple notation). I'm a little confused by the notation it gives back; each triple appears to be [eigenvalue, algebraic multiplicity, {basis of the eigenspace}], so eigenvalue 0 has multiplicity 2 with two basis eigenvectors, and likewise for eigenvalue 1.
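For comparison, SymPy's eigenvects() returns triples of the same shape as Maple's, which may help decode that output. A small made-up example with a repeated eigenvalue:

```python
from sympy import Matrix

# Hypothetical 3x3 example: eigenvalue 1 with multiplicity 2, eigenvalue 0 with 1.
M = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 0]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, [eigenspace basis])
# triples, the same shape as Maple's [value, multiplicity, {vectors}] output.
for val, mult, basis in M.eigenvects():
    print(val, mult, len(basis))
```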

Anyway, I made my new matrix P = [[1, -5/2, 3/2, 0], [0, 1/2, -3/2, 1], [6, 3, 1, 0], [-5, -2, 0, 1]], since those are the eigenvectors.

D will be the eigenvalues down the diagonal, in the same order as the corresponding eigenvectors appear in the columns of P: for example, D = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]] if the first two columns of P belong to eigenvalue 1 and the last two to eigenvalue 0.

But since order is now important, that is, the order of the eigenvalues in D has to match the order chosen for the columns of P, I have to be careful. This is where I get stuck.
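One way to be careful about the order, assuming the columns of P are the eigenvectors: check A p_i = lambda_i p_i column by column. A sketch using the P above; since I don't have the original A in this snippet, I rebuild a stand-in A from an assumed ordering (with the real A you would use it directly and the checks would expose a wrong ordering):

```python
import numpy as np

# The P from above (its columns are treated as the eigenvectors below).
P = np.array([[ 1.0, -5/2,  3/2, 0.0],
              [ 0.0,  1/2, -3/2, 1.0],
              [ 6.0,  3.0,  1.0, 0.0],
              [-5.0, -2.0,  0.0, 1.0]])

# Assumed ordering: first two columns go with eigenvalue 1, last two with 0.
lams = [1.0, 1.0, 0.0, 0.0]
D = np.diag(lams)

# Stand-in A built from P and D; replace this with the actual A if you have it.
A = P @ D @ np.linalg.inv(P)

# The ordering is right exactly when A p_i = lam_i * p_i for every column i.
for i, lam in enumerate(lams):
    print(i, np.allclose(A @ P[:, i], lam * P[:, i]))
```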

Part b? No idea.

And for #2, I'm not sure where to start.