1. ## Diagonalizable/Similar Matrices

I have a couple of Linear Algebra questions.

1.) Given a Matrix:

A = ([[5/6,1/3,0,-1/6],[1/3,11/42,3/14,4/21],[0,3/14,5/14,3/7],[-1/6,4/21,3/7,23/42]])

Where the numbers in each of the []'s form a row (the above matrix corresponding to a 4x4 matrix);

a.) Is the matrix A diagonalizable? If it is, diagonalize A. If it is not, explain why it is not. Is A invertible? If it is, find A^(-1). If it is not, explain why it is not.

b.) By looking at the above example, we are easily able to check that A^(2) = A and that A^(T) = A, even though A =/ I (A does not equal I). Consider the general case of an n x n matrix B, with B =/ I and B^(2) = B. Show that B is not invertible.

2.) Suppose A is an n x n matrix whose rows are all identical: A = ([[c_1, c_2, ..., c_n],[c_1, c_2, ..., c_n],...,[c_1, c_2, ..., c_n]]). Also, assume that the entries in each row of A do not add up to 0. That is,

Sum of c_i from i=1...n =/ 0.

Find all of the eigenvalues, their (algebraic) multiplicities, as well as the dimensions of their eigenspaces.

2. Now, for my work:

1.) I know that in general, for k greater than 1, A^(k) = PD^(k)P^(-1)

A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, that is, if A = PDP^(-1) for some invertible matrix P and some diagonal matrix D.

An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP^(-1), with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are the eigenvalues of A that correspond, respectively, to the columns of P.
In other words, A is diagonalizable if and only if there are enough eigenvectors to form a basis of R^(n). So, we want the geometric multiplicity of each eigenvalue to equal its algebraic multiplicity. So, how do we diagonalize a matrix?

1.) Find the eigenvalues of A;
2.) Find n linearly independent eigenvectors of A;
3.) Construct P from the vectors in step 2;
4.) Construct D from the corresponding eigenvalues.
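As a sanity check, the four steps above can be sketched numerically (assuming numpy; the 2x2 matrix here is an illustrative example of my own, not the A from the problem):

```python
import numpy as np

# An illustrative 2x2 matrix (not the A from the problem).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eigenvalues and a full set of eigenvectors.
# np.linalg.eig returns the eigenvectors as the columns of the second array.
eigvals, eigvecs = np.linalg.eig(A)

# Step 3: P has the eigenvectors as its columns.
P = eigvecs

# Step 4: D has the corresponding eigenvalues on the diagonal,
# in the same order as the columns of P.
D = np.diag(eigvals)

# Check the factorization A = P D P^(-1).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```

Here the characteristic polynomial is x^2 - 7x + 10, so the eigenvalues are 5 and 2.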

So I found the eigenvalues of the matrix A to be 1,1,0,0.

And the eigenvectors to be:

[0, 2, {Vector[Row](1..4,[]), Vector[Row](1..4,[])}], [1, 2, {Vector[Row](1..4,[]), Vector[Row](1..4,[])}] (Maple notation: each entry is [eigenvalue, multiplicity, {eigenvectors}]), though I'm a little confused by the notation it gives back.

Anyway, I made my new matrix P = [[1, (-5)/2, 3/2, 0], [0, 1/2, (-3)/2, 1], [6, 3, 1, 0], [-5, -2, 0, 1]] since those are the eigenvectors.

D will be the eigenvalues down the diagonal, i.e. a diagonal matrix with entries 1, 1, 0, 0 in some order.

But order is now important: the order of the eigenvalues has to match the order chosen for the columns of P, so I have to be careful. This is where I get stuck.
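One way around the ordering worry, at least numerically: numpy's eig pairs eigvals[i] with column i of the eigenvector matrix, so P and D come out matched automatically (a sketch, assuming numpy, on the matrix A from the problem):

```python
import numpy as np

# The 4x4 matrix A from the problem.
A = np.array([[ 5/6,   1/3,   0,    -1/6 ],
              [ 1/3,  11/42,  3/14,  4/21],
              [ 0,     3/14,  5/14,  3/7 ],
              [-1/6,   4/21,  3/7,  23/42]])

eigvals, P = np.linalg.eig(A)   # eigvals[i] pairs with column P[:, i]
D = np.diag(eigvals)            # so this D automatically matches P

print(np.sort(eigvals.real))                     # approximately 0, 0, 1, 1
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True: A = P D P^(-1)
print(np.allclose(A @ A, A))                     # True: A^2 = A, as part b) says
```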

Part b? No idea.

And for #2 I'm not sure where to start.

3. Originally Posted by fifthrapiers

b.) By looking at the above example, we are easily able to check that A^(2) = A and that A^(T) = A, even though A =/ I (A does not equal I). Consider the general case of an n x n matrix B, with B =/ I and B^(2) = B. Show that B is not invertible.
Let $\displaystyle B\ne I$ be a square matrix such that:

$\displaystyle B^2=B$

Now suppose that $\displaystyle B$ has inverse $\displaystyle B^{-1}$ then:

$\displaystyle B^2 B^{-1}=B B^{-1}$

or, regrouping:

$\displaystyle B(BB^{-1})=B \quad\text{while}\quad BB^{-1}=I$

so:

$\displaystyle B=I$

A contradiction, hence any matrix $\displaystyle B\ne I$ that satisfies $\displaystyle B^2=B$ cannot be invertible.

RonL
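A quick numerical illustration of this result (assuming numpy; the 2x2 projection below is just an illustrative idempotent matrix):

```python
import numpy as np

# Projection onto the x-axis: idempotent, symmetric, but not the identity.
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

print(np.allclose(B @ B, B))       # True: B^2 = B
print(np.allclose(B, np.eye(2)))   # False: B != I
print(np.linalg.det(B))            # 0.0, so B is not invertible
```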

4. Originally Posted by fifthrapiers
I have a couple of Linear Algebra questions.

1.) Given a Matrix:

A = ([[5/6,1/3,0,-1/6],[1/3,11/42,3/14,4/21],[0,3/14,5/14,3/7],[-1/6,4/21,3/7,23/42]])

Where the numbers in each of the []'s form a row (the above matrix corresponding to a 4x4 matrix);

a.) Is the matrix A diagonalizable? If it is, diagonalize A. If it is not, explain why it is not. Is A invertible? If it is, find A^(-1). If it is not, explain why it is not.
You have shown that the eigenvalues of $\displaystyle A$ are $\displaystyle 1,\ 1,\ 0,\ 0$, and found a
set of 4 (linearly independent) eigenvectors. So if $\displaystyle B$ is the matrix whose
columns are these eigenvectors, we have:

$\displaystyle B^{-1} A B$ is diagonal with the eigenvalues down the diagonal.

But two of these values on the diagonal are zero, so this matrix is not
invertible and so $\displaystyle A$ is not invertible.

RonL
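Numerically, the same conclusion drops out of the fact that the determinant is the product of the eigenvalues (a quick check with numpy on the matrix from the problem):

```python
import numpy as np

A = np.array([[ 5/6,   1/3,   0,    -1/6 ],
              [ 1/3,  11/42,  3/14,  4/21],
              [ 0,     3/14,  5/14,  3/7 ],
              [-1/6,   4/21,  3/7,  23/42]])

eigvals = np.linalg.eigvals(A)
print(abs(np.prod(eigvals)))   # ~0: the zero eigenvalues kill the product
print(abs(np.linalg.det(A)))   # ~0: det A = product of eigenvalues, so A is singular
```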

5. Originally Posted by fifthrapiers
2.) Suppose A is an n x n matrix whose rows are all identical: A = ([[c_1, c_2, ..., c_n],[c_1, c_2, ..., c_n],...,[c_1, c_2, ..., c_n]]). Also, assume that the entries in each row of A do not add up to 0. That is,

Sum of c_i from i=1...n =/ 0.

Find all of the eigenvalues, their (algebraic) multiplicities, as well as the dimensions of their eigenspaces.
Let's assume that A is a square matrix. Given the conditions defining it,
it is clear that the rank of A is 1. The rank theorem (rank-nullity) then tells us that the dimension
of the null space of A is n-1.

Thus A has one nonzero eigenvalue, and 0 is an eigenvalue of multiplicity
(n-1). An obvious eigenvector is (1, 1, ..., 1)', and its eigenvalue is
c = sum(i=1..n) c_i.

So we have

Code:
e-value         multiplicity     Dim of E-space

c                   1                    1
0                  (n-1)               (n-1)
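The table can be checked on a small instance (a sketch with numpy; c = (1, 2, 3), with sum 6, is an illustrative choice satisfying the hypothesis):

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])   # sum is 6, which is nonzero
A = np.tile(c, (3, 1))          # every row of A equals c

eigvals = np.linalg.eigvals(A)
print(np.sort(eigvals.real))    # approximately 0, 0, 6

# (1, 1, 1)' is an eigenvector with eigenvalue c_1 + c_2 + c_3 = 6
ones = np.ones(3)
print(np.allclose(A @ ones, c.sum() * ones))   # True
```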

6. Ahh, I was making part a harder than it is. So if any of the eigenvalues on the diagonal are 0, the matrix is not invertible. Now it makes sense: the product of the eigenvalues equals the determinant, and if the determinant is 0, the matrix cannot be invertible.

Thanks, CaptainBlack.

7. Originally Posted by CaptainBlack
Let's assume that A is a square matrix. Given the conditions defining it,
it is clear that the rank of A is 1. The rank theorem (rank-nullity) then tells us that the dimension
of the null space of A is n-1.

Thus A has one nonzero eigenvalue, and 0 is an eigenvalue of multiplicity
(n-1). An obvious eigenvector is (1, 1, ..., 1)', and its eigenvalue is
c = sum(i=1..n) c_i.

So we have

Code:
e-value         multiplicity     Dim of E-space

c                   1                    1
0                  (n-1)               (n-1)
We haven't learned about "rank of A" yet or "null-space". Is there a way to prove this without using the above?

And, I am assuming Dim of E-space is the dimensions of the eigenspaces (geometric multiplicities, I think).

8. [quote=fifthrapiers;26229]We haven't learned about "rank of A" yet or "null-space". Is there a way to prove this without using the above?[/quote]

I'll have a think about another method; at the moment other methods look
laborious, but I will see what I can find.

[quote=fifthrapiers]And, I am assuming Dim of E-space is the dimensions of the eigenspaces (geometric multiplicities, I think).[/quote]
The null space is the space of all vectors X for which AX = 0, which is
clearly the same thing as the eigenspace of A associated with the
eigenvalue 0.

RonL

9. Laborious works, as long as it only uses what I have learned. We have covered pretty much everything on determinants, eigenvectors, eigenvalues, the characteristic equation/polynomial, some material on Markov chains, and the properties of invertible (and singular) matrices.

10. Originally Posted by fifthrapiers

2.) Suppose A is an n x n matrix whose rows are all identical: A = ([[c_1, c_2, ..., c_n],[c_1, c_2, ..., c_n],...,[c_1, c_2, ..., c_n]]). Also, assume that the entries in each row of A do not add up to 0. That is,

Sum of c_i from i=1...n =/ 0.

Find all of the eigenvalues, their (algebraic) multiplicities, as well as the dimensions of their eigenspaces.
I think this still needs some polishing, but here it is anyway. The basic idea is the same as in the previous solution for this problem: show the obvious eigenvalue has multiplicity 1, and that the only other eigenvalue is 0, which has multiplicity n-1.

For any $\displaystyle X \in \mathbb{R}^n$ we find that $\displaystyle AX=\kappa_X O$ for some $\displaystyle \kappa_X \in \mathbb{R}$ (in fact $\displaystyle \kappa_X = \sum_{i=1}^n c_i x_i$), where $\displaystyle O$ is the vector all of whose elements are $\displaystyle 1$; this can be checked by multiplying the expression out.

Now let's write $\displaystyle X=\frac{\kappa_X}{c} O+Y$, then as $\displaystyle AO=cO$:

$\displaystyle AX = \frac{c \kappa_X}{c} O+AY=\kappa_X O+AY$

But $\displaystyle AX=\kappa_X O$, hence $\displaystyle AY=0$.

So we conclude that $\displaystyle \mathbb{R}^n$ is the direct sum of the space spanned by $\displaystyle O$ and the null space of $\displaystyle A$ (the two intersect only in $\displaystyle 0$, since $\displaystyle AO=cO\ne 0$), and so the null space of $\displaystyle A$ has dimension $\displaystyle n-1$.

From this we conclude that the dimension of the eigenspace corresponding to the eigenvalue $\displaystyle c$ is $\displaystyle 1$, and so the multiplicity of the eigenvalue $\displaystyle c$ is 1. Also, the eigenspace corresponding to the eigenvalue $\displaystyle 0$ is the null space of $\displaystyle A$, and so is of dimension $\displaystyle n-1$; hence the multiplicity of $\displaystyle 0$ is $\displaystyle n-1$.

RonL
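This decomposition can be verified numerically: kappa_X works out to c_1 x_1 + ... + c_n x_n, so AX = kappa_X O, and Y = X - (kappa_X / c) O then lies in the null space (a sketch with numpy; the c_i and X below are illustrative values only):

```python
import numpy as np

c_vec = np.array([1.0, 2.0, 0.5, -1.0, 3.0])  # illustrative c_i, sum = 5.5 != 0
c = c_vec.sum()
n = len(c_vec)
A = np.tile(c_vec, (n, 1))   # every row of A is (c_1, ..., c_n)
O = np.ones(n)               # the all-ones vector O

X = np.array([2.0, -1.0, 0.0, 4.0, 1.0])  # an arbitrary test vector
kappa = c_vec @ X            # kappa_X = sum of c_i x_i
print(np.allclose(A @ X, kappa * O))   # True: AX = kappa_X O

Y = X - (kappa / c) * O      # the decomposition X = (kappa_X / c) O + Y
print(np.allclose(A @ Y, 0)) # True: Y is in the null space of A
```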

11. What does it mean for a matrix to have "rank 1"? Does it have something to do with the linearly independent rows/columns, or something to that effect?

12. Originally Posted by fifthrapiers
What does it mean for a matrix to have "rank 1"? Does it have something to do with the linearly independent rows/columns, or something to that effect?
It means that the number of linearly independent rows (or columns) is 1.

So for a square n x n matrix the minimum rank is 0 (attained only by the zero
matrix), and the maximum rank is n.

RonL
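For what it's worth, numpy can compute the rank directly (a sketch; both matrices below are illustrative examples):

```python
import numpy as np

# A matrix whose rows are all equal: only one independent row, so rank 1.
A = np.tile(np.array([1.0, 2.0, 3.0]), (3, 1))
print(np.linalg.matrix_rank(A))           # 1

# The identity has n independent columns, so it has full rank n.
print(np.linalg.matrix_rank(np.eye(4)))   # 4
```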