Wolfram Mathematica confirms your result and rejects the "apparent" answer. I, too, cannot see what's wrong with your steps.
Wasn't sure where to put this exactly. It's in a question to do with diagonalising matrices (using the matrix of eigenvectors and what not), and I have the solution in front of me, but I'm stuck on how they get from one step to another. I understand the solution up to here...
...which I simplified to...
Now, the answer is apparently , but no matter how I manipulate the expression above, I can't get this answer.
Any ideas where I'm going wrong / what I'm missing?
My suspicion is that you have left out a minor detail in this question. Were you instructed to orthogonally diagonalize the matrix? It appears to me that that is precisely what has been done.
Also here is how you go from your last step to their answer (minus the making sure it has unit norm).
Notice we are just multiplying by a carefully chosen 1 here in both terms (to get rid of the imaginary stuff in the denominator).
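For concreteness, here is that trick on a hypothetical entry of the form 1/(1+i) (an assumed shape for the entry; the same move works with 1-i in the denominator):

```latex
\frac{1}{1+i}
= \frac{1}{1+i}\cdot\frac{1-i}{1-i}
= \frac{1-i}{(1+i)(1-i)}
= \frac{1-i}{1-i^{2}}
= \frac{1-i}{2}
```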
Now all you have to do is scale both things on the diagonal so that they have norm 1. So we notice that:
This is why I believe they have supplied the answer that they did, to make sure the entries had norm 1. This is an important step to remember in the process of orthogonal diagonalization. Hope I didn't catch ya too late.
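As a sketch of the norm-1 rescaling, assuming an entry came out as (1-i)/2:

```latex
\left|\tfrac{1-i}{2}\right| = \tfrac{\sqrt{2}}{2} = \tfrac{1}{\sqrt{2}},
\qquad
\sqrt{2}\cdot\tfrac{1-i}{2} = \tfrac{1-i}{\sqrt{2}},
\qquad
\left|\tfrac{1-i}{\sqrt{2}}\right| = 1
```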
When you diagonalise a matrix, i.e. find matrices P, D such that A = P^-1 * D * P and D is diagonal, the trace and determinant of D must equal those of A, since similar matrices share both.
For 2x2 matrices, checking the trace is an easy and worthwhile check to perform.
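That check is easy to script. A minimal sketch, using a hypothetical 2x2 matrix rather than the one from the thread:

```python
import numpy as np

# A hypothetical diagonalisable 2x2 matrix (not the one from the thread).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

evals, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P      # similar to A, and (numerically) diagonal

# Similar matrices share trace and determinant, so this is a cheap sanity check:
print(np.isclose(np.trace(D), np.trace(A)))          # True
print(np.isclose(np.linalg.det(D), np.linalg.det(A)))  # True
```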
I've not heard the term orthogonal diagonalization before, and a quick Google search suggests that it is a diagonalization where P is an orthogonal matrix.
That doesn't seem to match up with what Gamma has posted.
I also don't understand what changing a diagonal matrix so all its entries have norm 1 is
supposed to achieve, since the final matrix is usually no longer similar to the starting matrix; in fact it doesn't really have anything in common with it. Also, since the matrix has complex entries, wouldn't you look for a diagonalization using unitary rather than orthogonal matrices?
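On the unitary point: a real matrix with complex eigenvalues can still be diagonalised by a unitary matrix when it is normal. A sketch using a rotation matrix (the angle is an arbitrary choice for illustration):

```python
import numpy as np

# A rotation matrix is orthogonal, hence normal, hence unitarily diagonalisable.
theta = np.pi / 4           # arbitrary angle for illustration
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

evals, U = np.linalg.eig(R)   # numpy returns unit-norm eigenvector columns
# The eigenvalues are distinct and R is normal, so the columns of U are
# orthonormal, i.e. U is unitary:
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
```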
OK so in fact we have

$$R = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}.$$
That is the matrix of a 45 degree rotation. Therefore it clearly has complex eigenvalues that are conjugates of each other, and since the determinant is 1 they both have norm 1. So R is similar to the complex matrix

$$\begin{pmatrix} e^{i\pi/4} & 0 \\ 0 & e^{-i\pi/4} \end{pmatrix}$$
which if you express it using ordinary complex numbers is

$$\frac{1}{\sqrt{2}}\begin{pmatrix} 1+i & 0 \\ 0 & 1-i \end{pmatrix}.$$
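You can verify those eigenvalues numerically (a sketch; numpy may return them in a different order):

```python
import numpy as np

theta = np.pi / 4   # 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

evals = np.linalg.eigvals(R)
expected = np.array([(1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)])
# Sort both so the comparison is order-independent:
print(np.allclose(np.sort_complex(evals), np.sort_complex(expected)))  # True
```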
I know I have shown no working for this, but every rotation matrix has an analogous diagonal form to this one.
The fact I got the inverse matrix from the answer you have is irrelevant - 2x2 matrices with the same trace and determinant have the same eigenvalues, and when those eigenvalues are distinct (as here) they are all similar, so I am just going to get a different G from you, or you can swap the eigenvalues around.
Now you know the correct eigenvalues it should be routine to calculate the eigenvectors and hence G. I can't do that in my head though!
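Letting numpy do the routine part (the eigenvector matrix it returns is one valid choice of G; yours may differ by column scaling or column order):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

evals, G = np.linalg.eig(R)       # columns of G are eigenvectors of R
D = np.linalg.inv(G) @ R @ G      # diagonal, with e^{+-i pi/4} on the diagonal
print(np.allclose(D, np.diag(evals)))   # True
```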
This isn't really helping me with my initial problem, though. I just didn't understand how to get the answer that's given in the solution. The solution tells me how to get the eigenvalues and eigenvectors, which are...
with eigenvector , and with eigenvector
It then tells me that "an invertible matrix which diagonalises a matrix is given by a matrix of eigenvectors", giving me a of , therefore
My problem is that the I get is different to the one given in the solution here.
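On the "mine differs from the book's" point: eigenvectors are only determined up to a nonzero scalar, so many different matrices of eigenvectors diagonalise the same matrix, and they all produce the same diagonal D. A sketch using the 45 degree rotation from earlier in the thread (the particular eigenvectors (1, -i) and (1, i) are one conventional choice, not necessarily the book's):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# (1, -i) belongs to e^{i pi/4} and (1, i) to e^{-i pi/4}; rescaling the
# columns by any nonzero scalars gives an equally valid eigenvector matrix.
P1 = np.array([[1, 1], [-1j, 1j]])
P2 = np.array([[2, -3j], [-2j, 3]])   # columns of P1 scaled by 2 and -3i

D1 = np.linalg.inv(P1) @ R @ P1
D2 = np.linalg.inv(P2) @ R @ P2
print(np.allclose(D1, D2))   # True: both give diag(e^{i pi/4}, e^{-i pi/4})
```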
Also, something else I don't get in the given solution (there's probably an easier explanation to this) is this...
See how the swapped to a from the first line to the second? Why is that?