I attached the 3 problems I'm having trouble with. They're easier to see in the attachment than if I typed them.
Can someone please explain these to me step by step or guide me through them? I'm not even sure where to begin.
Thank you
A_4 is the matrix
[.1  .6
 .9  .4]
For the spectrum of self-adjoint matrices, you can proceed as follows:
Assume $A = A^{*}$. (This is the more common notation for the Hilbert adjoint.) Further assume that $Ax = \lambda x$ for some nonzero vector $x$.
Now examine the inner product $\langle x, Ax \rangle = \langle x, \lambda x \rangle = \lambda \langle x, x \rangle.$ However, it's also true that $\langle x, Ax \rangle = \langle Ax, x \rangle = \langle \lambda x, x \rangle = \overline{\lambda}\,\langle x, x \rangle$ by adjointness. Hence, $\lambda \langle x, x \rangle = \overline{\lambda}\,\langle x, x \rangle.$
Can you see where to go from here?
Ok. I've just shown that $\lambda \langle x, x \rangle = \overline{\lambda}\,\langle x, x \rangle,$ and we know that $\langle x, x \rangle \neq 0.$ So what can you do with this equation?
By the way, $\sigma(A)$ is a standard notation in functional analysis for the spectrum of the operator $A$. If $A$ is a finite-dimensional matrix, then $\sigma(A) = \sigma_{p}(A) = \{\text{eigenvalues of } A\}.$ The middle term there is called the point spectrum, and the last set there is just the eigenvalues. The eigenvalues, by definition, are equal to the point spectrum, and that's true of any operator.
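For a finite-dimensional matrix, the claim that $\sigma(A)$ is just the set of eigenvalues is easy to check numerically. A small sketch (my own, borrowing the $A_4$ from the first problem purely as a convenient example):

```python
import numpy as np

# For a finite-dimensional matrix, sigma(A) is just the set of eigenvalues.
A4 = np.array([[0.1, 0.6],
               [0.9, 0.4]])

# sigma(A_4) = {-0.5, 1}
print(np.sort(np.linalg.eigvals(A4)))
```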
I honestly still don't understand the spectrum one. I just don't see where I'm supposed to be going with it. We were never told what sigma(A) means or what it's called, so I couldn't find it in the book. Thank you for explaining that.
As for the first problem, I diagonalized it for another problem and got
[1   0
 0  -.5]
The problem before the first one told me that q = [2/5 3/5]^T for A_4. I don't see the limit going to that.
About the spectrum problem:
You can divide out by the $\langle x, x \rangle$ and you're left with the equation $\lambda = \overline{\lambda}.$ What does that tell you?
About the Limits of Time Series problem: when you diagonalized $A_4$, you found an invertible matrix $P$ such that $A_4 = P D P^{-1},$ where $D$ is the diagonal matrix.
So, proving that
$\lim_{n \to \infty} A_4^{\,n}\, x_0 = q$
is the same as proving that
$\lim_{n \to \infty} P D^{n} P^{-1} x_0 = q.$
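The reason these are the same: $A_4^n = (PDP^{-1})^n = PD^nP^{-1}$, because the interior $P^{-1}P$ factors cancel. A quick numerical sanity check of that identity (my own sketch, not part of the assignment):

```python
import numpy as np

A4 = np.array([[0.1, 0.6],
               [0.9, 0.4]])

# Diagonalize numerically: columns of P are eigenvectors of A4.
eigvals, P = np.linalg.eig(A4)
D = np.diag(eigvals)

# (P D P^{-1})^n = P D^n P^{-1}: the inner P^{-1} P factors cancel.
n = 7
lhs = np.linalg.matrix_power(A4, n)
rhs = P @ np.linalg.matrix_power(D, n) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))  # True
```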
About the Connection to Transposition problem: are there any assumptions about the size of $A$?
Doesn't that mean that lambda is a constant, a real constant, since its conjugate is the same?
I'll try that. So can I just take diagonal entries of the matrix to the nth power? Also, I don't see that going to the q from the problem before this one.
I think it's a square matrix. I thought about this one: since det(A) = det(A^T), you can relate that to finding eigenvalues, which means the eigenvalues would be the same. So there would be n-many eigenvectors, correct? Does that logic make sense and prove it?
Well, eigenvalues are assumed to be constants already. But yes, if a number is equal to its complex conjugate, then it's real. This means you're done with that problem: you can now show that if a number is an eigenvalue, then it's real. That proves the set inclusion property you were asked to show.
You tell me whether the nth power of a diagonal matrix can be computed by taking the nth power of the numbers on the diagonal. Hint: try squaring a diagonal matrix. You'll see what happens. Once you square it, try cubing it. Etc. Incidentally, I wouldn't recommend taking the limit of the matrix and then computing the LHS. I think the diagonalization will allow you to compute everything on the LHS of the equation that is to the right of the limit sign, and then you'd take the limit. The result should be a column vector. Show me what you have, and we'll see where that goes.
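To see the hint in action, here is a tiny numerical sketch (my own, using the $D$ from the diagonalization): powering a diagonal matrix just powers its diagonal entries, and the off-diagonal zeros stay zero.

```python
import numpy as np

D = np.diag([1.0, -0.5])

# Squaring and cubing a diagonal matrix powers the diagonal entries.
print(D @ D)                         # diag(1, 0.25)
print(D @ D @ D)                     # diag(1, -0.125)
print(np.linalg.matrix_power(D, 3))  # same as the line above
```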
The eigenvalues would be the same, I agree.
I think it likely. But it needs a proof. The eigenvectors of a matrix would not be, I think, the same as the eigenvectors of its transpose. Therefore, it's not inherently obvious, at least to me, that there would be the same number of linearly independent eigenvectors.
I asked if you knew what the size of $A$ was. Of course it's square, or the whole eigenvalue process would be undefined. I'm wondering if it's n x n or not. Because if it is, there might be a very nice way of relating the eigenvectors of $A$ to those of $A^{T}.$
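The eigenvalue half of the claim is easy to check numerically; a small sketch (my own, reusing $A_4$ from the first problem just as a convenient square matrix):

```python
import numpy as np

A4 = np.array([[0.1, 0.6],
               [0.9, 0.4]])

# det(A - lambda*I) = det((A - lambda*I)^T) = det(A^T - lambda*I),
# so A and A^T share a characteristic polynomial, hence eigenvalues.
print(np.sort(np.linalg.eigvals(A4)))    # eigenvalues of A
print(np.sort(np.linalg.eigvals(A4.T)))  # the same set for A^T

# The eigenvectors, however, generally differ:
_, V = np.linalg.eig(A4)
_, W = np.linalg.eig(A4.T)
print(np.allclose(np.abs(V), np.abs(W)))  # False for this matrix
```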
Oh ok, that makes a little more sense. I'm still a little confused about when you said $\langle Ax, x \rangle = \langle \lambda x, x \rangle = \overline{\lambda}\,\langle x, x \rangle.$ How do you go from the 2nd part to the 3rd? Is it because lambda is just a constant?
I tried multiplying diagonal matrices and it is the nth power on the diagonal. So I got

P =
[ 2  -1
  3   1 ]

P^(-1) =
[  1  1
  -3  2 ]

When I compute P * D^n * P^(-1), I get

[ .5^n   3^n
  4.5^n  2^n ]

Am I on the right track?
I think it is n x n. I'm not sure how to relate the eigenvectors, but I understand the eigenvalue part.
This is one of the axioms of inner products. In physics, at least, we assume that the inner product is linear in the second term. That is, $\langle x, \alpha y \rangle = \alpha \langle x, y \rangle$ and $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$ for all scalars $\alpha$ (yes, constants) and vectors $x$, $y$, and $z$. With the inner product having conjugate symmetry, $\langle x, y \rangle = \overline{\langle y, x \rangle},$ you can show that the inner product is conjugate-linear in the first term: $\langle \alpha x, y \rangle = \overline{\alpha}\,\langle x, y \rangle.$ That's where the complex conjugate of $\lambda$ came from.
Moving on to the second problem, I agree with your $P$, but not with your $P^{-1}$. You need to multiply the matrix you have by 1/5 to get the correct inverse. I also think your $P D^{n} P^{-1}$ needs a little more careful work. Each entry in the matrix there is going to be the sum of two different elements to the nth power. Incidentally, the q you mentioned in post # 10 is correct.
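For reference, here is a sketch of the corrected computation with the 1/5 factor restored (the large-n check at the end is my own illustration, not part of the original post):

```python
import numpy as np

P = np.array([[2.0, -1.0],
              [3.0,  1.0]])
P_inv = np.array([[ 1.0, 1.0],
                  [-3.0, 2.0]]) / 5.0   # note the 1/5 factor
D = np.diag([1.0, -0.5])

# Sanity check: this really is the inverse of P.
print(np.allclose(P @ P_inv, np.eye(2)))  # True

# D^n -> diag(1, 0) as n grows, so P D^n P^{-1} -> (1/5)[[2, 2], [3, 3]],
# i.e. both columns tend to q = [2/5, 3/5]^T.
n = 50
An = P @ np.linalg.matrix_power(D, n) @ P_inv
print(An)
```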
About the third problem. See here for a very interesting discussion of left and right eigenvectors. They have much to do with the transpose matrix. You might find either what you need, or an idea. You might try playing around with determinants, perhaps of Equation (18).
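The left/right eigenvector idea can be sketched numerically (my own demo, not from the linked discussion): a left eigenvector of $A$, a row vector $w^T$ with $w^T A = \lambda w^T$, is exactly a right eigenvector of $A^T$, since transposing $A^T w = \lambda w$ gives $w^T A = \lambda w^T$.

```python
import numpy as np

A4 = np.array([[0.1, 0.6],
               [0.9, 0.4]])

# A (right) eigenvector of A^T is a *left* eigenvector of A:
# if A^T w = lambda * w, then transposing gives w^T A = lambda * w^T.
vals, W = np.linalg.eig(A4.T)
lam, w = vals[0], W[:, 0]

print(np.allclose(w @ A4, lam * w))  # True: w is a left eigenvector of A4
```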