# (Leslie) Matrix

• Jun 7th 2007, 09:25 AM
AfterShock
(Leslie) Matrix
I'm having difficulty trying to prove this theorem for my REU research.

I won't go into the full details of the thing I'm trying to prove as it's quite complicated. However, I am trying to solve the inverse of the following n x n matrix:

Code:

  1      0      0    ...    0       0
-s_1     1      0    ...    0       0
  0    -s_2     1    ...    0       0
  .      .      .     .     .       .
  0      0      0    ...  -s_n   (1 - s_n)

Anyway, I want to find the inverse, and the easiest way to do that would be to augment it with the n x n identity matrix and row reduce.

What the above is supposed to be is (I - T): 1's along the diagonal, where the last diagonal entry is (1 - s_n), and -s_1, -s_2, ..., -s_n along the sub-diagonal.

The reason for doing this is that it will help me find R_0, the largest positive eigenvalue, later.

As I see it, the general pattern is:

1's along the main diagonal, with s_1, s_1*s_2, s_1*s_2*s_3, ... along the sub-diagonal. The issue comes in trying to determine the last entries of the matrix.

And then, perhaps the hardest part, would be giving a proof of why this is true. Using induction, I assume, would be extremely tedious and messy.
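To make the structure concrete, here is a minimal numerical sketch (NumPy; this assumes the sub-diagonal carries the n - 1 entries -s_1, ..., -s_{n-1}, with the last diagonal entry 1 - s_n — the `i_minus_t` helper is just illustrative, not part of the actual problem):

```python
import numpy as np

def i_minus_t(s):
    """Build the n x n matrix (I - T): 1's on the diagonal, except the
    last diagonal entry, which is 1 - s[-1], and -s[0], ..., -s[n-2]
    on the sub-diagonal.  (Indexing is an assumption; see lead-in.)"""
    n = len(s)
    a = np.eye(n)
    a[-1, -1] = 1.0 - s[-1]
    for i in range(1, n):
        a[i, i - 1] = -s[i - 1]
    return a

s = np.array([0.2, 0.3, 0.5, 0.4])
A = i_minus_t(s)
Ainv = np.linalg.inv(A)  # numerical inverse, just to inspect the pattern
print(np.round(Ainv, 4))
```

Printing the numerical inverse for a few small n makes the pattern easy to eyeball before attempting any proof.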
• Jun 7th 2007, 09:05 PM
JakeD
Quote:

Originally Posted by AfterShock
I'm having difficulty trying to prove this theorem for my REU research.

I won't go into the full details of the thing I'm trying to prove as it's quite complicated. However, I am trying to solve the inverse of the following n x n matrix:

Code:

  1      0      0    ...    0       0
-s_1     1      0    ...    0       0
  0    -s_2     1    ...    0       0
  .      .      .     .     .       .
  0      0      0    ...  -s_n   (1 - s_n)

Anyway, I want to find the inverse, and the easiest way to do that would be to augment it with the n x n identity matrix and row reduce.

What the above is supposed to be is (I - T): 1's along the diagonal, where the last diagonal entry is (1 - s_n), and -s_1, -s_2, ..., -s_n along the sub-diagonal.

The reason for doing this is that it will help me find R_0, the largest positive eigenvalue, later.

As I see it, the general pattern is:

1's along the main diagonal, with s_1, s_1*s_2, s_1*s_2*s_3, ... along the sub-diagonal. The issue comes in trying to determine the last entries of the matrix.

And then, perhaps the hardest part, would be giving a proof of why this is true. Using induction, I assume, would be extremely tedious and messy.

Your description of the inverse doesn't match what I get through row reducing the augmented matrix:

$A = \begin{bmatrix}
1 & 0 & 0 & 0 \\
-s_1 & 1 & 0 & 0 \\
0 & -s_2 & 1 & 0 \\
0 & 0 & -s_3 & 1- s_4 \\
\end{bmatrix}$

$A^{-1} = \begin{bmatrix}
1 & 0 & 0 & 0 \\
s_1 & 1 & 0 & 0 \\
s_1 s_2 & s_2 & 1 & 0 \\
s_1 s_2 s_3/(1 - s_4)& s_2 s_3 /(1 - s_4)& s_3 /(1 - s_4)& 1/(1-s_4) \\
\end{bmatrix}$

I don't think you'd have to prove this is the inverse. Just display a specific example like this and let the reader verify that it works; the check is pretty straightforward.
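For anyone who wants to double-check the 4 x 4 example numerically, here is a quick sketch (NumPy, with arbitrary s values in (0, 1); it just builds A and the displayed inverse entry by entry and verifies the product is the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2, s3, s4 = rng.uniform(0.1, 0.9, size=4)

# The matrix A from the post above.
A = np.array([
    [1.0,  0.0,  0.0,  0.0],
    [-s1,  1.0,  0.0,  0.0],
    [0.0,  -s2,  1.0,  0.0],
    [0.0,  0.0,  -s3,  1.0 - s4],
])

# The claimed inverse, entry by entry, with d = 1 - s4.
d = 1.0 - s4
Ainv = np.array([
    [1.0,              0.0,          0.0,     0.0],
    [s1,               1.0,          0.0,     0.0],
    [s1 * s2,          s2,           1.0,     0.0],
    [s1 * s2 * s3 / d, s2 * s3 / d,  s3 / d,  1.0 / d],
])

# If the displayed inverse is right, the product is the identity.
print(np.allclose(A @ Ainv, np.eye(4)))  # True
```

Running this for a handful of random s values gives exactly the kind of reader-checkable evidence suggested above.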