1. ## Markov chain

There are four adjacent squares and a marker. A player begins the game by placing the marker in square #2. A die is rolled, and the marker is moved one square to the left if a 1 or 2 is rolled and one square to the right if a 3, 4, 5, or 6 is rolled. The process continues until the marker lands in square #1 (win) or square #4 (lose). What is the probability of winning?

Hint: instead of diagonalizing the appropriate transition matrix, A, it is easier to represent e2 as a linear combination of the eigenvectors of A and then apply $\displaystyle A^n$ to the result.

I know how to answer this the "harder" way by diagonalizing A, but I don't understand what the hint is saying. I assume e2 is the second vector in the standard ordered basis for 4×1 matrices, but how do I "apply $\displaystyle A^n$ to the result," and why would I want to?
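For reference, here is the identity the hint is relying on (stated in general terms; $\displaystyle v_i$ and $\displaystyle \lambda_i$ denote the eigenvectors and eigenvalues of A): if e2 can be written as $\displaystyle e_2=\sum_i c_iv_i$, then

$\displaystyle A^ne_2=\sum_i c_iA^nv_i=\sum_i c_i\lambda_i^nv_i$

so applying $\displaystyle A^n$ just scales each coefficient by $\displaystyle \lambda_i^n$, and no inverse of X is ever needed.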

2. 1. If A is diagonalizable, then the column vectors of the diagonalizing matrix X are eigenvectors of A, and the diagonal elements of D are the corresponding eigenvalues of A.
2. The diagonalizing matrix X is not unique.
3. If A is $\displaystyle n\times n$ and has n distinct eigenvalues, then A is diagonalizable. If the eigenvalues are not distinct, then A may or may not be diagonalizable, depending on whether A has n linearly independent eigenvectors.
4. If A is diagonalizable, then A can be factored as the product $\displaystyle XDX^{-1}$.

To obtain eigenvectors, you need to first get the eigenvalues.

$\displaystyle \det(A-\lambda I)=0$

If the eigenvectors are used to form a diagonalizing matrix X, then $\displaystyle A^k=XD^kX^{-1}$

$\displaystyle A^k=X\begin{bmatrix} \lambda_{1}^k & & & \\ & \lambda_{2}^k & & \\ & & \ddots & \\ & & & \lambda_{n}^k \end{bmatrix}X^{-1}$

As $\displaystyle k\to\infty$, $\displaystyle A^kx$ converges to a steady-state vector, where x is a probability vector.
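The $\displaystyle A^k=XD^kX^{-1}$ formula is easy to sanity-check numerically. A minimal NumPy sketch (the 2×2 matrix here is an arbitrary illustration, not part of the original problem):

```python
import numpy as np

# An arbitrary small diagonalizable matrix for illustration.
A = np.array([[0.0, 1/3],
              [2/3, 0.0]])

# Columns of X are eigenvectors of A; vals holds the eigenvalues.
vals, X = np.linalg.eig(A)

k = 5
# A^k two ways: direct matrix power vs. X D^k X^{-1}.
direct = np.linalg.matrix_power(A, k)
via_diag = X @ np.diag(vals**k) @ np.linalg.inv(X)

print(np.allclose(direct, via_diag))  # → True
```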

3. Originally Posted by dwsmith

Thanks for the reply. I agree, this is the way I would normally solve the problem: by diagonalizing A. But the hint says "rather than diagonalizing the appropriate transition matrix, A, it is easier to represent e2 as a linear combination of the eigenvectors of A and then apply $\displaystyle A^n$ to the result," so I would be representing e2 as a linear combination of the columns of the diagonalizing matrix (X, in your notation). I'm still not sure how the hint applies.

4. Have you found the eigenvectors of A yet?

5. Originally Posted by dwsmith
Have you found the eigenvectors of A yet?

I got the characteristic polynomial $\displaystyle f(t)=t^2\left(t-\frac{\sqrt{2}}{3}\right)\left(t+\frac{\sqrt{2}}{3}\right)$

$\displaystyle \left(\frac{1}{2},\ \frac{1}{\sqrt{2}},\ 1,\ \sqrt{2}\right)$ is an eigenvector spanning the eigenspace of $\displaystyle \lambda=\frac{\sqrt{2}}{3}$

$\displaystyle \left(\frac{1}{2},\ -\frac{1}{\sqrt{2}},\ 1,\ -\sqrt{2}\right)$ is an eigenvector spanning the eigenspace of $\displaystyle \lambda=-\frac{\sqrt{2}}{3}$

but when I tried to find an eigenvector $\displaystyle v$ to form a basis for the eigenspace corresponding to $\displaystyle \lambda=0$, I only got v = (0, 0, 0, 0), which cannot be an eigenvector and so cannot belong to a basis. Did I do something wrong, or am I missing something?

6. Originally Posted by 234578
Can you post the matrix whose determinant you took?

7. Originally Posted by dwsmith
Can you post your determinant matrix you used?

$\displaystyle A=\begin{bmatrix} 0 & \frac{1}{3} & 0 & 0\\ 0 & 0 & \frac{1}{3} & 0\\ 0 & \frac{2}{3} & 0 & 0\\ 0 & 0 & \frac{2}{3} & 0 \end{bmatrix}$

I found $\displaystyle \det(A-tI)$ to get the characteristic polynomial.
Once I determined $\displaystyle \lambda=0$, $\displaystyle \lambda=\frac{\sqrt{2}}{3}$, and $\displaystyle \lambda=-\frac{\sqrt{2}}{3}$,
I solved $\displaystyle (A-\lambda I)v=0$ for v to find bases for the eigenspaces.
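Those eigenvalues can be cross-checked numerically. A NumPy sketch using exactly the matrix above:

```python
import numpy as np

# The matrix from the post above. Columns 1 and 4 are zero, which is
# why e1 and e4 turn out to be eigenvectors for lambda = 0.
A = np.array([
    [0, 1/3, 0,   0],
    [0, 0,   1/3, 0],
    [0, 2/3, 0,   0],
    [0, 0,   2/3, 0],
])

vals = np.sort(np.linalg.eigvals(A).real)
print(vals)  # approx [-sqrt(2)/3, 0, 0, sqrt(2)/3]
```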

8. Your eigenspace for $\displaystyle \lambda=0$ is
$\displaystyle x_1\begin{bmatrix} 1\\ 0\\ 0\\ 0 \end{bmatrix}+x_4\begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix}$ Therefore, you now have four eigenvectors. Sorry for the slow reply; I was busy.
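To carry the hint through to the answer: the sketch below (NumPy) uses a variant of the matrix posted earlier in which the absorbing squares are kept as fixed points (column 1 becomes e1 and column 4 becomes e4). That convention is my assumption, not something stated in the thread, but it is what makes $\displaystyle A^ne_2$ converge to the win/lose probabilities instead of to zero: the $\displaystyle \pm\frac{\sqrt{2}}{3}$ terms die off as n grows, and only the eigenvalue-1 coefficients survive.

```python
import numpy as np

# Variant of the thread's transition matrix in which the absorbing
# squares stay put (column 1 = e1, column 4 = e4). This convention is
# an assumption; the matrix posted earlier zeroed those columns out.
A = np.array([
    [1, 1/3, 0,   0],
    [0, 0,   1/3, 0],
    [0, 2/3, 0,   0],
    [0, 0,   2/3, 1],
])

e2 = np.array([0.0, 1.0, 0.0, 0.0])  # marker starts in square 2

# The hint in action: write e2 = X c in the eigenvector basis; then
# A^n e2 = X (vals**n * c) -- each coefficient just scales by lambda_i^n.
vals, X = np.linalg.eig(A)
c = np.linalg.solve(X, e2)

n = 100
limit = (X @ (vals**n * c)).real

print(limit[0])  # winning probability, approx 0.42857 = 3/7
print(limit[3])  # losing probability,  approx 0.57143 = 4/7
```

This agrees with a first-step check: writing $\displaystyle p_i$ for the probability of winning from square i, $\displaystyle p_2=\frac{1}{3}+\frac{2}{3}p_3$ and $\displaystyle p_3=\frac{1}{3}p_2$, which gives $\displaystyle p_2=\frac{3}{7}$.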