1. Markov Chain

A question that I really shouldn't be finding as difficult as I am.

I suspect my answer to part (i) is correct. Part (ii) confuses me: the chain we've been given isn't ergodic, so I'm guessing the question is a general one and not aimed at this particular chain? Still, I'm not quite sure what to say. And for part (iii), I don't know what method to use.

Thanks.

2. You can't get to state 4 from any state (let alone from every other state). So the chain is obviously not ergodic. I don't know what they're talking about.

For part (iii) I would set up the transition matrix and then set up the steady-state equations.

Usually setting up the transition matrix is the hardest part. In this case it is pretty simple because the transition probabilities are given to you.

$\left(\begin{array}{ccccc}0&0.5&0.5&0&0\\0.7&0&0.3&0&0\\1&0&0&0&0\\0.6&0&0&0&0.4\\0&0.4&0.6&0&0\end{array}\right)$
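As a sanity check, here's the same matrix entered in Python (a sketch assuming NumPy); every row of a transition matrix must sum to 1.

```python
import numpy as np

# Transition matrix from the problem; row i holds the probabilities
# of moving from state i+1 to states 1..5.
P = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.7, 0.0, 0.3, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.6, 0.0, 0.0, 0.0, 0.4],
    [0.0, 0.4, 0.6, 0.0, 0.0],
])

# Each row must sum to 1 for P to be a valid stochastic matrix.
print(P.sum(axis=1))  # [1. 1. 1. 1. 1.]
```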

Let $\pi_{i}$ be the long-run proportion of time that $X$ spends in state $i$.

$\pi_{1}=0.7\pi_{2}+\pi_{3}+0.6\pi_{4}$
$\pi_{2}=0.5\pi_{1}+0.4\pi_{5}$
$\pi_{3}=0.5\pi_{1}+0.3\pi_{2}+0.6\pi_{5}$
$\pi_{4}=0$
$\pi_{5}=0.4\pi_{4}$

$\pi_{1}+\pi_{2}+\pi_{3}+\pi_{4}+\pi_{5}=1$

Now solve for the $\pi_{i}$ (which shouldn't be too bad, because $\pi_{4}=\pi_{5}=0$).
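If you'd rather let a machine check the algebra, here's a sketch (Python/NumPy, my own variable names) that solves $\pi = \pi P$ together with $\sum_i \pi_i = 1$: the five balance equations are linearly dependent, so one of them is swapped out for the normalisation constraint.

```python
import numpy as np

# Transition matrix from the problem.
P = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.7, 0.0, 0.3, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.6, 0.0, 0.0, 0.0, 0.4],
    [0.0, 0.4, 0.6, 0.0, 0.0],
])

# Steady-state condition pi = pi P, i.e. (P^T - I) pi = 0.
# Replace one redundant balance equation with sum(pi) = 1.
A = P.T - np.eye(5)
A[-1, :] = 1.0
b = np.zeros(5)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(np.round(pi, 4))  # approximately [20/43, 10/43, 13/43, 0, 0]
```

Solving by hand gives the same exact values: $\pi_{2}=0.5\pi_{1}$ and $\pi_{3}=0.5\pi_{1}+0.3(0.5\pi_{1})=0.65\pi_{1}$, so $\pi_{1}(1+0.5+0.65)=1$, giving $\pi_{1}=20/43$, $\pi_{2}=10/43$, $\pi_{3}=13/43$.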

Thanks a lot, you've been a great help.

P.S. I'd obviously still welcome some more opinions on part (ii).

4. Sorry, I made an error on the image; the communicating classes are {1,2,3}, {4}, {5}, right?

Image Fixed.

5. Originally Posted by ZTM1989
Sorry, I made an error on the image; the communicating classes are {1,2,3}, {4}, {5}, right?

Image Fixed.
I always get confused by classes, but it seems right to me.

You can go from state 1 to state 2 to state 3 and then back to state 1. So {1,2,3} is a communicating class. And it's recurrent because the class is closed: once the process is in it, it keeps re-entering each of those states over and over again.

State 4 doesn't communicate with any other state, so it forms its own class. And it's transient because there is a positive probability that the process, starting in state 4, will never enter state 4 again. (The probability is, of course, 1.)

Same argument for state 5.
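To double-check the classes mechanically, you can compute which states are mutually reachable. A sketch (Python/NumPy, my own helper names): take the transitive closure of the one-step reachability relation, then group states that reach each other.

```python
import numpy as np

# Transition matrix from the problem.
P = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.7, 0.0, 0.3, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.6, 0.0, 0.0, 0.0, 0.4],
    [0.0, 0.4, 0.6, 0.0, 0.0],
])
n = P.shape[0]

# R[i, j] = 1 if state j+1 is reachable from state i+1 in >= 0 steps.
R = ((P > 0) | np.eye(n, dtype=bool)).astype(int)
for _ in range(n):  # repeated squaring yields the transitive closure
    R = ((R @ R) > 0).astype(int)

# States i and j communicate iff each is reachable from the other.
comm = (R > 0) & (R.T > 0)
classes = {frozenset(int(j) + 1 for j in np.flatnonzero(comm[i]))
           for i in range(n)}
print(sorted(sorted(c) for c in classes))  # [[1, 2, 3], [4], [5]]
```

This confirms the classes {1,2,3}, {4}, {5}: only {1,2,3} is closed (hence recurrent), while 4 and 5 are transient.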