
Math Help - Markov Chains - Recurrence

  1. #1
    Junior Member
    Joined
    Mar 2009
    Posts
    64

    Markov Chains - Recurrence

Hi, I was reading about Markov chains on Wikipedia and I have a question about this topic: Markov chain - Wikipedia, the free encyclopedia

    More specifically, I wanted examples of transient and null recurrent states.

If I have a chain with 3 states that all communicate with each other, I can start at state 1 and follow a path like 1->2->3->2->3->2->3... etc. Does this mean P(T1 = infinity) > 0? I don't think so, since the probability of following such a path for infinitely many steps is 0, am I correct?

But the same chain with a 4th state that can only be reached from state 1, and that only transitions to itself, would make state 1 a transient state, right?

    What about null recurrent states?
I can imagine a very simple case with 2 states, where state 1 goes to state 2 with probability 1, and state 2 stays at state 2 with probability P and goes to state 1 with probability 1-P, with 0<P<1. It clearly has P(T1 = infinity) = 0, but E[T1] = Sum_{n>=2} n*(1-P)*P^(n-2), which seems to converge for any P < 1...
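Here's a quick numerical check of that series in Python (the closed form (2-P)/(1-P) is my own summation of it, so treat it as something to verify, not a given):

```python
import random

def sample_T1(P, rng):
    # One run: start at state 1, step to state 2, then stay at 2
    # with probability P each step until returning to state 1.
    t = 1                        # the forced step 1 -> 2
    while rng.random() < P:      # remain at state 2
        t += 1
    return t + 1                 # the step 2 -> 1

P = 0.5
rng = random.Random(42)
mc_mean = sum(sample_T1(P, rng) for _ in range(200_000)) / 200_000

# Truncated version of the series, plus the closed form it appears to sum to
series = sum(n * (1 - P) * P ** (n - 2) for n in range(2, 200))
closed_form = (2 - P) / (1 - P)
```

For P = 0.5 all three numbers come out around 3, so the mean return time is indeed finite for any fixed P < 1.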

    So, could anyone give me an example of a chain with null recurrent states? Thanks

  2. #2
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
Quote Originally Posted by pedrosorio
Hi, I was reading about Markov chains on Wikipedia and I have a question about this topic: Markov chain - Wikipedia, the free encyclopedia

    More specifically, I wanted examples of transient and null recurrent states.

If I have a chain with 3 states that all communicate with each other, I can start at state 1 and follow a path like 1->2->3->2->3->2->3... etc. Does this mean P(T1 = infinity) > 0? I don't think so, since the probability of following such a path for infinitely many steps is 0, am I correct?
You have P(T_1=\infty)=0 indeed. Since p_{21}>0, if state 2 is visited infinitely often, the Markov chain will also visit state 1 infinitely often (if an event has positive probability and you proceed to infinitely many independent trials, it will occur infinitely often).
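To see this numerically, here is a small simulation (the transition probabilities are made up, since the post doesn't fix them; any choice where the three states communicate behaves the same way). Over a long run, state 1 keeps accumulating visits, consistent with P(T_1=\infty)=0:

```python
import random

# Hypothetical transition matrix for the 3-state chain from the question
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]

def simulate_visits(P, start, steps, rng):
    # Run the chain for `steps` transitions and count visits to each state.
    counts = [0] * len(P)
    s = start
    for _ in range(steps):
        r, cum = rng.random(), 0.0
        for j, p in enumerate(P[s]):
            cum += p
            if r < cum:
                s = j
                break
        counts[s] += 1
    return counts

rng = random.Random(1)
counts = simulate_visits(P, 0, 30_000, rng)   # every state gets thousands of visits
```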

With this argument, you can see that as soon as two states communicate with each other (i.e. there is a path with positive probability from each one to the other), if one is recurrent, then so is the other. Thus you can gather the states into communicating classes, i.e. classes where the states all communicate with each other. This helps in studying recurrence/transience, since all states in the same class share the same status.
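This classification is purely graph-theoretic, so it can be sketched in a few lines of Python (my own sketch; it uses the finite-chain fact that a communicating class is recurrent exactly when it is closed, and the probabilities in the example matrix are made up):

```python
from itertools import product

def reachable(P):
    # reach[i][j] is True iff state j can be reached from i in >= 0 steps
    n = len(P)
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k, i, j in product(range(n), repeat=3):  # transitive closure
        if reach[i][k] and reach[k][j]:
            reach[i][j] = True
    return reach

def classify(P):
    # Group states into communicating classes; for a FINITE chain,
    # a class is recurrent iff it is closed (no exit with positive prob.).
    n, reach, seen, out = len(P), reachable(P), set(), []
    for i in range(n):
        if i in seen:
            continue
        cls = sorted(j for j in range(n) if reach[i][j] and reach[j][i])
        seen.update(cls)
        closed = all(P[a][b] == 0 for a in cls for b in range(n) if b not in cls)
        out.append((cls, "recurrent" if closed else "transient"))
    return out

# The 4-state chain from the question: states 1,2,3 communicate,
# state 4 is reachable only from state 1 and absorbs.
P = [[0.0, 0.5, 0.0, 0.5],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
classes = classify(P)   # [([0, 1, 2], 'transient'), ([3], 'recurrent')]
```

Note that the whole class {1,2,3} comes out transient, exactly as claimed below.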

But the same chain with a 4th state that can only be reached from state 1, and that only transitions to itself, would make state 1 a transient state, right?
    Right. 1 is transient, and so are 2 and 3 (they communicate with each other).


    What about null recurrent states?
You won't be able to find null recurrent states in a chain with finitely many states: in a finite chain, every state is either positive recurrent or transient. This is connected to the existence of a steady-state distribution...
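To make the steady-state connection concrete: for a finite irreducible chain the stationary distribution \pi exists and E_i[T_i]=1/\pi_i, so every mean return time is finite. A small Python sketch, applied to the two-state chain from the question with P = 0.5 (the power-iteration scheme is my own choice of method):

```python
def stationary(P, iters=2000):
    # Power iteration pi <- pi P; converges for a finite irreducible
    # aperiodic chain to the unique stationary distribution.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# State 1 -> state 2 w.p. 1; state 2 stays w.p. 0.5, returns w.p. 0.5.
pi = stationary([[0.0, 1.0], [0.5, 0.5]])
mean_return_1 = 1 / pi[0]   # positive recurrence: E[T_1] = 1/pi_1 = 3.0
```

Reassuringly, 1/\pi_1 = 3 agrees with the series for E[T_1] computed in the question at P = 0.5.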


The simplest example of a null-recurrent Markov chain is the symmetric random walk on \mathbb{Z}: it is given at time n by S_n=\xi_1+\cdots+\xi_n where \xi_1,\xi_2,\ldots are independent random variables with P(\xi_i=1)=P(\xi_i=-1)=1/2. (That this walk is recurrent, i.e. P_0(T_0<\infty)=1, is a classical fact; what follows is an elementary proof that the mean return time is infinite, which is what "null" means.) First I prove that E_0[T_1]=+\infty (the time to go from 0 to 1). Either the first step is to the right, and T_1=1; or it is to the left, and the random walk now needs to go from -1 to 0 and then from 0 to 1. The expected time from -1 to 0 is the same as the expected time from 0 to 1, so we get:

E_0[T_1]=\frac{1}{2}\cdot 1+\frac{1}{2}(1+E_{-1}[T_0]+E_0[T_1])=1+E_0[T_1], using E_{-1}[T_0]=E_0[T_1].

This yields a contradiction if E_0[T_1]<\infty. Now we deduce that E_0[T_0]=+\infty as well. This is easy: the first step costs 1 and reduces us to the previous problem (going from -1 to 0, or symmetrically from 1 to 0), so that

E_0[T_0]=1+\frac{1}{2}E_{-1}[T_0]+\frac{1}{2}E_1[T_0]=1+E_0[T_1]=\infty.

(A rigorous justification of the previous equalities would invoke the strong Markov property.)
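Both halves of "null recurrent" show up nicely in a Monte Carlo experiment (my own sketch; the cap and sample sizes are arbitrary): nearly every walk returns to 0, and usually very quickly, yet the empirical mean return time is dominated by rare, extremely long excursions, consistent with E_0[T_0]=\infty.

```python
import random

def return_time(cap, rng):
    # First return time to 0 of a symmetric random walk on Z,
    # censored at `cap` steps.
    pos = 0
    for t in range(1, cap + 1):
        pos += 1 if rng.random() < 0.5 else -1
        if pos == 0:
            return t
    return cap   # never returned within the cap

rng = random.Random(0)
cap, n = 10_000, 2_000
times = [return_time(cap, rng) for _ in range(n)]
frac_returned = sum(t < cap for t in times) / n   # close to 1 (recurrence)
trunc_mean = sum(times) / n                        # grows with cap ("null")
```

Raising the cap makes frac_returned creep toward 1 while trunc_mean keeps growing roughly like the square root of the cap, so the sample mean never stabilizes.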

