Number of visits in a non-homogeneous Markov chain

  1. #1  Newbie (joined Jun 2010, 3 posts)


    Hi
    I have a stochastic process \{X_t, t \geq 1\} representing a non-homogeneous Markov chain. The transition matrix between X_1 and X_2 is given by
    \begin{pmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{pmatrix}
    From time t=2 onward, the transition matrix between X_t and X_{t+1} is time-independent and is given by
    \begin{pmatrix} G & H & K \\ L & M & N \\ 0 & 0 & 1 \end{pmatrix}
    For homogeneous Markov chains, it is well known that
    \|n_{ij}\| = (I-Q)^{-1},
    where
    n_{ij} is the number of times the process visits the transient state j before it eventually enters a recurrent state, having initially started from the transient state i,
    I is the identity matrix, and
    Q is the transition matrix between the transient states.
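    As a concrete illustration of that formula (made-up numbers, not related to the problem above): if
    Q = \begin{pmatrix} 0.5 & 0.3 \\ 0.2 & 0.6 \end{pmatrix},
    then
    (I-Q)^{-1} = \frac{1}{0.14}\begin{pmatrix} 0.4 & 0.3 \\ 0.2 & 0.5 \end{pmatrix} \approx \begin{pmatrix} 2.86 & 2.14 \\ 1.43 & 3.57 \end{pmatrix},
    so a chain started in transient state 1 visits state 1 about 2.86 times and state 2 about 2.14 times, on average, before absorption.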
    How can I obtain the number of visits in my case?
    Can I transform the non-homogeneous model into a homogeneous one?

  2. #2  MHF Contributor (joined May 2010, 1,028 posts, 28 thanks)
    I assume you mean the expected number of visits.

    If the process is in any given state at time 2, you can use your formula to get a conditional expectation. So just find the probability that the process is in each state at time 2, multiply by the conditional expectation, and sum over all possible states.
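    In symbols, writing N_j for the number of visits to transient state j from time 2 onward, this is just the law of total expectation over the time-2 state:
    E(N_j) = \sum_{k=1}^{3} P(X_2 = k)\, E(N_j \mid X_2 = k),
    where E(N_j \mid X_2 = 3) = 0 and the other two conditional expectations come from the homogeneous formula applied to the time-independent matrix.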
    Last edited by SpringFan25; June 5th 2010 at 07:11 AM.

  3. #3  Newbie (joined Jun 2010, 3 posts)
    Hi
    Exactly, I mean the expected number of visits. Below is the proof of the formula for the homogeneous case; I hope it helps. Assume that we have 3 states, where state 3 is absorbing (to be close to my case). Let the transition probability matrix be
    P^{(1)} = \begin{pmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ 0 & 0 & 1 \end{pmatrix}

    Let N_{ij} be a random variable denoting the number of times the process visits the transient state j before it eventually enters the absorbing state 3, having initially started from state i.
    Let r_{ij} = E(N_{ij}). We have:
    Theorem: For transient states i and j,
    \|r_{ij}\| = M = (I-Q)^{-1}.
    Proof:
    If in one step the chain enters a recurrent state, the number of visits to j is zero unless j=i. Let \delta_{ij} be the Kronecker delta, i.e. \delta_{ij}=1 if i=j and 0 otherwise. Then N_{ij}=\delta_{ij} with probability p_{i3}. On the other hand, suppose that the Markov chain moves to a transient state r at the first step (with probability p_{ir}); from that position onward the number of visits to j is N_{rj}. Then
    N_{ij} = \left\{ \begin{array}{ll} \delta_{ij} & \mbox{with probability } p_{i3}, \\ \delta_{ij}+N_{rj} & \mbox{with probability } p_{ir},\ r \mbox{ a transient state.} \end{array} \right.
    It follows that
    E(N_{ij}) = \delta_{ij} p_{i3} + \sum_{r=1,2} (\delta_{ij} + E(N_{rj})) p_{ir},
    which leads to
    E(N_{ij}) = \delta_{ij} + \sum_{r=1,2} p_{ir} E(N_{rj}).
    Let the matrix A be
    A = \begin{pmatrix} \sum_{r=1,2} p_{1r} E(N_{r1}) & \sum_{r=1,2} p_{1r} E(N_{r2}) \\ \sum_{r=1,2} p_{2r} E(N_{r1}) & \sum_{r=1,2} p_{2r} E(N_{r2}) \end{pmatrix}.
    The transition matrix between the transient states is given by
    Q = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}.
    Let
    \|r_{ij}\| = \begin{pmatrix} E(N_{11}) & E(N_{12}) \\ E(N_{21}) & E(N_{22}) \end{pmatrix};
    then \|r_{ij}\| = I + A, where I is the identity matrix. Using the fact that A = Q\|r_{ij}\|, we get (I-Q)\|r_{ij}\| = I, so \|r_{ij}\| = (I-Q)^{-1}.
    I hope this helps.
    If the Markov chain that I described at the beginning were homogeneous, then the expected value of the random variable that I need for my problem would be
    \mathbf{i}(I-Q)^{-1}\mathbf{1},
    where \mathbf{i} is the row vector of the probabilities of being in states 1 and 2 at time t=1 and \mathbf{1} is a column vector of ones.
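    As a quick numerical sanity check of that last expression (the numbers below are placeholders, not the entries of my actual matrices), the formula can be compared against a simulation of a homogeneous absorbing chain:
    Code:
import numpy as np

rng = np.random.default_rng(0)

# Placeholder homogeneous chain: states 1 and 2 transient, state 3 absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.0, 0.0, 1.0]])
i_vec = np.array([0.7, 0.3])   # placeholder probabilities P(X_1 = 1), P(X_1 = 2)

Q = P[:2, :2]
analytic = i_vec @ np.linalg.inv(np.eye(2) - Q) @ np.ones(2)

# Simulation: count the time steps spent in transient states before absorption,
# including the starting step; this is the quantity i (I-Q)^{-1} 1 measures.
def steps_before_absorption(start):
    s, count = start, 0
    while s != 2:              # index 2 corresponds to the absorbing state 3
        count += 1
        s = rng.choice(3, p=P[s])
    return count

starts = rng.choice(2, size=20000, p=i_vec)
print(analytic, np.mean([steps_before_absorption(s) for s in starts]))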
    Last edited by NELLLY; June 5th 2010 at 09:46 AM.

  4. #4  MHF Contributor (joined May 2010, 1,028 posts, 28 thanks)
    The procedure I already gave you should work then.

    If the process is in any given state at time 2, you can use your formula to get a conditional expectation. So just find the probability that the process is in each state at time 2, multiply by the conditional expectation, and sum over all possible states.

  5. #5  Newbie (joined Jun 2010, 3 posts)
    Hi, I don't think I can apply the solution you gave, because if at the first step the process moves to state 3, the number of visits to j will be 0; however, if it moves to a transient state, the formula can be applied.
    I'm thinking of using two random variables Z and Y, with
    Z: the number of visits before absorption between time t=1 and time t=2, and
    Y: the number of visits from time 2 until absorption.
    Then I think I need a random variable B with B=Z if at the first step the process enters state 3, and B=Z+Y if at the first step it enters a transient state.
    This is not yet mathematically very clear to me, and I don't know whether my idea is correct.
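    One way to write this idea down (a sketch, reading Z as the number of visits counted at time t=1 and Y as the number counted from time t=2 onward):
    B = Z + Y\,\mathbf{1}\{X_2 \neq 3\}, \qquad E(B) = E(Z) + \sum_{k=1,2} P(X_2 = k)\, E(Y \mid X_2 = k),
    where each E(Y \mid X_2 = k) comes from the homogeneous formula applied, from time 2 onward, to the time-independent matrix.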

  6. #6  MHF Contributor (joined May 2010, 1,028 posts, 28 thanks)
    I still think my procedure will work, so I'll explain it once more and then leave the thread to more experienced contributors:


    Consider the homogeneous Markov chain defined by the transition matrix
    A = \begin{pmatrix} G & H & K \\ L & M & N \\ 0 & 0 & 1 \end{pmatrix}
    I will call this "process A".

    Given that you start in state i, define N_{ij3} as the expected number of visits to state j before reaching state 3. You already know how to calculate N_{ij3} for any values of i and j.


    Now, consider the non-homogeneous Markov chain defined on t=1,2,3,\dots with transition matrix
    B = \begin{pmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{pmatrix}
    for the transition from t=1 to t=2, and with matrix A thereafter.


    If you start in state 1, there are only 3 possible outcomes:
    1. You remain in state 1 for one period, then begin process A.
    2. You jump to state 2, then begin process A.
    3. You jump to state 3, then begin process A (already absorbed).




    You can easily work out the probability of each of these happening by looking at matrix B. I will write P_{1j} for the probability of being in state j at time 2.

    The expected number of visits to state 2 for the whole combined process is then
    P_{1,2} \times \left(N_{2,2,3}+1\right) + P_{1,1} \times N_{1,2,3} + P_{1,3} \times 0,
    where P_{1,j} is calculated from matrix B and N_{i,j,k} is calculated from matrix A. (Here N_{2,2,3} is read as excluding the visit at time 2 itself, which the +1 then adds back; if you take it from (I-Q)^{-1} as above, whose diagonal entries already count the starting visit, the +1 should be dropped.)


    The only assumption I have made in this answer is that the process starts in state 1 at time 1; you can easily modify it if that is not the case.
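    As a numerical sanity check of this procedure (a sketch only; the numbers below are placeholders standing in for the symbolic entries A,...,F and G,...,N), the conditioning formula can be compared with a direct simulation of the two-stage chain:
    Code:
import numpy as np

rng = np.random.default_rng(1)

# Placeholder numbers standing in for the symbolic entries above.
B = np.array([[0.4, 0.4, 0.2],     # transition matrix used between t=1 and t=2
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])
A = np.array([[0.5, 0.3, 0.2],     # transition matrix used from t=2 onward ("process A")
              [0.2, 0.6, 0.2],
              [0.0, 0.0, 1.0]])

# Fundamental matrix of process A: r[i, j] = expected visits to transient state j+1
# starting from transient state i+1, counting the starting visit ((I-Q)^{-1} convention).
Q = A[:2, :2]
r = np.linalg.inv(np.eye(2) - Q)

# Condition on the state at time 2, starting in state 1 at time 1:
# expected visits to state 2 = P_{1,1} * r[0, 1] + P_{1,2} * r[1, 1]
# (no extra +1 here, because r[1, 1] already counts the visit at time 2).
analytic = B[0, 0] * r[0, 1] + B[0, 1] * r[1, 1]

# Monte Carlo check: simulate the non-homogeneous chain and count visits to state 2.
def visits_to_state_2():
    s = rng.choice(3, p=B[0])      # one step with matrix B: state at time 2
    visits = 0
    while s != 2:                  # index 2 corresponds to the absorbing state 3
        visits += int(s == 1)      # index 1 corresponds to state 2
        s = rng.choice(3, p=A[s])  # subsequent steps use matrix A
    return visits

sims = np.mean([visits_to_state_2() for _ in range(50000)])
print(analytic, sims)              # the two values should agree closely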
    Last edited by SpringFan25; June 5th 2010 at 12:15 PM.
