
Math Help - Markov Process

  1. #1
    Junior Member BrooketheChook's Avatar
    Joined
    Sep 2009
    From
    Gold Coast
    Posts
    27

    Markov Process

    Hi,
    I was hoping someone could help me with this question. I am having trouble understanding the generator matrix Q, so this whole question is difficult for me.

    Let X = (X_t) be a Markov process with state space \{1,2,3\} and generator

    Q = \begin{pmatrix} -3 & 2 & 1 \\ 2 & -5 & 3 \\ 1 & 3 & -4 \end{pmatrix}

    (1) Determine the sojourn times in states 1, 2 and 3

    (2) Let Y = \{Y_n\} be a discrete skeleton of X. Determine the stationary distribution \nu of Y.

    (3) Determine the stationary distribution \pi of X.

    Thanks in advance for any help.

  2. #2
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
    Quote Originally Posted by BrooketheChook View Post
    Here's a short vade mecum about these generator matrices.

    You can understand the process X as a discrete Markov chain with exponential holding times between consecutive jumps. The parameters of the holding times are given by the diagonal of the matrix Q: c(x)=-q_{xx} = \sum_{y\neq x} q_{xy}. And the transitions are given by the proportions between the rates: p_{xy}=\frac{q_{xy}}{\sum_{z\neq x} q_{xz}} (i.e. you normalize the rows of the matrix Q, leaving out the diagonal).

    Thus, if Y is the "discrete skeleton" of X (meaning that Y_0=X_0, Y_1 is the state of X after its first jump, etc.), then Y is a Markov chain with the transition matrix P given above (of course, p_{xx}=0 due to the definition of Y), and the process X is described as follows: it stays at the initial position for a time \tau_1 which is exponential with parameter c(X_0), then it jumps to Y_1, where it spends a time \tau_2 following an exponential distribution with parameter c(Y_1)=c(X_{\tau_1}), and so on.
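    For concreteness, here is how the rates c and the jump matrix P fall out of this particular Q (a quick numpy sketch, not part of the original question; states 1, 2, 3 become indices 0, 1, 2):

    ```python
    import numpy as np

    # Generator from the question
    Q = np.array([[-3.,  2.,  1.],
                  [ 2., -5.,  3.],
                  [ 1.,  3., -4.]])

    # Holding-time rates: c(x) = -q_xx (so the mean sojourn time at x is 1/c(x))
    c = -np.diag(Q)

    # Jump-chain transitions: divide each row by c(x), then zero the diagonal
    P = Q / c[:, None]
    np.fill_diagonal(P, 0.0)

    print(c)               # [3. 5. 4.]
    print(P.sum(axis=1))   # every row sums to 1
    ```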

    It is equivalent to be given Q or both P=(p_{xy})_{x,y} and (c(x))_x.
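    The converse direction is just as mechanical: given the rates c(x) and the jump matrix P, you rebuild the generator via q_{xy}=c(x)\,p_{xy} for x\neq y and q_{xx}=-c(x). A sketch with the numbers from this question:

    ```python
    import numpy as np

    c = np.array([3., 5., 4.])
    P = np.array([[0.,  2/3, 1/3],
                  [2/5, 0.,  3/5],
                  [1/4, 3/4, 0. ]])

    # q_xy = c(x) * p_xy off the diagonal, q_xx = -c(x) on it
    Q = c[:, None] * P - np.diag(c)

    print(Q)   # recovers the generator from the question; rows sum to 0
    ```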

    The matrix Q itself can be used to describe the process directly, in the following way. Imagine that each "edge" (i,j) between distinct states i, j comes with a "clock" that rings after an exponential time with parameter q_{ij}. When X is at state i, it waits until the first of the clocks on the adjacent edges rings, and jumps along that edge; then all the clocks are reset to decide the next jump, and so on. (This agrees with the first description: the minimum of independent exponential clocks with rates q_{ij} is exponential with rate c(i), and edge (i,j) wins the race with probability q_{ij}/c(i)=p_{ij}.)
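    The clock picture translates directly into a simulation. Below is a small sketch (pure standard library; `simulate` is my own hypothetical helper, not from any textbook): each step races one exponential clock per outgoing edge, and the first clock to ring decides both the sojourn time and the next state.

    ```python
    import random

    def simulate(Q, x0, n_jumps, rng=None):
        """Run n_jumps jumps of the chain using competing exponential clocks."""
        rng = rng or random.Random(0)
        x, t, path = x0, 0.0, [x0]
        for _ in range(n_jumps):
            # one clock per edge (x, y), ringing after an Exp(q_xy) time
            clocks = {y: rng.expovariate(Q[x][y])
                      for y in range(len(Q)) if y != x and Q[x][y] > 0}
            y = min(clocks, key=clocks.get)  # first clock to ring wins
            t += clocks[y]                   # sojourn time spent at x
            x = y
            path.append(x)
        return t, path

    Q = [[-3, 2, 1], [2, -5, 3], [1, 3, -4]]
    total_time, path = simulate(Q, 0, 10)
    ```

    State 0 here plays the role of state 1 in the question (zero-based indexing).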

    This gives the following "infinitesimal" description: for i\neq j, P_i(X_h=j)=q_{ij}h + o(h) as h\to 0.
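    You can sanity-check this infinitesimal description numerically: P_i(X_h=j) is the (i,j) entry of e^{hQ}, and for small h the off-diagonal entries are approximately q_{ij}h. A sketch using a truncated power series for the matrix exponential (deliberately avoiding extra dependencies):

    ```python
    import numpy as np
    from math import factorial

    Q = np.array([[-3.,  2.,  1.],
                  [ 2., -5.,  3.],
                  [ 1.,  3., -4.]])
    h = 1e-3

    # P_h = exp(hQ), approximated by the truncated series sum_k (hQ)^k / k!
    Ph = sum(np.linalg.matrix_power(h * Q, k) / factorial(k) for k in range(10))

    print(Ph[0, 1] / h)   # close to q_12 = 2 (indices are zero-based)
    ```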

    One last thing: if \nu is a stationary distribution for the discrete-time Markov chain Y (you know how to find that), then \mu(x)=\frac{\nu(x)}{c(x)} defines a stationary measure for X (which you may need to normalize in order to get a probability distribution). Notice that \frac{1}{c(x)} is the expectation of the holding time at site x (the mean of an exponential r.v.), so that going from \nu (where every holding time equals 1) to \mu (exponential holding times) amounts to giving more weight to the sites where the process X stays longer. That makes sense.

    --
    So, you should:
    - write down what c(i) is for each site (this is given by the diagonal, or by the sum of the off-diagonal terms); this gives you the parameters of the sojourn times at different sites.
    - write down the matrix P obtained by normalizing the off-diagonal entries of Q (i.e. dividing them by c to get rows summing to 1) and putting zeroes on the diagonal
    - compute the stationary distribution of Y from P (solving \nu P=\nu...)
    - deduce the stationary distribution of X using the above formula ( \mu(x)=\frac{\nu(x)}{c(x)} and normalization)
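    If you want to check your hand computation afterwards, the four steps above can be run end to end with numpy (a sketch of the method, deliberately not quoting the final numbers):

    ```python
    import numpy as np

    Q = np.array([[-3.,  2.,  1.],
                  [ 2., -5.,  3.],
                  [ 1.,  3., -4.]])

    # Steps 1-2: rates c and jump matrix P
    c = -np.diag(Q)
    P = Q / c[:, None]
    np.fill_diagonal(P, 0.0)

    # Step 3: stationary distribution of Y, i.e. nu P = nu
    # (left eigenvector of P for eigenvalue 1, normalized to sum to 1)
    w, v = np.linalg.eig(P.T)
    nu = np.real(v[:, np.argmax(np.real(w))])
    nu /= nu.sum()

    # Step 4: reweight by the mean holding times 1/c(x) and renormalize
    pi = (nu / c) / (nu / c).sum()

    # sanity check: pi is stationary for X, i.e. pi Q = 0
    assert np.allclose(pi @ Q, 0.0)
    ```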

