
Metropolis Monte Carlo Autocorrelation

  1. #1
    Newbie
    Joined
    Mar 2009
    Posts
    2

    Metropolis Monte Carlo Autocorrelation

    Hello,

    I do not quite understand yet how to assess the convergence of a Metropolis run using the autocorrelation. I think I get the principle: the less autocorrelation there is in the Markov chain, the more independent the obtained samples are.
    For each parameter \theta in my model, I calculate the autocorrelation p_k at time lag k as follows:

    p_k = \frac{Cov(\theta_t, \theta_{t+k})}{Var(\theta_t)} = \frac{\sum_{t=1}^{n-k}(\theta_t - \bar{\theta})(\theta_{t+k}-\bar{\theta})}{\sum_{t=1}^{n-k}(\theta_t - \bar{\theta})^2}

    where \bar{\theta} is the mean of \theta.
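
    In code, this is roughly what I do (a minimal Python/NumPy sketch; the function name and the way the chain is stored are just illustrative):

    import numpy as np

    def autocorrelation(chain, k):
        """Sample autocorrelation at lag k of a 1-D chain of theta values,
        using the estimator above."""
        theta = np.asarray(chain, dtype=float)
        n = len(theta)
        centered = theta - theta.mean()
        # numerator: sum over t = 1..n-k of (theta_t - mean)(theta_{t+k} - mean)
        num = np.sum(centered[:n - k] * centered[k:])
        # denominator: sum of squared deviations over the same range of t
        den = np.sum(centered[:n - k] ** 2)
        return num / den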

    Using this estimator I calculate the autocorrelation of each parameter at every time lag k. So when I do 100 steps of my Metropolis algorithm, I calculate the autocorrelations for k = 0, ..., 100. Of course the autocorrelation at lag 0 is 1, and it then drops to around zero. But it drops to around zero in every case, no matter how many steps I do. When I plot the autocorrelation of a run with 1000 steps, each parameter seems to have converged after about 1/4 of the steps. But if I do only 10 steps, I see the same thing: each parameter seems to converge after about 1/4 of the steps, which cannot be right. Where is my mistake in thinking? How can I really use the autocorrelation to assess whether my ensemble has converged?
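
    For reference, here is roughly how I evaluate this over all lags for one run (again just a sketch; the random numbers only stand in for my actual sampled values):

    # Autocorrelation curve for one parameter over a whole run.
    chain = np.random.randn(100)   # placeholder for e.g. 100 sampled theta values
    lags = np.arange(len(chain))   # k = 0, ..., n-1; at lag k only n - k pairs enter the sums
    acf = [autocorrelation(chain, k) for k in lags]
    # acf[0] is 1 by construction; the later values decay towards zero.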

    Many thanks in advance,
    Greetings Hans

  2. #2
    Banned
    Joined
    Mar 2009
    Posts
    11
    How are the thetas distributed? They obviously are not independent, otherwise you would not be calculating the autocorrelations. I suppose they are identically Gaussian distributed? If so, what is the joint pdf that you are using to simulate the thetas?


