# Thread: Metropolis Monte Carlo Autocorrelation

1. ## Metropolis Monte Carlo Autocorrelation

Hello,

I do not quite understand yet how to assess the convergence of a Metropolis run using the autocorrelation. I think I get the principle: the less autocorrelation there is in the Markov chain, the more independent the obtained samples are.
For each parameter $\theta$ in my model, I calculate the autocorrelation $p_k$ at time lag $k$ as follows:

$p_k = \frac{\mathrm{Cov}(\theta_t, \theta_{t+k})}{\mathrm{Var}(\theta_t)} = \frac{\sum_{t=1}^{n-k}(\theta_t - \bar{\theta})(\theta_{t+k}-\bar{\theta})}{\sum_{t=1}^{n-k}(\theta_t - \bar{\theta})^2}$

where $\bar{\theta}$ is the mean of $\theta$.
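In code, assuming the samples of one parameter are stored in a NumPy array `chain` (the array name is just for illustration), my estimator looks like this:

```python
import numpy as np

def autocorr(chain, k):
    """Sample autocorrelation of a 1-D chain at lag k, matching the
    estimator above: the covariance over the n - k overlapping pairs,
    divided by the variance sum over the same n - k terms."""
    n = len(chain)
    centered = chain - chain.mean()
    cov_k = np.sum(centered[: n - k] * centered[k:])
    var = np.sum(centered[: n - k] ** 2)
    return cov_k / var
```

At lag $k = 0$ the two sums coincide, so the function returns exactly 1, as expected.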

In this way I calculate the autocorrelation of each parameter at every time lag $k$. So when I do 100 steps of my Metropolis algorithm, I calculate the autocorrelations for $k = 0, \dots, 100$. Of course, the autocorrelation at lag 0 is 1, and it then drops to around zero. But it drops to around zero in every case, no matter how many steps I do. When I plot the autocorrelation of a simulation with 1000 steps, I see that each parameter's autocorrelation dies out after about 1/4 of the steps. But if I do only 10 steps, I see the same thing: after about 1/4 of the steps each parameter appears converged, which cannot be right. Where is my mistake in thinking? How can I really use the autocorrelation to assess that my ensemble has converged?
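To make the puzzle concrete, here is a minimal sketch of the effect I see, using an AR(1) chain as a stand-in for one parameter of my Metropolis output (its true lag-1 autocorrelation `phi` is known by construction, and here I normalise by the full-sample variance, the standard ACF estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi=0.95):
    """AR(1) chain with known lag-1 autocorrelation phi; a
    stand-in for a slowly mixing Metropolis chain."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def acf(chain):
    """Empirical autocorrelation at every lag 0..n-1, normalised
    by the full-sample variance."""
    n = len(chain)
    c = chain - chain.mean()
    var = np.sum(c ** 2)
    return np.array([np.sum(c[: n - k] * c[k:]) / var for k in range(n)])

# Both a short and a long chain produce an ACF that starts at 1 and
# decays toward zero within the chain length, even though the true
# correlation structure is identical in both cases.
short_acf = acf(ar1(10))
long_acf = acf(ar1(1000))
```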

Greetings Hans

2. How are the thetas distributed? They obviously are not independent, otherwise you would not be calculating autocorrelations. I suppose they are identically Gaussian distributed? If so, what is the joint pdf that you are using to simulate the thetas?

Originally Posted by straussvogel