Originally Posted by **straussvogel**

Hello,

I do not quite understand yet how to assess the convergence of a Metropolis run using the autocorrelation. I think I get the principle: the less autocorrelation there is in the Markov chain, the more independent the obtained samples are.

For each parameter $\displaystyle \theta$ in my model, I calculate the autocorrelation $\displaystyle p_k$ at time lag $\displaystyle k$ as follows:

$\displaystyle p_k = \frac{\operatorname{Cov}(\theta_t, \theta_{t+k})}{\operatorname{Var}(\theta_t)} = \frac{\sum_{t=1}^{n-k}(\theta_t - \bar{\theta})(\theta_{t+k}-\bar{\theta})}{\sum_{t=1}^{n}(\theta_t - \bar{\theta})^2}$

where $\displaystyle \bar{\theta}$ is the sample mean of the chain $\displaystyle \theta_1, \dots, \theta_n$.
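The estimator above can be sketched in a few lines. This is a minimal illustration, not the poster's actual code; it assumes the chain for one parameter is stored as a 1-D array, and it normalizes by the full-chain sum of squares so that the lag-0 value is exactly 1:

```python
import numpy as np

def autocorrelation(theta, max_lag):
    """Sample autocorrelation of a 1-D chain for lags 0..max_lag.

    Numerator: sum over t = 1..n-k of (theta_t - mean)(theta_{t+k} - mean).
    Denominator: sum over t = 1..n of (theta_t - mean)^2.
    """
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    centered = theta - theta.mean()
    denom = np.sum(centered ** 2)
    rho = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        # Products of pairs separated by lag k
        rho[k] = np.sum(centered[: n - k] * centered[k:]) / denom
    return rho
```

By construction `rho[0] == 1`, and for a strongly correlated chain (e.g. a random walk) the values decay slowly toward zero.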

In this way I calculate the autocorrelation of each parameter at every time lag $\displaystyle k$. So when I do 100 steps of my Metropolis algorithm, I calculate all autocorrelations for $\displaystyle k = 0, \dots, 100$. Of course, the autocorrelation at lag 0 is 1, and it then drops to around zero. But it drops to around zero in every case, no matter how many steps I do. When I plot the autocorrelation of an MC simulation with 1000 steps, I see that each parameter converges after about 1/4 of the steps. But if I do only 10 steps, I see the same thing: each parameter converges after about 1/4 of the steps, which cannot be right. Where is the mistake in my thinking? How can I really use the autocorrelation to assess that my ensemble has converged?
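To make the setup concrete, here is a toy random-walk Metropolis sampler one could run the autocorrelation check on. The standard-normal target and the `proposal_scale` value are hypothetical stand-ins for the actual model, purely so the experiment is reproducible:

```python
import numpy as np

def metropolis_gaussian(n_steps, proposal_scale=0.5, seed=0):
    """Random-walk Metropolis chain targeting a standard normal.

    Toy example: target density pi(x) ~ exp(-x^2 / 2), so the
    log acceptance ratio is (x^2 - x'^2) / 2.
    """
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    x = 0.0
    for t in range(n_steps):
        prop = x + rng.normal(scale=proposal_scale)
        # Accept with probability min(1, pi(prop) / pi(x))
        if np.log(rng.uniform()) < 0.5 * (x ** 2 - prop ** 2):
            x = prop
        chain[t] = x
    return chain
```

Running this for 10 steps versus 1000 steps and plotting the autocorrelation of each chain against lag $k$ reproduces the situation described above, since the small proposal scale makes successive samples strongly correlated.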

Many thanks in advance,

Greetings Hans