Hi, I was wondering if you could help me. I'm a little stuck marrying up the ideas of autocorrelation from a statistician's point of view and an engineer's point of view. I am neither of these, but I need to understand how the two relate to one another.

The statistician's point of view:

Chris Chatfield, in 'The Analysis of Time Series' (p. 23), defines the autocovariance coefficient at lag $k$ for a discrete time series $\displaystyle \left\{x_t\right\}_{t=1}^N$ as

$\displaystyle c_k=\frac{1}{N}\sum_{t=1}^{N}(x_t-\overline{x})\,(x_{(t+k)\bmod N}-\overline{x})$

I'm taking my time series to have period N here. Then the autocorrelation at lag k is defined as,

$\displaystyle r_k=\frac{c_k}{c_0}$

The autocorrelation is normalised by the autocovariance at lag 0 and centred about the mean. I understand that the method looks to identify any linear relationships present in the data at differing lags.
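To check I've read the definition correctly, here's a little NumPy sketch of $c_k$ and $r_k$ as I understand them (the function name and the toy series are my own; the wrap-around comes from treating the series as having period $N$):

```python
import numpy as np

def autocorr_stat(x, k):
    """Circular sample autocovariance c_k and autocorrelation r_k
    (mean-centred, normalised by c_0, series taken to have period N)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    d = x - x.mean()
    # np.roll(d, -k)[t] == d[(t + k) % N], i.e. the lagged, centred series
    c_k = (d * np.roll(d, -k)).sum() / N
    c_0 = (d * d).sum() / N
    return c_k, c_k / c_0

x = [2.0, 4.0, 6.0, 4.0, 2.0, 0.0]   # a short, periodic-looking toy series
c1, r1 = autocorr_stat(x, 1)          # r1 = 5/11, so some positive lag-1 association
```

By construction $r_0 = 1$, and the Cauchy–Schwarz inequality keeps every $r_k$ in $[-1, 1]$, which matches my reading of it as a correlation.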

Another definition in the same book is the autocorrelation function,

$\displaystyle \rho(\tau)=\frac{\mathrm{Cov}[X(t),X(t+\tau)]}{\mathrm{Var}[X(t)]}$

Is this the form of the autocorrelation at lag $k$, but for lag $\displaystyle \tau$ and an infinite time series? Can I use both forms for unrealised random variables?
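My guess is that the sample $r_k$ estimates this theoretical function. As a sanity check (purely my own illustration, not from either book), I simulated an AR(1) process $X_t=\phi X_{t-1}+\varepsilon_t$, whose theoretical autocorrelation function is $\phi^{|\tau|}$, and compared the sample value at lag 1 with $\phi$:

```python
import numpy as np

# Hypothetical illustration: for a stationary AR(1) process the theoretical
# ACF is rho(tau) = phi**abs(tau), so the sample lag-1 autocorrelation of a
# long realisation should sit close to phi.
rng = np.random.default_rng(0)
phi, N = 0.6, 100_000
x = np.empty(N)
x[0] = rng.standard_normal()
for t in range(1, N):
    x[t] = phi * x[t - 1] + rng.standard_normal()

d = x - x.mean()
r1 = (d[:-1] * d[1:]).sum() / (d * d).sum()   # sample autocorrelation at lag 1
```

For this run `r1` lands close to `phi = 0.6`, which is what I'd expect if the sample statistic is estimating the theoretical ratio above.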

The engineers define the autocorrelation at lag $k$, in 'Digital Signal Processing' by Sanjit Mitra (p. 105), to be

$\displaystyle r_k(x)=\frac{1}{N}\sum_{t=1}^{N}x_t\,x_{(t-k)\bmod N}$

This form is neither centred about the mean nor normalised, and the lag is taken backwards in time. I believe the engineers are checking for patterns in the time series in just the same way, but only wish to determine whether the result is consistent with white noise or not. Does anyone know whether that is really what is being done, or am I missing something?
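To see what the raw form does on white noise, here's a second sketch (again my own naming and data): for zero-mean white noise, $r_0(x)$ should sit near the noise variance and $r_k(x)$ near zero for $k \neq 0$, which is the pattern I imagine the engineers are looking for.

```python
import numpy as np

def autocorr_eng(x, k):
    """Engineer's raw circular autocorrelation at lag k:
    no mean-centring, no normalisation, lag taken backwards in time."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # np.roll(x, k)[t] == x[(t - k) % N], i.e. the series shifted back by k
    return (x * np.roll(x, k)).sum() / N

rng = np.random.default_rng(1)
w = rng.standard_normal(10_000)   # unit-variance white noise
r0 = autocorr_eng(w, 0)           # close to the noise variance (about 1)
r5 = autocorr_eng(w, 5)           # close to 0 away from lag 0
```

Note that for a circular sum the backwards lag makes no real difference, since $\sum_t x_t x_{(t-k)\bmod N}=\sum_s x_s x_{(s+k)\bmod N}$ after re-indexing, so the two conventions seem to agree up to centring and normalisation.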

Thanks :-)