Stationary vs. weakly dependent time series
I am a little bit confused about the two terms stationarity and weak dependence. Can someone explain to me the difference between these two concepts?
From my point of view, (covariance) stationarity means that E(x_t) and Var(x_t) are constant over time, and that Cov(x_t, x_{t+h}) depends only on the lag h, not on t.
Ok, so far I can follow. The definition of weak dependence: Corr(x_t, x_{t+h}) -> 0 as h -> infinity.
I also understand that. But most textbooks state that the correlation has to approach zero "sufficiently quickly". What does "sufficiently quickly" mean?
Now here is my confusion. When a time series is not weakly dependent, i.e. the correlation does not approach zero as h approaches infinity, does that mean that the time series is also not stationary? If so, what is the difference between the two? Can there be time series that are stationary but not weakly dependent? Not stationary but weakly dependent? Both stationary and weakly dependent?
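To make this concrete for myself, here is a little simulation sketch (my own toy example; the AR(1) coefficient 0.5 and the lag 20 are arbitrary choices): an AR(1) process is stationary and weakly dependent, while a random walk is neither, and the sample autocorrelations show the difference.

```python
# Toy comparison: sample autocorrelation of a stationary, weakly dependent
# AR(1) process vs. a random walk (not stationary, not weakly dependent).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
eps = rng.standard_normal(n)

# AR(1): x_t = 0.5 * x_{t-1} + eps_t, so Corr(x_t, x_{t+h}) = 0.5**h -> 0
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

# Random walk: y_t = y_{t-1} + eps_t; correlations do not die out
rw = np.cumsum(eps)

def sample_acf(x, h):
    """Sample autocorrelation of x at lag h."""
    x = x - x.mean()
    return np.dot(x[:-h], x[h:]) / np.dot(x, x)

print(sample_acf(ar1, 20))  # near zero (theoretical value 0.5**20)
print(sample_acf(rw, 20))   # still close to 1
```

At lag 20 the AR(1) correlation is essentially gone, while the random walk's is still near one — which is my mental picture of "weakly dependent" vs. not.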
What is a good way to test for stationarity? In my opinion, the Dickey-Fuller test does the job. But what about weak dependence? How do we test for that?
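For illustration, here is a stripped-down sketch of the Dickey-Fuller idea as I understand it (simplest case only: no constant term, no lagged differences — in practice I would call a library routine such as statsmodels' adfuller, which handles those details and the critical values properly):

```python
# Minimal Dickey-Fuller sketch: regress dy_t on y_{t-1} with no intercept
# and look at the t-statistic of the slope (unit root means slope = 0).
import numpy as np

def df_tstat(y):
    """t-statistic for rho in  dy_t = rho * y_{t-1} + e_t."""
    dy = np.diff(y)
    ylag = y[:-1]
    rho = np.dot(ylag, dy) / np.dot(ylag, ylag)   # OLS slope, no intercept
    resid = dy - rho * ylag
    s2 = np.dot(resid, resid) / (len(dy) - 1)     # residual variance
    se = np.sqrt(s2 / np.dot(ylag, ylag))         # standard error of rho
    return rho / se

rng = np.random.default_rng(1)
eps = rng.standard_normal(2000)

ar1 = np.zeros(2000)                # stationary AR(1) with phi = 0.5
for t in range(1, 2000):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]
rw = np.cumsum(eps)                 # random walk (unit root)

# The 5% critical value for this no-constant case is about -1.95:
print(df_tstat(ar1))  # strongly negative -> reject a unit root
print(df_tstat(rw))   # much closer to zero -> cannot reject a unit root
```

The statistic does not follow the usual t distribution under the null, which is why the tabulated Dickey-Fuller critical values are needed.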
I am sorry, I couldn't come up with one single question, because I don't really get the whole concept yet. So I'll be thankful for any comments that clarify the difference between stationarity and weak dependence. Thanks a lot.
Re: Stationary vs. weakly dependent time series
With stationarity, you have weak stationarity and strong (strict) stationarity. Strong stationarity requires that the joint distribution of any finite collection of the variables (x_{t1}, ..., x_{tk}) is unchanged when you shift every index by the same amount h — in other words, the joint distribution is invariant to sliding the whole window along the time axis. (If second moments exist, strong stationarity implies weak stationarity.)
Weak stationarity requires a constant mean (not necessarily zero), a constant finite variance, and an autocovariance with respect to the lag h that depends only on h, not on the time index (I'd suggest you look at your textbook for the exact statement).
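As a quick numerical illustration of that last condition (my own toy example, with an arbitrary AR(1) coefficient of 0.8): for a stationary series, the covariance at a fixed lag looks the same whether you measure it early or late in the series, and matches the theoretical value.

```python
# For a stationary AR(1), Cov(x_t, x_{t+h}) depends only on h, not on t:
# estimate it from an early window and a late window and compare.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + eps[t]

def cov_at(x, start, length, h):
    """Sample Cov(x_t, x_{t+h}) using only t in [start, start + length)."""
    a = x[start:start + length]
    b = x[start + h:start + length + h]
    return np.mean((a - a.mean()) * (b - b.mean()))

early = cov_at(x, 0, 45_000, 5)
late = cov_at(x, 50_000, 45_000, 5)
theory = 0.8**5 / (1 - 0.8**2)   # phi**h * sigma**2 / (1 - phi**2)
print(early, late, theory)       # all roughly equal
```

For a trending or random-walk series, the two window estimates would drift apart, which is one informal way to see non-stationarity.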
If you are studying time series, then you will have to get used to analyses like plotting the autocorrelation and partial autocorrelation functions, as well as applying filters to the data.
The point of doing these (along with a lot of transformations) is to see how the data look in the context of a particular model. If the coefficients match what is expected (or come close), then you can take that as a sign that the data fit that model.
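As a toy version of "checking whether the coefficients match what is expected" (again my own sketch, with an arbitrary true coefficient of 0.6): simulate an AR(1) with a known coefficient, estimate it, and check that both the least-squares estimate and the lag-1 autocorrelation land near the true value, as the model predicts.

```python
# Fit an AR(1) to simulated data and compare the estimates to the truth.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
phi_true = 0.6

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# OLS estimate of phi in x_t = phi * x_{t-1} + e_t (no intercept)
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

# Lag-1 sample autocorrelation: for an AR(1) this should also be near phi
xc = x - x.mean()
acf1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)

print(phi_hat, acf1)  # both close to 0.6
```

If the data were not well described by an AR(1) — say the ACF decayed much more slowly than phi_hat**h — that mismatch would be the sign to try a different model.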
It's the same as if, say, you assumed your sample came from a normal distribution with some mean mu and variance sigma^2. You can perform a Shapiro-Wilk test to gauge normality, and use a t-test or z-test to draw inferences about the mean. In time series it's the same idea, but the tools are a bit different (ACF, PACF, transfer functions, filters, and so on).
You will need to impose some assumptions, just as in the non-time-series example above, and then use the relevant tools for that model to analyze it.