With stationarity, you have weak stationarity and strong (strict) stationarity. A strongly stationary process with finite second moments also has all the properties of weak stationarity, but it has the additional constraint that the joint distribution of any finite collection of variables is unchanged when you shift every index by the same amount h. In other words, the joint distribution is invariant under time shifts, not just the first two moments.
Weak stationarity only requires that the mean is constant over time (often taken to be 0 after centering) and that the autocovariance between two points depends only on the lag h between them, not on their absolute position (I'd suggest you look at your textbook for the precise definitions).
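As a rough illustration (my own sketch, not part of the original answer), here is a minimal Python snippet that estimates the two quantities weak stationarity constrains, the mean and the autocorrelation at a lag, for a simulated white-noise series. The helper name `sample_acf` is my own.

```python
import random
import statistics

def sample_acf(x, lag):
    """Sample autocorrelation of the series x at the given lag."""
    n = len(x)
    mean = statistics.fmean(x)
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

random.seed(0)
# White noise is the textbook example of a weakly stationary series
noise = [random.gauss(0, 1) for _ in range(500)]

print(statistics.fmean(noise))   # close to the constant mean 0
print(sample_acf(noise, 0))      # exactly 1 by definition
print(sample_acf(noise, 1))      # close to 0 for white noise
```

For white noise the sample mean hovers near 0 and the lag-1 autocorrelation near 0; for a trending (non-stationary) series the same estimates would drift with the sample you pick.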
If you are studying time series then you will have to get used to analyses like plotting the autocorrelation function (ACF) and partial autocorrelation function (PACF), as well as applying filters to the data.
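To make the "applying filters" part concrete, here is a hedged sketch (my own example, not from the answer above) of a simple trailing moving-average filter, one of the most basic filters used to smooth a series:

```python
def moving_average(x, window):
    """Simple moving-average filter: each output point is the mean
    of the `window` most recent input points (trailing window)."""
    if window < 1 or window > len(x):
        raise ValueError("window must be between 1 and len(x)")
    out = []
    running = sum(x[:window])
    out.append(running / window)
    for t in range(window, len(x)):
        # Slide the window: add the new point, drop the oldest one
        running += x[t] - x[t - window]
        out.append(running / window)
    return out

# A jagged series gets visibly smoother after filtering
series = [1, 2, 9, 2, 3, 10, 3, 4, 11, 4]
print(moving_average(series, 3))
```

Note the filtered series is shorter by window - 1 points; other conventions pad the ends instead.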
The point of doing these (along with various transformations) is to see how the data look in the context of a particular model. If the estimated coefficients match what the model predicts (or come close), then you can take that as a sign that the data fit that model.
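For instance (my own sketch, not part of the original answer): if you believe the data follow an AR(1) model x_t = phi * x_{t-1} + e_t, the lag-1 sample autocorrelation estimates phi, so you can simulate with a known phi and check that the estimate matches what is expected:

```python
import random

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation (Yule-Walker estimate of phi for an AR(1))."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    return cov / var

random.seed(1)
phi = 0.6
x = [0.0]
for _ in range(2000):
    # Simulate an AR(1) process with known coefficient phi
    x.append(phi * x[-1] + random.gauss(0, 1))

phi_hat = lag1_autocorr(x)
print(phi_hat)  # should land close to the true phi = 0.6
```

If the estimate were far from 0.6 on data you simulated yourself, that would be a sign the estimator (or the model assumption) is off, which is exactly the kind of sanity check the paragraph above describes.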
It's the same as if, say, you assumed your sample came from a normal distribution with some mean mu and variance sigma^2. You can perform a Shapiro-Wilk test to gauge normality, and you can do a t-test (or a z-test) to draw inferences about the mean. In time series it's the same idea, but the tools are a bit different (ACF, PACF, transfer functions, filters, and so on).
You will need to impose some assumptions, just as you did in the non-time-series example above, and then proceed to use the relevant tools for that model to analyze the data.