Originally Posted by **chiro**

Hey lpd.

The first mathematical point is that the estimator would not be consistent: its variance does not shrink to zero as n grows, which means there's no real point in trying to use it for inference purposes.

For i) it means that as n grows, the variance tends to 1 rather than to 0, which raises exactly this issue of consistency.

For ii) we need to consider what a fixed variance implies about this particular estimator when we have full correlation: no matter how many observations we take, the variance has a fixed positive lower bound. Once we reach that bound the uncertainty stops improving, so this is the limit of the certainty we can get when estimating the mean from correlated variables.
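To make the lower bound concrete, here is the standard calculation for this kind of setup (an assumption on my part, since the original problem statement isn't quoted): n variables with common variance \(\sigma^2\) and common pairwise correlation \(\rho\):

```latex
\operatorname{Var}(\bar{X}_n)
  = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i)
    + \frac{1}{n^2}\sum_{i \ne j}\operatorname{Cov}(X_i, X_j)
  = \frac{\sigma^2}{n} + \frac{n-1}{n}\,\rho\sigma^2
  \;\xrightarrow[n \to \infty]{}\; \rho\sigma^2 .
```

With full correlation (\(\rho = 1\)) the limit is \(\sigma^2\) itself: the variance of the sample mean never drops below the variance of a single observation.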

In other words, at this limit the distribution of the sample mean no longer changes with n. When things are correlated there is extra uncertainty about the parameter, and regardless of what we do, we can't get better than this.

If our distribution is normal, then the best we can do for, say, a 95% interval (even with enough observations to get close to this limit) is to pin the standardized difference between the estimate and the parameter to between -1.96 and +1.96 (or close enough to it); the interval never narrows beyond that.
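A quick numerical sketch of that interval claim, again assuming the equicorrelated setup with common variance `sigma2` and correlation `rho` (these names and defaults are mine, not from the problem):

```python
import math

def mean_variance(n, rho, sigma2=1.0):
    # Variance of the sample mean of n equicorrelated variables:
    # Var(xbar) = sigma^2/n + ((n-1)/n) * rho * sigma^2
    return sigma2 / n + (n - 1) / n * rho * sigma2

def ci_half_width(n, rho, sigma2=1.0, z=1.96):
    # Half-width of an approximate 95% normal interval for the mean.
    return z * math.sqrt(mean_variance(n, rho, sigma2))

# Independent case: the interval shrinks like 1/sqrt(n).
print(ci_half_width(100, rho=0.0))   # ~0.196
# Fully correlated case: the half-width stays at 1.96*sigma for every n.
print(ci_half_width(100, rho=1.0))
print(ci_half_width(10000, rho=1.0))
```

So with `rho = 1` the 95% interval is the same width at n = 100 as at n = 10000, which is exactly the "can't do better than ±1.96" point above.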

So correlated variables really do interfere with getting a consistent estimator of our population mean parameter, and this helps demonstrate why the i.i.d. assumption (or at least something close to it) is important for consistent estimation of the population mean.
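The whole argument can be checked by simulation. The sketch below (my own illustration, using full correlation modelled as every observation being the same normal draw) compares the empirical variance of the sample mean in the i.i.d. case against the fully correlated case:

```python
import random
import statistics

def sample_mean(n, fully_correlated):
    # Draw n standard-normal observations. If fully_correlated, all n
    # observations are literally the same draw (correlation 1);
    # otherwise they are independent (i.i.d.).
    if fully_correlated:
        z = random.gauss(0.0, 1.0)
        xs = [z] * n
    else:
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    return sum(xs) / n

random.seed(0)
reps = 5000
for n in (10, 1000):
    iid = statistics.pvariance([sample_mean(n, False) for _ in range(reps)])
    cor = statistics.pvariance([sample_mean(n, True) for _ in range(reps)])
    # i.i.d. variance falls roughly like 1/n; the correlated one stays near 1.
    print(n, round(iid, 4), round(cor, 4))
```

The i.i.d. column keeps shrinking as n grows (consistency), while the correlated column sits near the variance of a single observation no matter how large n gets.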