An estimator is, by definition, a statistic, hence it's a function of the data, i.e. the sample,

so it's a random variable and it has a distribution.

Which means it has a mean and a variance.

If its mean, i.e. its expectation, is equal to the unknown parameter (which is a constant),

then that estimator is said to be unbiased.
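As a quick sanity check (a minimal sketch, not from the post — assuming a normal population with hypothetical parameter mu = 5.0), you can simulate many samples and see that the average of the sample-mean estimates lands near the true parameter:

```python
import random

# Checking unbiasedness of the sample mean by simulation.
# mu, n, reps are illustrative choices, not from the original post.
random.seed(0)
mu = 5.0      # "true" parameter (unknown in practice)
n = 30        # sample size
reps = 20000  # number of simulated samples

# Each replication: draw a sample, compute the estimator (sample mean).
estimates = []
for _ in range(reps):
    sample = [random.gauss(mu, 2.0) for _ in range(n)]
    estimates.append(sum(sample) / n)

avg_estimate = sum(estimates) / reps
print(avg_estimate)  # should land close to mu, consistent with unbiasedness
```

The average of the estimates approximates E[estimator], which is what unbiasedness is a statement about.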

Next we look at its variance.

And we'd like to see its long-term behaviour as the sample size n grows.

The limit of n times the variance (scaled so it doesn't just collapse to zero) is the asymptotic variance.
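To see that scaling in action (again a simulation sketch with made-up parameters, here a normal population with sigma = 2), note that Var(sample mean) behaves like sigma^2 / n, so n * Var stabilises at sigma^2:

```python
import random

# The variance of the sample mean shrinks like sigma^2 / n, so the scaled
# quantity n * Var(estimator) settles near sigma^2 -- the asymptotic variance.
# mu, sigma, reps, and the sample sizes are illustrative choices.
random.seed(1)
mu, sigma = 0.0, 2.0
reps = 5000

scaled = {}
for n in (10, 100, 1000):
    ests = []
    for _ in range(reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        ests.append(sum(sample) / n)
    m = sum(ests) / reps
    var = sum((e - m) ** 2 for e in ests) / (reps - 1)
    scaled[n] = n * var
    print(n, scaled[n])  # stays near sigma**2 = 4 for every n
```

Each row prints roughly the same scaled variance, even though the raw variance itself is shrinking by a factor of 10 each time.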

What you seem to want is Fisher's...

http://ocw.mit.edu/courses/mathemati...s/lecture3.pdf