Originally Posted by **heathrowjohnny**

For a sample size of $\displaystyle n = 10 $ I simulated the roll of a die with $\displaystyle k = 200 $ replications.

My sample mean and variance of the mean were $\displaystyle \bar{x} = 1.131, \ \ s_{\bar{x}}^{2} = 1.28 $

And $\displaystyle \mu = 3.5, \ \ \sigma^{2} = 2.917, \ \ \ \frac{\sigma^2}{10} = 0.2917 $.

$\displaystyle s_{\bar{x}}^{2} > 0.2917 $. Is this because my sample size $\displaystyle n = 10 $ was too small, and that's why they differ so much?
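One way to check is to rerun the simulation and see what values come out. Here is a minimal sketch of the setup described above ($n = 10$ rolls per replication, $k = 200$ replications), assuming NumPy is available; the exact numbers will vary with the random seed, but the variance of the replication means should land near $\sigma^2/n = 0.2917$, not $1.28$:

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed, chosen for reproducibility
n, k = 10, 200                  # sample size and number of replications

# Each row is one replication of n die rolls (values 1..6).
rolls = rng.integers(1, 7, size=(k, n))
means = rolls.mean(axis=1)      # one sample mean per replication

print(means.mean())             # should be near mu = 3.5
print(means.var(ddof=1))        # should be near sigma^2 / n = 2.917 / 10 = 0.2917
```

If a rerun like this gives a variance close to $0.2917$, the discrepancy in the original run points to a bookkeeping error (e.g. computing the variance of the raw rolls rather than of the replication means) rather than to $n = 10$ being too small.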