Originally Posted by **shilz222**:

> Let $\displaystyle \bold{X} $ be a discrete random variable whose set of possible values is $\displaystyle \bold{x}_j, \ j \geq 1 $. Let the probability mass function of $\displaystyle \bold{X} $ be given by $\displaystyle P \{\bold{X} = \bold{x}_j \}, \ j \geq 1 $, and suppose we are interested in calculating $\displaystyle \theta = E[h(\bold{X})] = \sum_{j=1}^{\infty} h(\bold{x}_j) P \{\bold{X} = \bold{x}_j \} $.

In some cases, why are Markov chains better for estimating $\displaystyle \theta $ than ordinary Monte Carlo simulation? And if we could evaluate the sum directly, say to calculate $\displaystyle E[\bold{X}] $, there would be no need for simulation at all, right?
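One standard answer is that Markov chain Monte Carlo (MCMC) only needs the mass function *up to a normalizing constant*: the Metropolis acceptance ratio $w(\bold{x}_j)/w(\bold{x}_i)$ cancels the constant, so you can sample from the chain's stationary distribution even when you cannot compute $P\{\bold{X} = \bold{x}_j\}$ itself (or sum over a huge state space). Below is a minimal sketch of this idea; the unnormalized weight `w`, the function `h`, and the small state space `0..20` are all hypothetical choices for illustration, chosen so the exact $\theta$ is easy to check.

```python
import random

# Hypothetical unnormalized pmf on states 0..20: w(j) proportional to (1/2)^j.
# The Metropolis chain below never needs the normalizing constant Z.
def w(j):
    return 0.5 ** j

# Hypothetical h; theta = E[h(X)] = E[X^2] here.
def h(x):
    return x * x

STATES = range(21)

# Exact theta for comparison (feasible only because this toy space is tiny).
Z = sum(w(j) for j in STATES)
theta_exact = sum(h(j) * w(j) / Z for j in STATES)

def metropolis_estimate(n_steps, seed=0):
    """Estimate E[h(X)] by averaging h along a Metropolis random walk."""
    rng = random.Random(seed)
    x = 0
    total = 0.0
    for _ in range(n_steps):
        # Symmetric proposal: step to a neighbouring state on the line.
        y = x + rng.choice([-1, 1])
        # Accept with probability min(1, w(y)/w(x)); out-of-range states
        # have weight 0 and are always rejected.
        if 0 <= y <= 20 and rng.random() < min(1.0, w(y) / w(x)):
            x = y
        total += h(x)
    return total / n_steps

est = metropolis_estimate(200_000)
print(theta_exact, est)
```

With $w(j) = (1/2)^j$ the exact value is close to $3$ (since $\sum j^2 (1/2)^j = 6$ and $Z \approx 2$), and the chain's running average should land near it. The point of the sketch is that only the *ratio* $w(y)/w(x)$ ever appears, which is exactly the situation (e.g. Bayesian posteriors, combinatorially large state spaces) where MCMC beats plain Monte Carlo, since the latter needs to sample from the pmf directly.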