# Math Help - Why would you use Markov Chains instead of Monte Carlo Simulation?

1. ## Why would you use Markov Chains instead of Monte Carlo Simulation?

Let $\bold{X}$ be a discrete random variable whose set of possible values is $\bold{x}_j, \ j \geq 1$. Let the probability mass function of $\bold{X}$ be given by $P \{\bold{X} = \bold{x}_j \}, \ j \geq 1$, and suppose we are interested in calculating $\theta = E[h(\bold{X})] = \sum_{j=1}^{\infty} h(\bold{x}_j) P \{\bold{X} = \bold{x}_j \}$.

In some cases, why are Markov chains better for estimating $\theta$ than Monte Carlo simulation? If we wanted to calculate $E[\bold{X}]$, there would not be any need to use simulation at all, right?
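For concreteness, here is a minimal sketch of the plain Monte Carlo estimator being asked about: draw i.i.d. samples of $\bold{X}$ directly from its mass function and average $h$ over them. The pmf and $h$ below are made-up examples, not from the thread.

```python
import random

# Hypothetical example: X takes values 1, 2, 3 with probabilities
# 0.5, 0.3, 0.2, and h(x) = x^2, so the exact value is
# theta = 0.5*1 + 0.3*4 + 0.2*9 = 3.5.
values = [1, 2, 3]
probs = [0.5, 0.3, 0.2]
h = lambda x: x ** 2

def monte_carlo_estimate(n, seed=0):
    """Plain Monte Carlo: average h over n i.i.d. draws from the pmf."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.choices(values, weights=probs)[0]
        total += h(x)
    return total / n

print(monte_carlo_estimate(100_000))  # close to the exact value 3.5
```

By the law of large numbers this average converges to $\theta$, with a standard error shrinking like $1/\sqrt{n}$.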

2. Originally Posted by shilz222
Let $\bold{X}$ be a discrete random variable whose set of possible values is $\bold{x}_j, \ j \geq 1$. Let the probability mass function of $\bold{X}$ be given by $P \{\bold{X} = \bold{x}_j \}, \ j \geq 1$, and suppose we are interested in calculating $\theta = E[h(\bold{X})] = \sum_{j=1}^{\infty} h(\bold{x}_j) P \{\bold{X} = \bold{x}_j \}$.

In some cases, why are Markov chains better for estimating $\theta$ than Monte Carlo simulation? If we wanted to calculate $E[\bold{X}]$, there would not be any need to use simulation at all, right?

1. Speed of convergence and bounds on the error
2. It may be that the Markov chain summation can be done analytically, in which case the error is zero.

RonL
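One standard setting where a Markov chain helps (an assumption; the thread does not name a specific scheme) is when the pmf is known only up to a normalizing constant, so direct sampling is awkward but a Metropolis chain needs only ratios of the unnormalized weights. A sketch with hypothetical weights:

```python
import random

# Metropolis chain targeting a pmf known only up to a constant.
# States are 0..m-1 with unnormalized weights w[j]; the chain never
# needs the normalizing constant sum(w).
w = [1.0, 4.0, 9.0, 16.0]   # hypothetical unnormalized weights
h = lambda j: j              # estimate theta = E[X]

def mcmc_estimate(n, seed=0):
    rng = random.Random(seed)
    m = len(w)
    x = 0
    total = 0.0
    for _ in range(n):
        y = rng.randrange(m)                      # symmetric proposal: uniform
        if rng.random() < min(1.0, w[y] / w[x]):  # Metropolis acceptance
            x = y
        total += h(x)
    return total / n

# Exact value for comparison: sum(j * w[j]) / sum(w) = 70/30 = 2.333...
print(mcmc_estimate(200_000))
```

The long-run average of $h$ along the chain converges to $\theta$ under the target distribution, which is the ergodic-theorem analogue of the i.i.d. law of large numbers.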

3. Both rely on the law of large numbers, right: $\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} h(\bold{X}_i) = \theta$? And why do we care about time reversibility of Markov chains when the process being modelled is irreversible?
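On the reversibility point, one common answer (stated here as general background, not from the thread): samplers like Metropolis are built to be reversible because detailed balance, $\pi_i p_{ij} = \pi_j p_{ji}$, is an easy-to-verify sufficient condition for $\pi$ to be the stationary distribution, regardless of whether the real-world process is reversible. A numeric check on a hypothetical 3-state chain:

```python
# Detailed balance check for a hypothetical 3-state Metropolis chain.
# If pi[i] * P[i][j] == pi[j] * P[j][i] for all i, j, then pi is stationary.
w = [1.0, 2.0, 3.0]              # unnormalized target weights (made up)
Z = sum(w)
pi = [wi / Z for wi in w]
m = len(w)

# Metropolis transition matrix with a uniform proposal over all m states.
P = [[0.0] * m for _ in range(m)]
for i in range(m):
    for j in range(m):
        if i != j:
            P[i][j] = (1.0 / m) * min(1.0, w[j] / w[i])
    P[i][i] = 1.0 - sum(P[i][j] for j in range(m) if j != i)

# Verify detailed balance pairwise.
for i in range(m):
    for j in range(m):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
print("detailed balance holds")
```

Summing detailed balance over $i$ gives $\sum_i \pi_i p_{ij} = \pi_j$, i.e. stationarity, which is why the construction is convenient.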