Converting between conditional distributions for Gibbs sampling using Bayes' theorem

May 2010
Hi,

I am working on writing a Gibbs sampler to do hypothesis testing in a Bayesian framework. I have data giving the number of occurrences \(\displaystyle x\) in a sample of \(\displaystyle n\) trials. The particular model of interest is a three-layered hierarchical model with the following distributions (a short simulation sketch follows the list):

1) The number of occurrences \(\displaystyle x\) given \(\displaystyle p\) is drawn from a Binomial distribution, \(\displaystyle f_{X|P}(x|p) = {n \choose x}p^{x}(1-p)^{n-x}\).

2) The distribution of \(\displaystyle p\) given the hyperparameter \(\displaystyle m\) is a Beta distribution, \(\displaystyle f_{P|M}(p|m) = \frac{\Gamma(m)}{\Gamma(mq)\Gamma(m(1-q))} \; p^{mq-1}(1-p)^{m(1-q)-1}\).
Here \(\displaystyle q\) is a known constant between 0 and 1. Note that the shape parameters \(\displaystyle \alpha = mq\) and \(\displaystyle \beta=m(1-q)\) are chosen so that the mean of the Beta distribution is \(\displaystyle q\).

3) \(\displaystyle m\) is drawn from a uniform distribution \(\displaystyle f_{M}(m)\) on \(\displaystyle (\ell,\infty)\), where \(\displaystyle \ell=\max(1/q,1/(1-q))\); this ensures that both shape parameters are at least 1, so the density \(\displaystyle f_{P|M}(p|m)\) is log-concave (bounded and unimodal, with no poles at 0 or 1).
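
To make the setup concrete, here is a minimal sketch of a single draw from the model (Python/NumPy; the values of \(\displaystyle q\), \(\displaystyle n\), the seed, and the upper truncation on \(\displaystyle m\) are placeholders I picked purely for illustration, since the uniform prior on \(\displaystyle (\ell,\infty)\) has no finite upper bound to sample from directly):

```python
import numpy as np

q = 0.3                                  # known constant in (0, 1) -- placeholder value
n = 50                                   # number of trials -- placeholder value
ell = max(1.0 / q, 1.0 / (1.0 - q))      # lower bound of the prior on m

rng = np.random.default_rng(0)

# The prior on m is uniform on (ell, infinity); to draw an example value I
# truncate it at an arbitrary upper bound, just for illustration.
m = rng.uniform(ell, ell + 100.0)

p = rng.beta(m * q, m * (1.0 - q))       # layer 2: p | m ~ Beta(mq, m(1-q)), mean q
x = rng.binomial(n, p)                   # layer 1: x | p ~ Binomial(n, p)
print(m, p, x)
```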

If I am understanding the Gibbs sampling procedure correctly, it goes something like this (a rough code sketch of the loop follows the list).
Start by generating a value for \(\displaystyle m\) from the uniform distribution. Then:
1) Given this \(\displaystyle m\) generate a value for \(\displaystyle p\) from \(\displaystyle f_{P|M}(p|m)\).
2) Given \(\displaystyle p\) generate a value for \(\displaystyle x\) from \(\displaystyle f_{X|P}(x|p)\).
3) Given \(\displaystyle x\) generate a new value for \(\displaystyle p\) from \(\displaystyle f_{P|X}(p|x)\).
4) Given this new \(\displaystyle p\) generate a new value for \(\displaystyle m\) from \(\displaystyle f_{M|P}(m|p)\).
5) Repeat steps 1-4.
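
To check my own understanding, here is roughly what I have in mind in code (a Python/NumPy sketch, not a finished implementation: the values of \(\displaystyle q\) and \(\displaystyle n\), the grid-based stand-in for step 4, and the assumption in step 3 that the current \(\displaystyle m\) is carried along, giving the conjugate Beta\(\displaystyle (mq+x,\, m(1-q)+n-x)\) conditional, are all my own choices):

```python
import numpy as np
from scipy.special import gammaln

q, n = 0.3, 50                               # placeholder values, as before
ell = max(1.0 / q, 1.0 / (1.0 - q))
rng = np.random.default_rng(1)

def log_f_p_given_m(p, m):
    """Log of the Beta(mq, m(1-q)) density evaluated at p."""
    return (gammaln(m) - gammaln(m * q) - gammaln(m * (1.0 - q))
            + (m * q - 1.0) * np.log(p) + (m * (1.0 - q) - 1.0) * np.log(1.0 - p))

def draw_m_given_p(p, m_max=500.0, grid_size=2000):
    """Step 4 stand-in: crude discretized draw from the unnormalized
    conditional f(m | p) proportional to f(p | m) on (ell, m_max).
    This is the step I am unsure about -- see my question below."""
    grid = np.linspace(ell, m_max, grid_size)
    log_w = log_f_p_given_m(p, grid)
    w = np.exp(log_w - log_w.max())
    return rng.choice(grid, p=w / w.sum())

m = rng.uniform(ell, ell + 100.0)            # initial m (arbitrary truncation)
draws = []
for _ in range(5000):
    p = rng.beta(m * q, m * (1.0 - q))       # step 1: p | m
    x = rng.binomial(n, p)                   # step 2: x | p
    # Step 3: p | x. Assuming the current m is carried along, conjugacy
    # gives p | x, m ~ Beta(mq + x, m(1-q) + n - x).
    p = rng.beta(m * q + x, m * (1.0 - q) + n - x)
    m = draw_m_given_p(p)                    # step 4: m | p
    draws.append((m, p, x))
```

The grid draw in draw_m_given_p is only a placeholder; what I would really like is the proper conditional \(\displaystyle f_{M|P}(m|p)\), which brings me to my problem.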

I am having trouble with step 4, since I need the conditional distribution \(\displaystyle f_{M|P}(m|p)\). I should be able to get this from Bayes' theorem, since \(\displaystyle f_{M|P}(m|p) = \frac{f_{P|M}(p|m)f_{M}(m)}{\int f_{P|M}(p|m)f_{M}(m)\,\mathrm{d}m}\).
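
Since \(\displaystyle f_{M}(m)\) is flat on \(\displaystyle (\ell,\infty)\), it cancels from the numerator and denominator, so everything reduces to

\(\displaystyle f_{M|P}(m|p) = \frac{\frac{\Gamma(m)}{\Gamma(mq)\Gamma(m(1-q))}\, p^{mq-1}(1-p)^{m(1-q)-1}}{\int_{\ell}^{\infty} \frac{\Gamma(m')}{\Gamma(m'q)\Gamma(m'(1-q))}\, p^{m'q-1}(1-p)^{m'(1-q)-1}\,\mathrm{d}m'}, \qquad m > \ell.\)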

I have been unable to calculate the normalization integral that appears in the denominator above:

\(\displaystyle \int_{\ell}^{\infty} \frac{\Gamma(m)}{\Gamma(mq)\Gamma(m(1-q))} \; p^{mq-1}(1-p)^{m(1-q)-1}\; \mathrm{d}m\).
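
For what it's worth, I can at least evaluate the integral numerically for a fixed \(\displaystyle p\) (sketch below using scipy.integrate.quad, with placeholder values for \(\displaystyle p\) and \(\displaystyle q\)), but I would much prefer a closed form if one exists:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

q, p = 0.3, 0.2                          # placeholder values for the known q and a fixed p
ell = max(1.0 / q, 1.0 / (1.0 - q))

def integrand(m):
    # Beta(mq, m(1-q)) density evaluated at p, viewed as a function of m
    log_val = (gammaln(m) - gammaln(m * q) - gammaln(m * (1.0 - q))
               + (m * q - 1.0) * np.log(p) + (m * (1.0 - q) - 1.0) * np.log(1.0 - p))
    return np.exp(log_val)

value, abs_err = quad(integrand, ell, np.inf)
print(value, abs_err)
```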

Question 1: Does anyone know of a way to evaluate this integral analytically? I have not been able to do it myself, and I have found very little on integrating a Beta density with respect to its shape parameters.

Question 2: For those readers who are familiar with Gibbs sampling and the Bayesian framework, please feel free to comment on the approach I outlined here. I am rather new at this and not entirely confident in the way I am attempting to do the sampling.

Thanks in advance,

basmati