# Thread: Expectation of a Gaussian function of z, where z is normally distributed

1. ## Expectation of a Gaussian function of z, where z is normally distributed

Hello,
This is my first post. I'm a PhD student in population genetics. I was reading a paper where the following is computed:

${\int_{-\infty}^{\infty} p(z) w(z) dz}$,

where:
$p(z) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(z-\mu)^2}{2\sigma^2}}$

and

$w(z) = e^{-\frac{1}{2} \frac{(z-\theta)^2}{\omega^2}}$

The paper that I read (Lande 1983, equations 11a and 11b, in *Heredity*: "The response to selection on major and minor mutations affecting a metrical trait") presents the result

$\hat{w} = c\, e^{-\frac{1}{2}\frac{(\mu - \theta)^2}{\omega^2 + \sigma^2}}$.

where $c$ is just some constant.

I have tried for some hours to derive the last equation myself with no luck.
Could you help me please or provide some literature that I could read and learn how to do it?

best
idaios

2. Hi,

this computation can indeed be a bit tedious, but with some care it remains manageable:

We want to compute the integral $I=\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{(z-\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}}$.

Expanding the exponent, we get $-\frac{1}{2}Az^2+Bz-\frac{1}{2}C$ where $A=\frac{1}{\sigma^2}+\frac{1}{\omega^2}$, $B=\frac{\mu}{\sigma^2}+\frac{\theta}{\omega^2}$ and $C=\frac{\mu^2}{\sigma^2}+\frac{\theta^2}{\omega^2}$.

The general method to compute $\int e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C}dz$ is to "make a square appear" in the exponent: we have $Az^2-2Bz+C=A(z-\frac{B}{A})^2+C-\frac{B^2}{A}$, hence:

$\int e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C}dz=e^{-\frac{1}{2}(C-\frac{B^2}{A})}\int e^{-\frac{A}{2}(z-\frac{B}{A})^2}dz$ $=e^{-\frac{1}{2}(C-\frac{B^2}{A})} \int e^{-\frac{A}{2}u^2}du$ (letting $u=z-\frac{B}{A}$), hence for any $A>0, B, C$:

$\int e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C}dz=e^{-\frac{1}{2}(C-\frac{B^2}{A})} \sqrt{\frac{2\pi}{A}}$.
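As a quick sanity check (my own addition, not part of the derivation), this general formula can be verified numerically with a plain midpoint rule over a wide interval; the values of $A$, $B$, $C$ below are arbitrary:

```python
import math

def integrand(z, A, B, C):
    # e^{-(1/2) A z^2 + B z - (1/2) C}
    return math.exp(-0.5 * A * z * z + B * z - 0.5 * C)

def midpoint_integral(A, B, C, lo=-50.0, hi=50.0, n=200_000):
    # the integrand decays like a Gaussian, so a wide finite interval suffices
    h = (hi - lo) / n
    return h * sum(integrand(lo + (k + 0.5) * h, A, B, C) for k in range(n))

A, B, C = 1.7, 0.4, 2.3  # arbitrary test values with A > 0
closed_form = math.sqrt(2 * math.pi / A) * math.exp(-0.5 * (C - B * B / A))
print(abs(midpoint_integral(A, B, C) - closed_form) < 1e-6)  # True
```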

You have to divide by $\sqrt{2\pi\sigma^2}$ to get $I$, thus $I=\frac{1}{\sqrt{1+\frac{\sigma^2}{\omega^2}}}e^{-\frac{1}{2}(C-\frac{B^2}{A})}$. All you have to do now is see that $C-\frac{B^2}{A}=\frac{(\mu-\theta)^2}{\sigma^2+\omega^2}$.

In order to make this last step easier, you can notice the following: if we let $a=\frac{1}{\sigma}$, $b=\frac{1}{\omega}$, $c=\frac{\mu}{\sigma}$, $d=\frac{\theta}{\omega}$, then $A=a^2+b^2$, $B=ac+bd$ and $C=c^2+d^2$, and we have the following beautiful identity (for any $a,b,c,d$):

$(a^2+b^2)(c^2+d^2)=(ac+bd)^2+(ad-bc)^2$,

so that $C-\frac{B^2}{A}=\frac{AC-B^2}{A}=\frac{(ad-bc)^2}{A}$ $=\frac{\left(\frac{\mu}{\sigma\omega}-\frac{\theta}{\sigma\omega}\right)^2}{\frac{1}{\sigma^2}+\frac{1}{\omega^2}}=\frac{(\mu-\theta)^2}{\omega^2+\sigma^2}$. qed.

About the previous identity with $a,b,c,d$ (a classical identity, often called the Brahmagupta–Fibonacci identity): it can be seen as an expression of the formula $|z|^2|z'|^2=|zz'|^2$ where $z=a+ib$, $z'=c+id$. Of course this identity is not absolutely necessary here; you can just expand and watch many terms cancel. There is a reason why this identity is involved: the integral can be written $\int e^{-\frac{1}{2}|uz-v|^2}dz$ where $u=\frac{1}{\sigma}+i\frac{1}{\omega}$, $v=\frac{\mu}{\sigma}+i\frac{\theta}{\omega}$. But the justification of the computation is more delicate if we do it this way.
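A quick numerical spot-check of the identity, and of its complex-number reading, is easy to run (my own addition; the random values are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(5):
    a, b, c, d = (random.uniform(-3.0, 3.0) for _ in range(4))
    lhs = (a * a + b * b) * (c * c + d * d)
    rhs = (a * c + b * d) ** 2 + (a * d - b * c) ** 2
    assert math.isclose(lhs, rhs)
    # same fact as |z|^2 |z'|^2 = |z z'|^2 with z = a+ib, z' = c+id
    z, zp = complex(a, b), complex(c, d)
    assert math.isclose(abs(z) ** 2 * abs(zp) ** 2, abs(z * zp) ** 2)
print("identity verified")
```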

As a conclusion, $\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{(z-\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}}=\frac{1}{\sqrt{1+\frac{\sigma^2}{\omega^2}}}e^{-\frac{(\mu-\theta)^2}{2(\sigma^2+\omega^2)}}$.
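To double-check this conclusion directly (again my own addition, with arbitrary parameter values), compare the closed form against a brute-force numerical integration of the product of the two Gaussian factors:

```python
import math

mu, sigma, theta, omega = 1.3, 0.7, -0.4, 1.1  # arbitrary test values

def integrand(z):
    p = math.exp(-(z - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
    w = math.exp(-(z - theta) ** 2 / (2 * omega ** 2))
    return p * w

# midpoint rule; the integrand decays like a Gaussian, so [-40, 40] is plenty
lo, hi, n = -40.0, 40.0, 200_000
h = (hi - lo) / n
numeric = h * sum(integrand(lo + (k + 0.5) * h) for k in range(n))

closed = math.exp(-(mu - theta) ** 2 / (2 * (sigma ** 2 + omega ** 2))) \
         / math.sqrt(1 + sigma ** 2 / omega ** 2)
print(abs(numeric - closed) < 1e-6)  # True
```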

Feel free to ask for details if something's unclear.

3. I've just noticed I have another less elementary but very short and very good reason for this answer...

Remember that the sum of independent Gaussian random variables is a Gaussian whose mean and variance are the sums of those of the summands (this is easily checked using characteristic functions). Remember as well that the probability density function (pdf) of the sum of independent random variables is the convolution of their pdf's.

Summing this up, we get, for all $x\in\mathbb{R}$, considering the sum of $\mathcal{N}(\mu,\sigma^2)$ and $\mathcal{N}(-\theta,\omega^2)$:

$\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{((x-z)+\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}\sqrt{2\pi\omega^2}}$ $=\frac{1}{\sqrt{2\pi(\sigma^2+\omega^2)}}e^{-\frac{(x-(\mu-\theta))^2}{2(\sigma^2+\omega^2)}}$.

Now let $x=0$.

(Remember the first method with $e^{-Az^2+Bz+C}$ anyway since it is of wider use than the above trick)
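The convolution argument is also easy to sanity-check by simulation (my addition; parameter values arbitrary): the sum of a draw from $\mathcal{N}(\mu,\sigma^2)$ and a draw from $\mathcal{N}(-\theta,\omega^2)$ should have mean $\mu-\theta$ and variance $\sigma^2+\omega^2$:

```python
import random
import statistics

random.seed(1)
mu, sigma, theta, omega = 1.0, 0.8, 0.5, 1.2  # arbitrary test values
n = 200_000
# sample the sum N(mu, sigma^2) + N(-theta, omega^2)
s = [random.gauss(mu, sigma) + random.gauss(-theta, omega) for _ in range(n)]
print(statistics.fmean(s))     # close to mu - theta = 0.5
print(statistics.variance(s))  # close to sigma**2 + omega**2 = 2.08
```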

4. Originally Posted by Laurent
I've just noticed I have another less elementary but very short and very good reason for this answer...

Remember that the sum of independent Gaussian random variables is a Gaussian whose mean and variance are the sums of those of the summands (this is easily checked using characteristic functions). Remember as well that the probability density function (pdf) of the sum of independent random variables is the convolution of their pdf's.

Summing this up, we get, for all $x\in\mathbb{R}$, considering the sum of $\mathcal{N}(\mu,\sigma^2)$ and $\mathcal{N}(-\theta,\omega^2)$:

$\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{((x-z)+\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}\sqrt{2\pi\omega^2}}$ $=\frac{1}{\sqrt{2\pi(\sigma^2+\omega^2)}}e^{-\frac{(x-(\mu-\theta))^2}{2(\sigma^2+\omega^2)}}$.

Now let $x=0$.

(Remember the first method with $e^{-Az^2+Bz+C}$ anyway since it is of wider use than the above trick)
Now that's nice.

I had the earlier proof but just couldn't muster the inclination to type it up. Especially since I was sure there was a simpler proof, given that both functions were Gaussians, but I hadn't had the chance to think past the main idea of using a convolution.

Beautiful posts, Laurent. There really should be a Best Statanalysis Award ....

(By the way, those who wonder whether Statistics is mathematics might do well to extrapolate their wondering and wonder whether analysis is mathematics ....)

5. Thanks a lot Laurent!!!