
Expectation of a Gaussian function of z, where z is normally distributed

  1. #1
    Newbie
    Joined
    Jan 2010
    Posts
    5

Expectation of a Gaussian function of z, where z is normally distributed

    Hello,
this is my first post. I'm a PhD student in population genetics. I was reading a paper in which the following integral is computed:

\int_{-\infty}^{\infty} p(z)\, w(z)\, dz,

    where:
     p(z) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(z-\mu)^2}{2\sigma^2}}

    and

     w(z) = e^{-\frac{1}{2} \frac{(z-\theta)^2}{\omega^2}}


The paper I was reading (Lande 1983, "The response to selection on major and minor mutations affecting a metrical trait", Heredity; equations 11a and 11b) presents the result

    \hat{w} = c e^{\frac{1}{2}\frac{(\mu - \theta)^2}{\omega^2 + \sigma^2}}.

    where c is just some constant.

I have tried for some hours to derive this last equation myself, with no luck.
Could you please help me, or point me to some literature from which I could learn how to do it?


    best
    idaios

  2. #2
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
    Hi,

this computation may be a bit tedious indeed, but with some care it remains manageable:

We want to compute the integral I=\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{(z-\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}}.

Expanding and combining the two exponents, we get -\frac{1}{2}Az^2+Bz-\frac{1}{2}C, where A=\frac{1}{\sigma^2}+\frac{1}{\omega^2}, B=\frac{\mu}{\sigma^2}+\frac{\theta}{\omega^2} and C=\frac{\mu^2}{\sigma^2}+\frac{\theta^2}{\omega^2}.

The general method to compute \int e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C}dz is to "complete the square" in the exponent: we have Az^2-2Bz+C=A(z-\frac{B}{A})^2+C-\frac{B^2}{A}, hence:

    \int e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C}dz=e^{-\frac{1}{2}(C-\frac{B^2}{A})}\int e^{-\frac{A}{2}(z-\frac{B}{A})^2}dz =e^{-\frac{1}{2}(C-\frac{B^2}{A})} \int e^{-\frac{A}{2}u^2}du (letting u=z-\frac{B}{A}), hence for any A>0, B, C:

    \int e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C}dz=e^{-\frac{1}{2}(C-\frac{B^2}{A})} \sqrt{\frac{2\pi}{A}}.
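If you want a quick sanity check of this formula before chasing the algebra, here is a small numerical sketch in Python (numpy + scipy; the values of A, B, C below are arbitrary illustrative choices):

[code]
# Numerical check of: int e^{-Az^2/2 + Bz - C/2} dz = e^{-(C - B^2/A)/2} * sqrt(2*pi/A)
import numpy as np
from scipy.integrate import quad

A, B, C = 1.7, 0.4, 2.3  # any A > 0 will do

lhs, _ = quad(lambda z: np.exp(-0.5*A*z**2 + B*z - 0.5*C), -np.inf, np.inf)
rhs = np.exp(-0.5*(C - B**2/A)) * np.sqrt(2*np.pi/A)

print(lhs, rhs)  # the two numbers agree to quad's default tolerance
[/code]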

    You have to divide by \sqrt{2\pi\sigma^2} to get I, thus I=\frac{1}{\sqrt{1+\frac{\sigma^2}{\omega^2}}}e^{-\frac{1}{2}(C-\frac{B^2}{A})}. All you have to do now is see that C-\frac{B^2}{A}=\frac{(\mu-\theta)^2}{\sigma^2+\omega^2}.
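That last simplification can also be handed to a computer algebra system; a sympy sketch, with symbol names mirroring the notation above:

[code]
# Symbolic check that C - B^2/A reduces to (mu - theta)^2 / (sigma^2 + omega^2)
from sympy import symbols, simplify

mu, theta = symbols('mu theta', real=True)
sigma, omega = symbols('sigma omega', positive=True)

A = 1/sigma**2 + 1/omega**2
B = mu/sigma**2 + theta/omega**2
C = mu**2/sigma**2 + theta**2/omega**2

print(simplify(C - B**2/A - (mu - theta)**2/(sigma**2 + omega**2)))  # prints 0
[/code]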

    In order to make this last step easier, you can notice the following: if we let a=\frac{1}{\sigma}, b=\frac{1}{\omega}, c=\frac{\mu}{\sigma}, d=\frac{\theta}{\omega}, then A=a^2+b^2, B=ac+bd and C=c^2+d^2, and we have the following beautiful identity (for any a,b,c,d):

    (a^2+b^2)(c^2+d^2)=(ac+bd)^2+(ad-bc)^2,

so that C-\frac{B^2}{A}=\frac{AC-B^2}{A}=\frac{(ad-bc)^2}{A}=\frac{\left(\frac{\mu}{\sigma\omega}-\frac{\theta}{\sigma\omega}\right)^2}{\frac{1}{\sigma^2}+\frac{1}{\omega^2}}=\frac{(\mu-\theta)^2}{\omega^2+\sigma^2}. qed.
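The identity itself can be confirmed by brute-force expansion, for instance with sympy:

[code]
# (a^2+b^2)(c^2+d^2) - (ac+bd)^2 - (ad-bc)^2 expands to 0 for all a, b, c, d
from sympy import symbols, expand

a, b, c, d = symbols('a b c d')
print(expand((a**2 + b**2)*(c**2 + d**2) - (a*c + b*d)**2 - (a*d - b*c)**2))  # 0
[/code]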

About the previous identity with a,b,c,d (dating back to Euler): it can be seen as an expression of the formula |z|^2|z'|^2=|zz'|^2 where z=a+ib, z'=c+id. Of course this identity is not absolutely necessary here; you can just expand and see that many terms cancel. There is a reason why this identity shows up: the integral can be written \int e^{-\frac{1}{2}|uz-v|^2}dz where u=\frac{1}{\sigma}+i\frac{1}{\omega}, v=\frac{\mu}{\sigma}+i\frac{\theta}{\omega}. But the justification of the computation is more delicate if we do it this way.

As a conclusion, \int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{(z-\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}}=\frac{1}{\sqrt{1+\frac{\sigma^2}{\omega^2}}}e^{-\frac{(\mu-\theta)^2}{2(\sigma^2+\omega^2)}}.

(NB: there is a minus sign in front of the exponent in the answer; it was missing in your post.)
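As an end-to-end check of this conclusion, one can compare the two sides numerically; a Python sketch with scipy (the parameter values are arbitrary):

[code]
# Left side: the integral computed by quadrature; right side: the closed form
import numpy as np
from scipy.integrate import quad

mu, theta, sigma, omega = 0.3, -1.1, 0.8, 1.5

integrand = lambda z: (np.exp(-(z - mu)**2 / (2*sigma**2))
                       * np.exp(-(z - theta)**2 / (2*omega**2))
                       / np.sqrt(2*np.pi*sigma**2))
lhs, _ = quad(integrand, -np.inf, np.inf)
rhs = np.exp(-(mu - theta)**2 / (2*(sigma**2 + omega**2))) / np.sqrt(1 + sigma**2/omega**2)

print(lhs, rhs)  # the two values agree to quad's default tolerance
[/code]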

    Feel free to ask for details if something's unclear.

  3. #3
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
I've just noticed another reason for this answer, less elementary but very short and very neat...

    Remember that the sum of independent Gaussian random variables is a Gaussian whose mean and variance are the sums of those of the summands (this is easily checked using characteristic functions). Remember as well that the probability density function (pdf) of the sum of independent random variables is the convolution of their pdf's.

Summing this up, and considering the sum of independent \mathcal{N}(\mu,\sigma^2) and \mathcal{N}(-\theta,\omega^2) random variables, we get, for all x\in\mathbb{R}:

\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{((x-z)+\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}\sqrt{2\pi\omega^2}}=\frac{1}{\sqrt{2\pi(\sigma^2+\omega^2)}}e^{-\frac{(x-(\mu-\theta))^2}{2(\sigma^2+\omega^2)}}.

    Now let x=0.

(Remember the first method with e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C} anyway, since it is of wider use than the above trick.)
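This convolution identity is just as easy to check numerically; a scipy sketch (again with arbitrary parameter values; norm.pdf(x, loc, scale) is the N(loc, scale^2) density):

[code]
# Convolution of the N(mu, sigma^2) and N(-theta, omega^2) densities at x,
# compared against the N(mu - theta, sigma^2 + omega^2) density at x
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, theta, sigma, omega = 0.3, -1.1, 0.8, 1.5
x = 0.0  # the case used above

lhs, _ = quad(lambda z: norm.pdf(z, mu, sigma) * norm.pdf(x - z, -theta, omega),
              -np.inf, np.inf)
rhs = norm.pdf(x, mu - theta, np.sqrt(sigma**2 + omega**2))

print(lhs, rhs)  # should agree to quad's default tolerance
[/code]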

  4. #4
    Flow Master
    mr fantastic's Avatar
    Joined
    Dec 2007
    From
    Zeitgeist
    Posts
    16,948
    Thanks
    5
    Quote Originally Posted by Laurent View Post
I've just noticed another reason for this answer, less elementary but very short and very neat...

    Remember that the sum of independent Gaussian random variables is a Gaussian whose mean and variance are the sums of those of the summands (this is easily checked using characteristic functions). Remember as well that the probability density function (pdf) of the sum of independent random variables is the convolution of their pdf's.

Summing this up, and considering the sum of independent \mathcal{N}(\mu,\sigma^2) and \mathcal{N}(-\theta,\omega^2) random variables, we get, for all x\in\mathbb{R}:

\int e^{-\frac{(z-\mu)^2}{2\sigma^2}}e^{-\frac{((x-z)+\theta)^2}{2\omega^2}}\frac{dz}{\sqrt{2\pi\sigma^2}\sqrt{2\pi\omega^2}}=\frac{1}{\sqrt{2\pi(\sigma^2+\omega^2)}}e^{-\frac{(x-(\mu-\theta))^2}{2(\sigma^2+\omega^2)}}.

    Now let x=0.

(Remember the first method with e^{-\frac{1}{2}Az^2+Bz-\frac{1}{2}C} anyway, since it is of wider use than the above trick.)
    Now that's nice.

I had the earlier proof but just couldn't muster the inclination to type it up, especially since I was sure there was a simpler proof, given that both functions are Gaussians, but I hadn't had the chance to think past the main idea of using a convolution.

Beautiful posts, Laurent. There really should be a Best Statanalysis Award ....


    (By the way, those who wonder whether Statistics is mathematics might do well to extrapolate their wondering and wonder whether analysis is mathematics ....)

  5. #5
    Newbie
    Joined
    Jan 2010
    Posts
    5
Thanks a lot, Laurent!!!
Beautiful answer.

    best
    pavlos
