
Conditional mean with a bivariate lognormal

  1. #1
    Newbie
    Joined
    Jan 2009
    Posts
    7

    Conditional mean with a bivariate lognormal

I know that if X and Y are independent, then E(X | X \geq Y) \geq E(X). Now I am trying to prove the same result for some correlated distributions. I have a very simple proof when (X,Y) is bivariate normal, but I am struggling to prove it for (X,Y) bivariate lognormal. Any suggestions?
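For anyone who wants to see the independent case numerically first, here is a minimal Monte Carlo sketch (the particular laws, a normal and an exponential, are arbitrary illustrative choices of mine, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Any two independent integrable laws will do; these are arbitrary choices.
x = rng.normal(1.0, 1.0, n)       # X ~ N(1, 1)
y = rng.exponential(1.0, n)       # Y ~ Exp(1), independent of X

mask = x >= y
print("E(X)          ~", x.mean())
print("E(X | X >= Y) ~", x[mask].mean())   # should come out >= E(X)
```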

  2. #2
    Newbie
    Joined
    Jan 2009
    Posts
    7
    Let me try to make the problem more "intriguing". The inequality  E(X | X \geq Y) \geq E(X) does not hold in general (it is very easy to build counterexamples). BUT it always holds for independent variables. Here is a sketch of a proof:
    For any variables X and Y it holds that
    E(X)=Pr(X<Y) E(X | X<Y) + Pr(X \geq Y) E(X | X \geq Y)
    Now consider that, for any constant y:
    E(X | X<y) \leq y \leq E(X | X \geq y)
By integrating with respect to the distribution of Y, and using independence, I obtain the result.
(A rigorous proof is somewhat longer, but at least you can see the way to get there.)

    Now can we prove my inequality for some families of correlated variables?
    With the normal we can (a proof is coming right away)
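To make the "easy counterexamples" claim concrete, here is one hypothetical construction (my own choice, not taken from the thread): take X symmetric around zero and Y = 2X, so the event X \geq Y is exactly X \leq 0 and the conditional mean drops below E(X).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)   # X ~ N(0, 1)
y = 2.0 * x                           # Y fully dependent on X

mask = x >= y                         # equivalent to x <= 0 here
print("E(X)          ~", x.mean())         # ~ 0
print("E(X | X >= Y) ~", x[mask].mean())   # ~ -0.80 < E(X)
```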

  3. #3
    Newbie
    Joined
    Jan 2009
    Posts
    7
    Here is the second part.
If (X_{1}, X_{2}) is bivariate normal with the mean of X_{i} equal to \mu_{i}, a common variance \sigma^2, and correlation \rho, then it is easy to verify that (U,V), with U= X_{1} + X_{2} and V= X_{1} - X_{2}, is bivariate normally distributed with U and V independent of each other (I find this so cool!). Then you can write X_{1}=(U+V)/2, restate X_1 \geq X_2 as V \geq 0, and, using the properties of the truncated normal, you are done. The result is (if there are no typos):
    E(X_{1} | X_{1} \geq X_{2} ) = \mu_{1} + 2^{-1/2} \sigma (1- \rho )^{1/2} \ast \lambda(\alpha)
    where \alpha = ( \mu_{2}- \mu_{1})/[2 \sigma ^{2} (1- \rho ) ]^{1/2}
and \lambda is the hazard function of the standard normal distribution
(since \lambda > 0, it follows that E(X_{1} | X_{1} \geq X_{2} ) \geq \mu_{1})
So from here we get back to my question: is it possible that the result holds for the bivariate normal but not for the bivariate lognormal?
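The closed form above can be checked against simulation; a sketch with arbitrary parameter values of my choosing (assuming scipy is available for the normal pdf and survival function):

```python
import numpy as np
from scipy.stats import norm

mu1, mu2, sigma, rho = 0.5, 1.0, 2.0, 0.3   # arbitrary test values
n = 2_000_000
rng = np.random.default_rng(0)

cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
x1, x2 = rng.multivariate_normal([mu1, mu2], cov, size=n).T

# Closed form: E(X1 | X1 >= X2) = mu1 + 2^{-1/2} sigma (1-rho)^{1/2} lambda(alpha)
alpha = (mu2 - mu1) / np.sqrt(2 * sigma**2 * (1 - rho))
lam = norm.pdf(alpha) / norm.sf(alpha)      # hazard function phi/(1-Phi)
print("closed form :", mu1 + sigma * np.sqrt((1 - rho) / 2) * lam)
print("Monte Carlo :", x1[x1 >= x2].mean())
```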

  4. #4
    Newbie
    Joined
    Jan 2009
    Posts
    7
    So here is what I have done so far with the lognormal.
If Y_{1}= \exp (X_{1}), Y_{2}= \exp (X_{2}), and (X_{1},X_{2}) is bivariate normal \sim( \mu_{1},\mu_{2}, \sigma^2 , \sigma^2, \rho) - where I have taken equal variances, as in the previous post - then, by definition, (Y_{1},Y_{2}) is bivariate lognormal. I want to show that E(Y_{1}|Y_{1} \geq Y_{2} ) \geq E(Y_{1}). To do so, it is easier to show that:
    E(Y_{1}|Y_{1} \geq Y_{2} ) \geq E(Y_{1}|Y_{1} \leq Y_{2})
Now Y_{1}|Y_{1} \geq Y_{2} and Y_{1}|Y_{1} \leq Y_{2} are both lognormal, and their means are (if I have made no mistakes with the truncated normal):
     \exp ( M_{1} + S_{1} ^2 /2), and
     \exp ( m_{1} + s_{1} ^2 /2), with:
    M_{1} = \mu_{1} + 2^{-1/2} \sigma (1- \rho )^{1/2} \ast \lambda(\alpha)
    m_{1} = \mu_{1} - 2^{-1/2} \sigma (1- \rho )^{1/2} \ast \lambda(-\alpha)
    S_{1} ^2 = \sigma^2 [1-(1- \rho) \ast \delta(\alpha)/2]
    s_{1} ^2 = \sigma^2 [1-(1- \rho) \ast \delta(-\alpha)/2]
    where:
    \alpha = ( \mu_{2}- \mu_{1})/[2 \sigma ^{2} (1- \rho ) ]^{1/2}
\lambda is the hazard function of the standard normal (that is: \phi/(1-\Phi))
    \delta(x)=\lambda(x)[\lambda(x)-x]
Now the problem is clear. Conditioning on Y_{1} \geq Y_{2} increases the mean of the underlying normal but decreases its variance, and assessing the net effect is not easy.
The inequality clearly holds if \mu_{1}=\mu_{2} (that is: \alpha=0), but what if \mu_{1} \neq \mu_{2}?
Hey Mr. Fantastic, I need your help with those nasty lambda and delta functions. I am sure you can handle them.
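Pending an analytic answer, the net effect can at least be probed numerically; a sketch with \mu_1 \neq \mu_2 (the parameter values are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, sigma, rho = 0.0, 1.5, 1.0, 0.5   # deliberately mu1 != mu2
n = 2_000_000

cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
x1, x2 = rng.multivariate_normal([mu1, mu2], cov, size=n).T
y1, y2 = np.exp(x1), np.exp(x2)             # bivariate lognormal

mask = y1 >= y2
print("E(Y1)            ~", y1.mean())
print("E(Y1 | Y1 >= Y2) ~", y1[mask].mean())    # empirically stays >= E(Y1)
print("E(Y1 | Y1 <= Y2) ~", y1[~mask].mean())
```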

  5. #5
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
Quote Originally Posted by EconMax:
I know that if X and Y are independent, then E(X | X \geq Y) \geq E(X). Now I am trying to prove the same result for some correlated distributions. I have a very simple proof when (X,Y) is bivariate normal, but I am struggling to prove it for (X,Y) bivariate lognormal. Any suggestions?
    Hi,
what do you think of the following?

    Suppose (X,Y) is a Gaussian random vector, with common variance \sigma^2 and correlation \rho.

Then {\rm Cov}(X,Y-\rho X)={\rm Cov}(X,Y)-\rho{\rm Var}(X)=\rho\sigma^2-\rho\sigma^2=0, so that X and Y-\rho X are independent (since we are dealing with a Gaussian random vector). Notice that X\geq Y is equivalent to (1-\rho)X\geq Y-\rho X, so that we get:
    E[X|X\geq Y]=E[X|X\geq\frac{1}{1-\rho}(Y-\rho X)],
    and by the result in the case of independent random variables we deduce E[X|X\geq Y]\geq E[X].

Better still, this works similarly for lognormal random variables: e^X\geq e^Y is equivalent to e^{(1-\rho)X}\geq e^{Y-\rho X}, or to e^X \geq e^{(Y-\rho X)/(1-\rho)} (since \rho<1), and the random variables on the two sides are independent.
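A quick numerical illustration of this decorrelation trick (the parameter values are arbitrary choices of mine): Y - \rho X is uncorrelated with X, and the two conditioning events coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, rho = 1.0, 0.6
n = 1_000_000

cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([0.2, 0.7], cov, size=n).T

z = y - rho * x
print("corr(X, Y - rho X) ~", np.corrcoef(x, z)[0, 1])   # ~ 0
# {X >= Y} and {X >= Z/(1-rho)} are the same event (rho < 1):
print("event agreement    ~", np.mean((x >= y) == (x >= z / (1 - rho))))
```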

By the way, I'd be interested to see the full proof of the independent case (or a reference for it): how do you go from "for all y, E[X|X\geq y]\geq E[X|X<y]" to "E[X|X\geq Y]\geq E[X|X<Y]"?

  6. #6
    Newbie
    Joined
    Jan 2009
    Posts
    7
    Your argument for the normal is extremely elegant. I like it a lot.

I have not understood how to work out the lognormal case, however; I am really confused. The X and Y you wrote on those lines - should I read that X as X|X \geq Y and Y as X? Because if so, the argument would not work: X|X \geq Y and X have different variances, so you would need a different rescaling to make them independent (and then things might get complicated). I think I am missing something, sorry about that.

I attach an argument for the conditional mean under independence. It is not very nice, sorry; I think the proof is a little longer than it could be.
    Attachment 9904
By the way, I could not find a reference for that result in any book, so if you find one, I would appreciate it if you could let me know.

  7. #7
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
Quote Originally Posted by EconMax:
    Your argument for the normal is extremely elegant. I like it a lot.

I have not understood how to work out the lognormal case, however; I am really confused. The X and Y you wrote on those lines - should I read that X as X|X \geq Y and Y as X? Because if so, the argument would not work: X|X \geq Y and X have different variances, so you would need a different rescaling to make them independent (and then things might get complicated). I think I am missing something, sorry about that.

I attach an argument for the conditional mean under independence. It is not very nice, sorry; I think the proof is a little longer than it could be.
    Attachment 9904
By the way, I could not find a reference for that result in any book, so if you find one, I would appreciate it if you could let me know.
    Thanks for the proof, I'll read it right away.

For the log-normal, here's what I meant: (X,Y) is as before, while the bivariate log-normal is (\widetilde{X},\widetilde{Y})=(e^X,e^Y). As before, X and Y-\rho X are independent, so that \widetilde{X} and \widetilde{Z}=e^{Y-\rho X} are independent as well. In addition, \widetilde{X}\geq\widetilde{Y} iff \widetilde{X}\geq \widetilde{Z}^{\frac{1}{1-\rho}} (this is what I wrote, with different notation). As a consequence, E[\widetilde{X}|\widetilde{X}\geq \widetilde{Y}]=E[\widetilde{X}|\widetilde{X}\geq \widetilde{Z}^{\frac{1}{1-\rho}}]\geq E[\widetilde{X}] by the independent case.
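The same check in the lognormal scale (again with arbitrary parameters of my choosing): \widetilde{X} and \widetilde{Z} are independent, and the conditioning events match.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, sigma, rho = 0.0, 0.8, 1.0, 0.4
n = 1_000_000

cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([mu1, mu2], cov, size=n).T
xt, yt = np.exp(x), np.exp(y)        # bivariate lognormal
zt = np.exp(y - rho * x)             # independent of xt

# {Xt >= Yt} coincides with {Xt >= Zt^(1/(1-rho))}:
print("agreement        ~", np.mean((xt >= yt) == (xt >= zt ** (1 / (1 - rho)))))
print("E[Xt]            ~", xt.mean())
print("E[Xt | Xt >= Yt] ~", xt[xt >= yt].mean())   # >= E[Xt]
```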

  8. #8
    MHF Contributor

    Joined
    Aug 2008
    From
    Paris, France
    Posts
    1,174
By studying your proof, I found you can state it in a more natural way (or so I think). Here's how I would do it (in case this is of any interest to you).

    Aim: proving E[X|X\geq Y]\geq E[X] when X,Y are independent real random variables such that X is integrable.

    Lemma:
    for any given y\in\mathbb{R}, E[X|X\geq y]\geq E[X].

    This is equivalent to E[X{\bf 1}_{(X\geq y)}]\geq E[X]P(X\geq y). We consider two cases (and apply simple Markov-like inequalities):
    If y\geq E[X], then E[X{\bf 1}_{(X\geq y)}]\geq y P(X\geq y)\geq E[X]P(X\geq y).
If y<E[X], then E[X{\bf 1}_{(X\geq y)}]=E[X]-E[X{\bf 1}_{(X<y)}]\geq E[X]-yP(X<y) \geq E[X]-E[X]P(X<y)=E[X]P(X\geq y).
    This proves the lemma.

Now, it suffices to integrate the inequality E[X{\bf 1}_{(X\geq y)}]\geq E[X]P(X\geq y) with respect to P_Y(dy) (the distribution of Y). This gives E[X{\bf 1}_{(X\geq Y)}]\geq E[X]P(X\geq Y). Indeed, since X and Y are independent, we have E[f(X,Y)]=\int E[f(X,y)]P_Y(dy) for any function f such that f(X,Y) is integrable. (And the right-hand side writes E[X]P(X\geq y)=E[E[X]{\bf 1}_{(X\geq y)}], so this property applies.)
    Finally, the inequality rewrites as E[X|X\geq Y]\geq E[X]. End of the proof.
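The key inequality E[X{\bf 1}_{(X\geq y)}] \geq E[X]P(X\geq y) is easy to spot-check numerically for y on either side of E[X]; a sketch with an arbitrary gamma law (my choice, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.5, 1_000_000)    # arbitrary integrable X, E[X] = 3

for y in [-1.0, 0.5, 3.0, 8.0]:       # values below and above E[X]
    lhs = np.mean(x * (x >= y))       # E[X 1_{(X >= y)}]
    rhs = x.mean() * np.mean(x >= y)  # E[X] P(X >= y)
    print(f"y = {y:5}: {lhs:.4f} >= {rhs:.4f} : {lhs >= rhs}")
```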

  9. #9
    Newbie
    Joined
    Jan 2009
    Posts
    7
    Cool! Very good job.
I guess that your version of the proof for the independent case shows that I am neither a mathematician nor a statistician. In the proof that I posted, I had to write down the integrals to be sure the proof was correct, and admittedly I took a strange detour to get to the point. I must have been drunk. You, instead, went straight to the point and, in addition, your notation makes it extremely simple. (I should learn it and use it more often.)

In fact, it is even simpler, because you do not even need to prove the lemma (it is a well-known result; the econometrics book I am studying states it under the name of "truncation"). Basically, the result with independent variables is a generalization of truncation: it extends truncation to a non-degenerate Y that is independent of X. For non-independent variables, instead, the result does not necessarily hold (...in case you were curious): it is easy to write down counterexamples. This is why I was curious to check what happens for some families. (I have proved it also for the bivariate Frechet, Weibull, and Pareto.)

Your proof for the lognormal uses a trick similar to the one that I had used for the normal. Again, I was so dumb: I wanted to show it by direct calculation, but then I banged my head against the lambda and delta functions... too complicated for me to handle. Following your example, I have now proved it also using the functions U and V from my post #3.

    Well, if you visit Rome, one beer is on me. Ciao
