I know that if $X$ and $Y$ are independent, then $E(X \mid X > Y) \ge E(X)$. Now I am trying to prove the result for some correlated distributions. I have a very simple proof if $(X,Y)$ is bivariate normal, but I am struggling to prove it for the bivariate lognormal. Any suggestions?
Let me try to make the problem more "intriguing". The inequality does not hold in general (it is very easy to build counterexamples). BUT it always holds for independent variables. Here is a sketch of a proof:
For any variables $X$ and $Y$ it holds that $E(X \mid X > Y) = \dfrac{E(X\,\mathbf{1}_{\{X>Y\}})}{P(X>Y)}$.
Now consider that, for any constant $y$: $E(X \mid X > y) \ge E(X)$, that is, $E(X\,\mathbf{1}_{\{X>y\}}) \ge E(X)\,P(X>y)$.
By integrating with respect to the distribution of $Y$, and using independence, I get to the point.
(A rigorous proof is somewhat longer, but at least you see the way to get there)
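For anyone who wants to see the independent case numerically before going through a proof, here is a quick Monte Carlo sketch (the two distributions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent variables with rather different shapes
x = rng.exponential(scale=2.0, size=n)        # E[X] = 2
y = rng.normal(loc=1.0, scale=3.0, size=n)    # independent of X

print("E[X]         ≈", x.mean())
print("E[X | X > Y] ≈", x[x > y].mean())      # should not be smaller than E[X]
```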
Now can we prove my inequality for some families of correlated variables?
With the normal we can (a proof is coming right away)
Here is the second part.
If $(X,Y)$ is a bivariate normal distribution, with the means of $X$ and $Y$ equal to $\mu_X$ and $\mu_Y$, with the two variables having the same variance (equal to $\sigma^2$) and a correlation given by $\rho$, then it is easy to verify that $(U,V)$ with $U = X+Y$ and $V = X-Y$ is bivariate normally distributed and $U$ and $V$ are independent from each other (I find that this is so cool!). Then you can write: $E(X \mid X>Y) = E\big(\tfrac{U+V}{2} \mid V>0\big)$, as $\{X>Y\} = \{V>0\}$, and, using the properties of the truncated normal, you are done. The result is (if there are no typos):
$E(X \mid X>Y) = \mu_X + \tfrac{1}{2}\,\sigma_V\,\lambda\!\big(-\tfrac{\mu_V}{\sigma_V}\big)$
where $\mu_V = \mu_X - \mu_Y$ and $\sigma_V^2 = 2\sigma^2(1-\rho)$,
and $\lambda(z) = \dfrac{\phi(z)}{1-\Phi(z)}$ is the hazard function of the standard normal distribution
(so it follows that $E(X \mid X>Y) \ge \mu_X = E(X)$).
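Here is a quick Monte Carlo check of this closed form as written above (a sketch only; the parameter values are arbitrary and scipy is used for the hazard function):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2_000_000

mu_x, mu_y, sigma, rho = 1.0, 0.5, 2.0, 0.3
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=n)
x, y = xy[:, 0], xy[:, 1]

# V = X - Y has mean mu_x - mu_y and variance 2*sigma^2*(1 - rho)
mu_v = mu_x - mu_y
sigma_v = sigma * np.sqrt(2 * (1 - rho))

def hazard(z):
    """Hazard function of the standard normal: phi(z) / (1 - Phi(z))."""
    return norm.pdf(z) / norm.sf(z)

closed_form = mu_x + 0.5 * sigma_v * hazard(-mu_v / sigma_v)
print("Monte Carlo E[X | X > Y] ≈", x[x > y].mean())
print("Closed form E[X | X > Y] =", closed_form)
print("E[X]                     =", mu_x)
```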
So from here we get to my question, which was: is it possible that the result holds for the bivariate normal but not for the bivariate lognormal?
So here is what I have done so far with the lognormal.
If $X = e^W$, $Y = e^Z$, and $(W,Z)$ is bivariate normal ($\mu_W$, $\mu_Z$, $\sigma^2$, $\sigma^2$, $\rho$) - where I have taken the same variances, as in the previous post - then, by definition, $(X,Y)$ is a bivariate lognormal distribution. I want to show that $E(X \mid X>Y) \ge E(X)$. To do it, it is easier to show that:
$E(X \mid X>Y) \ge E(X \mid X \le Y)$
Now $X \mid X>Y$ and $X \mid X \le Y$ are both lognormal and their means are (if I have not made mistakes with the truncated normal):
$E(X \mid X>Y) = \exp\!\big(\mu_1 + \tfrac{1}{2}\,s_1^2\big)$, and
$E(X \mid X\le Y) = \exp\!\big(\mu_0 + \tfrac{1}{2}\,s_0^2\big)$, with:
$\mu_1 = \mu_W + \theta\,\lambda(\alpha)$, $\;\; s_1^2 = \sigma^2 - \theta^2\,\delta(\alpha)$, $\;\; \mu_0 = \mu_W - \theta\,\lambda(-\alpha)$, $\;\; s_0^2 = \sigma^2 - \theta^2\,\delta(-\alpha)$
where:
$\theta = \sigma\sqrt{\tfrac{1-\rho}{2}}$, $\;\; \alpha = -\dfrac{\mu_W-\mu_Z}{\sigma\sqrt{2(1-\rho)}}$, $\;\; \delta(z) = \lambda(z)\,\big(\lambda(z)-z\big)$, and $\lambda$ is the hazard function of the standard normal (that is: $\lambda(z) = \dfrac{\phi(z)}{1-\Phi(z)}$)
Now the problem is clear. With the conditioning the mean of the underlying normal increases, but the variance decreases, and assessing the net effect is not easy.
The inequality clearly holds if $\mu_W \ge \mu_Z$ (that is: $\alpha \le 0$, so that $\delta(\alpha) \le \delta(-\alpha)$ and the variance term also works in the right direction), but what if $\mu_W < \mu_Z$??
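Since the net effect is hard to assess analytically, here is a small Monte Carlo probe of the bivariate lognormal case (only an illustration, not a proof; the parameters are my arbitrary choices, with $\mu_W < \mu_Z$ on purpose):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

def compare(mu_w, mu_z, sigma, rho):
    """Monte Carlo E[X | X > Y] vs E[X] for the bivariate lognormal."""
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    wz = rng.multivariate_normal([mu_w, mu_z], cov, size=n)
    x, y = np.exp(wz[:, 0]), np.exp(wz[:, 1])
    return x[x > y].mean(), x.mean()

for rho in (-0.9, -0.5, 0.0, 0.5, 0.9):
    cond, unc = compare(mu_w=0.0, mu_z=0.3, sigma=1.0, rho=rho)
    print(f"rho = {rho:+.1f}:  E[X | X > Y] ≈ {cond:.3f}   E[X] ≈ {unc:.3f}")
```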
Hey Mr. Fantastic, I need your help with those nasty $\lambda$ and $\delta$ functions. I am sure you can handle them.
Hi,
What do you think of the following?
Suppose $(X,Y)$ is a Gaussian random vector, with common variance $\sigma^2$ and correlation $\rho$.
Then $\mathrm{Cov}(X+Y,\,X-Y) = \mathrm{Var}(X) - \mathrm{Var}(Y) = 0$, so that $X+Y$ and $X-Y$ are independent (since we're dealing with a Gaussian r. vector). Notice that $\{X>Y\}$ is equivalent to $\{X-Y>0\}$, so that we get:
$E[X \mid X>Y] = \tfrac{1}{2}\,E[X+Y \mid X-Y>0] + \tfrac{1}{2}\,E[X-Y \mid X-Y>0] = \tfrac{1}{2}\,E[X+Y] + \tfrac{1}{2}\,E[X-Y \mid X-Y>0]$,
and by the result in the case of independent random variables we deduce $E[X \mid X>Y] \ge \tfrac{1}{2}\,E[X+Y] + \tfrac{1}{2}\,E[X-Y] = E[X]$.
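A short simulation illustrating the two facts used here, namely that $X+Y$ and $X-Y$ are uncorrelated (hence independent in the Gaussian case) and that conditioning on $\{X>Y\}$ leaves $E[X+Y]$ unchanged (a sketch with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000

mu_x, mu_y, sigma, rho = 1.0, 0.5, 2.0, -0.4
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=n)
x, y = xy[:, 0], xy[:, 1]
u, v = x + y, x - y                    # sum and difference

print("corr(U, V)   ≈", np.corrcoef(u, v)[0, 1])                      # ~ 0: equal variances
print("E[U | X > Y] ≈", u[v > 0].mean(), "  vs  E[U] ≈", u.mean())    # unchanged by conditioning
print("E[X | X > Y] ≈", x[v > 0].mean(), "  vs  E[X] ≈", x.mean())    # the inequality itself
```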
All the more, this works similarly for lognormal random variables: $\{e^X > e^Y\}$ is equivalent to $\{X > Y\}$, or to $\{e^{(X-Y)/2} > 1\}$ (since $e^X = e^{(X+Y)/2}\,e^{(X-Y)/2}$), and the two r.v. in this factorization are independent.
By the way, I'd be interested to see the full proof of the case of independent r.v. (or have a reference for that): how do you go from "for all $y$, $E[X \mid X>y] \ge E[X]$" to "$E[X \mid X>Y] \ge E[X]$"?
Your argument for the normal is extremely elegant. I like it a lot.
I have not understood how to work out the lognormal, however. I am really confused. The $X$ and $Y$ you wrote on those lines, do I have to read them as the lognormal variables themselves, that is as $X = e^W$ and $Y = e^Z$? Because if it is so, then the argument would not work, because $X+Y$ and $X-Y$ would have different variances, so you would need a different rescaling to make them independent (and I think things might get complicated). I think I am missing something, sorry about that.
I attach an argument for the conditional mean under independence. It is not very nice, sorry. I think the proof is a little longer than it could be.
Attachment 9904
By the way, I could not find a reference to that result in any book, so should you find one, I would appreciate it if you could let me know.
Thanks for the proof, I'll read it right away.
For the log-normal, here's what I meant: $(X,Y)$ is like before, while the bivariate log-normal is $(e^X, e^Y)$. Like before, $X+Y$ and $X-Y$ are independent, so that $e^{(X+Y)/2}$ and $e^{(X-Y)/2}$ are independent as well. In addition, $e^X > e^Y$ iff $e^{(X-Y)/2} > 1$ (this is what I wrote, with different notations). As a consequence, $E[e^X \mid e^X > e^Y] \ge E[e^X]$ by the independent case.
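The factorization can also be checked numerically (a sketch with arbitrary parameters): write $e^X = A\,B$ with $A = e^{(X+Y)/2}$ and $B = e^{(X-Y)/2}$ independent, and note that the event $\{e^X > e^Y\}$ only involves $B$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000

mu, sigma, rho = [0.2, 0.5], 1.0, -0.6
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal(mu, cov, size=n)     # the Gaussian vector (X, Y)
a = np.exp((xy[:, 0] + xy[:, 1]) / 2)             # A = e^{(X+Y)/2}
b = np.exp((xy[:, 0] - xy[:, 1]) / 2)             # B = e^{(X-Y)/2}
ex = a * b                                        # e^X, the first lognormal

sel = b > 1                                       # the event {e^X > e^Y}
print("E[e^X | e^X > e^Y] ≈", ex[sel].mean())
print("E[A] E[B | B > 1]  ≈", a.mean() * b[sel].mean())   # factorization under the conditioning
print("E[e^X]             ≈", ex.mean())
```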
By studying your proof, I found you can state it in a more natural way (or so I think). Here's how I would do it (in case this is of any interest to you).
Aim: proving $E[X \mid X>Y] \ge E[X]$ when $X, Y$ are independent real random variables such that $X$ is integrable.
Lemma: for any given $y$, $E[X\,\mathbf{1}_{\{X>y\}}] \ge E[X]\,P(X>y)$.
This is equivalent to $E[(X-E[X])\,\mathbf{1}_{\{X>y\}}] \ge 0$. We consider two cases (and apply simple Markov-like inequalities):
If $y \ge E[X]$, then $E[(X-E[X])\,\mathbf{1}_{\{X>y\}}] \ge (y-E[X])\,P(X>y) \ge 0$.
If $y < E[X]$, then $E[(X-E[X])\,\mathbf{1}_{\{X>y\}}] = -E[(X-E[X])\,\mathbf{1}_{\{X\le y\}}] \ge -(y-E[X])\,P(X\le y) \ge 0$.
This proves the lemma.
Now, it suffices to integrate the inequality of the lemma with respect to $P_Y$ (the distribution of $Y$). This gives $E[X\,\mathbf{1}_{\{X>Y\}}] \ge E[X]\,P(X>Y)$. Indeed, since $X$ and $Y$ are independent, we have $E[f(X,Y)] = \int E[f(X,y)]\,dP_Y(y)$ for any function $f$ such that $f(X,Y)$ is integrable. (And the right-hand side writes $E[X]\,P(X>Y) = \int E[X]\,P(X>y)\,dP_Y(y)$, so this property applies.)
Finally, the inequality $E[X\,\mathbf{1}_{\{X>Y\}}] \ge E[X]\,P(X>Y)$ rewrites as $E[X \mid X>Y] \ge E[X]$. End of the proof.
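A small numerical check of the lemma, just to see it at work (the distribution of $X$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.gamma(shape=2.0, scale=1.5, size=2_000_000)   # any integrable X will do; E[X] = 3

for y in (-1.0, 0.5, 3.0, 10.0):                      # thresholds on both sides of E[X]
    lhs = np.mean(x * (x > y))                        # E[X 1_{X>y}]
    rhs = x.mean() * np.mean(x > y)                   # E[X] P(X > y)
    print(f"y = {y:5.1f}:  E[X 1(X>y)] = {lhs:.4f}  >=  E[X] P(X>y) = {rhs:.4f}")
```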
Cool! Very good job.
I guess that your version of the proof for the independence case shows that I am not a mathematician, nor a statistician. In the proof that I posted I had to write down the integrals to be sure that the proof was correct, and admittedly I took a strange detour to get to the point. I must have been drunk. Instead, you went straight to the point and, in addition, your notation makes it extremely simple. (I should learn it and use it more often.)
In fact, it is even simpler because you do not even need to prove the lemma (it is a well-known result; in the econometrics book that I am studying it appears under the name of "truncation"). Basically, the result for independent variables is a generalization of truncation: it extends truncation to a non-degenerate $Y$ that is nevertheless independent of $X$. For non-independent variables, instead, the result does not necessarily hold (... if you were curious to know it). It is easy to write down counterexamples. This is why I was curious to check what happens for some families. (I have also proved it for the bivariate Fréchet, Weibull, and Pareto.)
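To make the remark about counterexamples concrete, here is a minimal dependent example (a toy construction of mine): $Y$ is built from $X$ so that the event $\{X>Y\}$ selects exactly the low values of $X$, and the inequality fails.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

# X takes the values 0 and 10 with equal probability; Y is a function of X
# chosen so that {X > Y} happens exactly when X takes its LOW value.
x = rng.choice([0.0, 10.0], size=n)
y = np.where(x == 0.0, -1.0, 11.0)

print("E[X]         =", x.mean())            # ~ 5
print("E[X | X > Y] =", x[x > y].mean())     # = 0 < E[X]
```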
Your proof for the lognormal uses a trick similar to the one that I had used for the normal. Again, I was so dumb because I wanted to show it by making calculations, but then I crashed my head against the functions $\lambda$ and $\delta$ ... too complicated for me to handle. Following your example, I have now proved it also using the $U$ and $V$ that I had used in my post n. 3.
Well, if you visit Rome, one beer is on me. Ciao