Originally Posted by **Laurent**

What is your definition of conditional expectation? Intuitively, the conditional expectation of $\displaystyle Y$ given $\displaystyle X$ is the function of $\displaystyle X$ that is "closest" to $\displaystyle Y$. In the case of $\displaystyle E[g(X)|X]$, $\displaystyle g(X)$ is itself a function of $\displaystyle X$, and it is of course the one "closest" to $\displaystyle g(X)$, so $\displaystyle E[g(X)|X]=g(X)$. The formal proof shouldn't be difficult, whatever definition you have; give us your definition if you want a more precise solution. (In fact there is only one definition, but you may know a version relative to a particular case, which is why I don't give the general answer yet.)
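Here is a quick numerical illustration of that intuition (the sample space `{1, 2, 3}` and the choice $g(x)=x^2$ are mine, picked just for the demo): conditioning on $X=x$ and averaging $g(X)$ over that event gives back exactly $g(x)$, since $g(X)$ is constant on the event.

```python
import random

random.seed(0)

# X sampled from {1, 2, 3}; g(x) = x**2 (both choices are mine, for illustration).
xs = [random.choice([1, 2, 3]) for _ in range(10_000)]
g = lambda x: x ** 2

# Empirical E[g(X) | X = x]: average g(X) over the samples with X = x.
# g(X) is constant on the event {X = x}, so the average is exactly g(x).
cond_exp = {x: sum(g(v) for v in xs if v == x) / xs.count(x)
            for x in sorted(set(xs))}
print(cond_exp)  # {1: 1.0, 2: 4.0, 3: 9.0}
```

There is no Monte Carlo noise here at all: every sample in the group $\{X=x\}$ contributes the same value $g(x)$, which is the whole point of $E[g(X)|X]=g(X)$.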

The usual integration by parts works if you can differentiate $\displaystyle F$, which requires the distribution of $\displaystyle X$ to have a density.

In fact there is a more general integration-by-parts formula for functions of bounded variation, but you probably haven't seen it.

Then for the general case, you can proceed as follows: write $\displaystyle 1-F(x)=P(X>x)=E[{\bf 1}_{\{X>x\}}]$ (the expectation is of an indicator function, which equals 1 on the event in the subscript and 0 otherwise), and then, by Fubini, $\displaystyle \int_0^\infty (1-F(x))dx=\int_0^\infty E[{\bf 1}_{\{X>x\}}] dx= E\left[\int_0^\infty {\bf 1}_{\{X>x\}} dx\right] = E[X]$ (the inner integrand is 1 when $\displaystyle 0<x<X$ and 0 otherwise).
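You can sanity-check the tail formula numerically. Here is a minimal sketch (the exponential distribution with mean `mu` and the cutoff at 50 are my own choices): for an exponential, $1-F(x)=e^{-x/\mu}$, so the integral of the tail should come out to $E[X]=\mu$.

```python
import math

# Exponential with mean mu (my example): 1 - F(x) = exp(-x / mu) for x >= 0.
mu = 2.0
tail = lambda x: math.exp(-x / mu)

# Riemann sum of the tail on [0, 50]; the remainder beyond 50 is negligible.
dx = 1e-3
integral = sum(tail(k * dx) for k in range(int(50 / dx))) * dx

print(integral)  # approximately E[X] = mu = 2.0
```

Any other nonnegative distribution with a known mean works the same way; only the `tail` function changes.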

Remark: if $\displaystyle X$ is discrete, then $\displaystyle F$ has "steps", so the integral can be rewritten as a sum.
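Concretely, for a nonnegative integer-valued $X$ the tail $1-F$ is constant on each step $[k,k+1)$, so the integral collapses to $\displaystyle E[X]=\sum_{k\ge 0}P(X>k)$. A small exact check (the fair six-sided die is my example):

```python
from fractions import Fraction

# Fair six-sided die (my example): X in {1, ..., 6}, each with probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

mean = sum(k * p for k, p in pmf.items())  # E[X] = 7/2

# F has a step at each integer, so the integral of 1 - F becomes a sum:
# E[X] = sum over k >= 0 of P(X > k).
tail_sum = sum(sum(p for j, p in pmf.items() if j > k)
               for k in range(0, 6))

print(mean, tail_sum)  # 7/2 7/2
```

Using `Fraction` keeps the arithmetic exact, so the two quantities agree identically rather than up to rounding.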

Other remark: you wondered whether $\displaystyle x(1-F(x))\to 0$ when $\displaystyle x\to\infty$. This is true if $\displaystyle E[X]<\infty$. Indeed, we have $\displaystyle x (1-F(x))=xP(X>x)\leq E[X\, {\bf 1}_{\{X>x\}}]$ (a slightly refined form of Markov's inequality), and by the dominated convergence theorem (the integrand is dominated by the integrable variable $\displaystyle X$) the right-hand side converges to 0.
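To see both sides of that condition, here is a small sketch with a Pareto-type tail (my own illustrative choice, not from the question): $P(X>x)=x^{-a}$ for $x\ge 1$. When $a=2$ the mean is finite and $x\,P(X>x)=1/x\to 0$; when $a=1$ the mean is infinite and $x\,P(X>x)$ stays equal to 1, so the decay genuinely fails.

```python
# Pareto-type tail P(X > x) = x**(-a) for x >= 1 (my own illustrative choice).
# For a = 2 the mean is finite and x * P(X > x) = 1/x -> 0;
# for a = 1 the mean is infinite and x * P(X > x) stays at 1 for all x.
results = {a: [x * x ** (-a) for x in (10, 100, 1000)] for a in (2, 1)}
print(results[2])  # shrinks toward 0
print(results[1])  # stays at 1
```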