Conditional probability density function, practical example. Is it right?

We have 2 independent random variables X and Y.

X has probability density function f1(X).

Y has probability density function f2(Y).

Z = X + Y is the sum of both variables.

What is the probability density function of X given Z, i.e. f1(X|Z)?

Here is what I propose.

Here is the probability density function of the sum of X and Y (a convolution):

f(X+Y) = f(Z) = f(z) = integral { - inf to + inf } f1(x) f2(z - x) dx
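As a numerical sanity check (my own illustration, not part of the proof): for two independent standard normal variables the sum is N(0, 2), so the convolution integral can be evaluated numerically and compared against the known closed form.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """PDF of N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def sum_pdf(z, f1, f2, lo=-10.0, hi=10.0, n=2000):
    """Approximate f(z) = integral f1(x) * f2(z - x) dx with the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f1(x) * f2(z - x)
    return total * h

# X, Y ~ N(0, 1) independent, so Z = X + Y ~ N(0, 2).
z = 0.7
approx = sum_pdf(z, normal_pdf, normal_pdf)
exact = normal_pdf(z, sigma=math.sqrt(2.0))
print(approx, exact)  # the two values agree closely
```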

P(A|B) = (P(A) P(B|A)) / P(B) (Bayes theorem)

f1(x|z) dx = (f1(x) dx f2(z-x) dz) / (f(z) dz)

f1(x|z) = (f1(x) f2(z-x)) / f(z)
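For what it's worth, the proposed formula checks out numerically in the Gaussian case (my own check, not part of the question): with X, Y independent standard normals, f1(x) f2(z-x) / f(z) equals the known conditional density N(z/2, 1/2), and it integrates to 1 over x for fixed z.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def conditional_pdf(x, z):
    """Proposed result f1(x|z) = f1(x) f2(z - x) / f(z), with f1 = f2 = N(0, 1)
    and f(z) the exact convolution, i.e. the N(0, 2) density."""
    return normal_pdf(x) * normal_pdf(z - x) / normal_pdf(z, sigma=math.sqrt(2.0))

z = 3.0

# Pointwise: matches the known conditional law X | Z = z ~ N(z/2, 1/2).
x = 1.2
print(conditional_pdf(x, z), normal_pdf(x, mu=z / 2.0, sigma=math.sqrt(0.5)))

# And it integrates to 1 over x for fixed z (trapezoid rule).
lo, hi, n = -10.0, 10.0, 2000
h = (hi - lo) / n
area = sum((0.5 if i in (0, n) else 1.0) * conditional_pdf(lo + i * h, z)
           for i in range(n + 1)) * h
print(area)  # ~= 1.0
```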

Is it the right result?

Is my proof sufficiently rigorous?

Re: Conditional probability density function, practical example. Is it right?

Hey Baillya.

One suggestion I have for you: when you write conditional probabilities or conditional moments, specify your conditioning variable in terms of something specific.

So instead of f(X|Z), you need a specific subset of Z, whether a specific value like Z = z or some custom subset like Z < 0 or (Z > 4 AND Z < 5) or something else. Without this information, it is essentially just a normal joint distribution and not really something that is conditional per se.

So for your f(Z): when you condition on a known subset of Z, the conditioning probability is something explicit (i.e. you can evaluate it to get a probability between 0 and 1 inclusive). It is used to eliminate the variable z, giving a distribution for x constrained to some subset of z, which should yield a PDF involving only x.

To change this, let's change the f(Z) to f(Z = z), where z is some subset of the whole space associated with the random variable Z. If this subset is a simple region, you represent it with an integral (or a summation in the discrete case) that can be evaluated to a number, and handle the other marginal distribution for f(X|Z = z) accordingly.

It is a subtle point, but it's important: each distribution has some kind of variation, and the minute you go from many degrees of variation to fewer, the PDF has to reflect this. If it doesn't, you will probably screw up the rest of the analysis.

Every time you condition a random variable, you are looking at a specific subset of that random variable given some condition, and this becomes a new distribution in its own right. It's like drawing a Venn diagram with a big circle and a smaller circle inside it: the large circle is unconstrained (or less constrained) and the small circle is constrained, representing your conditional distribution.

Re: Conditional probability density function, practical example. Is it right?

OK Chiro, thanks for your comment. I should not write f1(X|Z). I should write f1(x|z) or f1(x|z=z1). But what about my question?

Re: Conditional probability density function, practical example. Is it right?

One thing is that you only have functions: no infinitesimals like dx or dz.

The equations look right but again with conditional probabilities, you need to specify some kind of "condition" for the variables to the right of the "|" (i.e. the pipe) sign.

I have a feeling that you are not quite understanding the probability properly since you are adding dx's and dz's into the mix.

The probability density of a continuous variable provides a "height" for a given value and satisfies the conditions of a PDF, but to get an actual probability you need to calculate over a non-zero interval.

The dx's and dz's are just used to add up the probabilities as if they were rectangles with a width approaching zero exactly like the area of a curved shape is done by using integration.

But density functions are just ordinary functions, not integrals: you are looking for the PDF of a particular conditional random variable, which is just a function. If you want a probability, then you integrate over the appropriate region for a continuous random variable, or sum over the correct region for a discrete one.

But the PDF is just a function with no dx's or dz's since you are not integrating anything.

So when you are talking about conditional probabilities of P(X|Z) or P(Z|X), try and make sure you talk about specific regions for the variables being conditioned on: it doesn't make sense otherwise and it will confuse you even more when you go to do analysis. Remember that in conditioning, we know about the variables being conditioned and this creates a simpler random variable that is more constrained.

So for P(X|Z) you have P(X|Z = z_region) and for P(Z|X) you have P(Z|X = x_region), where x_region is some set of values belonging to the random variable X, and likewise for z_region belonging to Z.

If z_region is only one value we typically write something like P(X|Z=a), but if it's arbitrary then z_region becomes an arbitrary set.
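A tiny discrete illustration of conditioning on a single value (my own example, two fair dice, so everything can be enumerated):

```python
from fractions import Fraction

# X, Y: two independent fair dice; Z = X + Y. Condition on the single value Z = 7.
outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]
given_z7 = [(x, y) for (x, y) in outcomes if x + y == 7]

p_z7 = Fraction(len(given_z7), len(outcomes))  # P(Z = 7) = 6/36 = 1/6
p_x3_given_z7 = Fraction(sum(1 for (x, _) in given_z7 if x == 3), len(given_z7))

print(p_z7, p_x3_given_z7)  # 1/6 and 1/6: given Z = 7, every x in 1..6 is equally likely
```

This is the discrete analogue of the formula under discussion: P(X=3|Z=7) = P(X=3) P(Y=4) / P(Z=7) = (1/6)(1/6)/(1/6) = 1/6.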

Re: Conditional probability density function, practical example. Is it right?

Thanks Chiro,

I can give a specific condition at the right side of the "|". For example z = z1 = 1000. If f1 and f2 are symmetric (about 0), then the mean of f1 is 0 and the mean of f2 is 0. Intuitively the mean of f1(x|z=z1=1000) should be around 500. I think that the conditional probability density function f1(x|z=z1=1000) makes sense without any further precision.
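That intuition (conditional mean near z/2 when f1 = f2 and both are symmetric) can be spot-checked by simulation. A sketch of my own, with standard normals and z = 2 rather than 1000, since rejection sampling near z = 1000 would be hopeless:

```python
import random

random.seed(42)

# X, Y ~ N(0, 1) i.i.d.; keep samples whose sum lands near z_target,
# approximating the conditional law of X given Z = z_target.
z_target, eps = 2.0, 0.05
xs = []
while len(xs) < 2000:
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    if abs(x + y - z_target) < eps:
        xs.append(x)

cond_mean = sum(xs) / len(xs)
print(cond_mean)  # close to z_target / 2 = 1.0 (exact conditional law is N(1, 1/2))
```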

How can I prove that f1(x|z) = (f1(x) f2(z-x)) / f(z) without infinitesimals like dx or dz?

Re: Conditional probability density function, practical example. Is it right?

The key thing to remember is that you are looking at a PDF, not any kind of summation or integral over some set.

Recall that when you do Integral (a to b) f(x)dx, if f(x) is a PDF then you are finding the probability that the random variable X is between a and b.

The PDF however is not the integral: you are looking at two completely different things.

If a probability is constant (and not a function of some variable), that's a special situation: in general a probability can be a function of other variables, of which a constant is a very special case.

Calculating actual probabilities is when you actually do a summation (for a discrete variable) or an integral (for a continuous variable). P(R) on its own is just a PDF; it does not give a probability unless you constrain R to some specific subset of the domain of the random variable.

Remember how I said that for P(X|Z) you need to specify a region for Z so that you get something representing only one random variable: well, to get a probability for a specific subset of this new random variable, you also have to give a subset over which to calculate an actual probability.

So I discussed P(X|Z = z_region), and this reduces to a univariate random variable (call it R) with a PDF given by P(X|Z = z), where x becomes r once everything is simplified (this is just to make the notation easier).

Now if you want to calculate an actual probability, you need to have a region for this new random variable R (call it r_region), and the probability is P(R is in r_region); if R is a continuous random variable, you integrate over r_region with the right limits. For example, if r_region is everything between r = 0 and r = 1, then the probability is P(0 < R < 1), and this is Integral (0 to 1) f_R(r) dr if R is a one-dimensional random variable.
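For instance (my own numbers, taking R to be standard normal): P(0 < R < 1) comes out of exactly that integral.

```python
import math

def normal_pdf(x):
    """PDF of the standard normal, standing in for f_R."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# P(0 < R < 1) = Integral (0 to 1) f_R(r) dr, approximated with the trapezoid rule.
a, b, n = 0.0, 1.0, 1000
h = (b - a) / n
prob = sum((0.5 if i in (0, n) else 1.0) * normal_pdf(a + i * h) for i in range(n + 1)) * h
print(prob)  # about 0.3413 for the standard normal
```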

There is a big difference between P(X|Z=z_region) and P(X=x_region|Z=z_region).

Re: Conditional probability density function, practical example. Is it right?

Dear Chiro,

I posted the problem on more than one forum.

It seems that the solution is a trivial use of Bayes' theorem for random variables (see the Wikipedia article on Bayes' theorem).

You then immediately get back the solution that I proposed.

I wonder if your obscure phraseology is worth much, but anyway thanks for the effort.