# Thread: Problems with multiple variables

1. Problems with multiple variables

Let X1, X2, X3, be random variables (assume independence) with values 1 and 3 and have probability 1/2

Suppose Y1 = X2 x X3 + X1 and Y2 = X1 x X2 - X3

Calculate the joint probability density function f(y1, y2), also the mean, variance and covariance, and the marginal pdfs of Y1 and Y2;
also the conditional distribution and conditional mean of Y1 given Y2 = 0.

It's the first part that I struggle with the most; I'm just really not sure what it's supposed to mean. Do all the variables X1, X2, X3 have probability 1/2? Does that mean Y1 = 1 and Y2 = -0.25?
Also, what does the 'with values 1 and 3' bit mean?
I don't know what to make of the joint p.d.f. either, f(x,y)(xj, yj) = P(X = xj, Y = yj), and the marginal pdf looks scary from here, but I'm sure it will make more sense once I get a handle on things.

2. Originally Posted by SilenceInShadows
Let X1, X2, X3, be random variables (assume independence) with values 1 and 3 and have probability 1/2
[snip]
My interpretation would be that $\displaystyle \Pr(X_i = 1) = \frac{1}{2}$ and $\displaystyle \Pr(X_i = 3) = \frac{1}{2}$ for i = 1, 2, 3.

3. I think Mr F. is exactly right. The wording is poor, but I think there is really only one interpretation.

Let X1, X2, X3, be random variables (assume independence) with values 1 and 3 and have probability 1/2
Another way of stating it might be something like this.
Let X1, X2, X3, be random variables (assume independence). Each one can take on values of either 1 or 3 with equal probability (i.e. 1/2).

Is that better?

As far as the joint PDF etc. is concerned, probably the easiest way to go on this is to make a table with all the possible outcomes of $\displaystyle X_1,X_2,X_3,p,Y_1,Y_2$, where $\displaystyle p$ is the probability of the combination of $\displaystyle X_1,X_2,X_3$.

For example, the first row might be $\displaystyle 1,1,1,(1/2)^3,2,0$. If you fill out the table completely (it should have $\displaystyle 2^3$ rows), then you can add up the probabilities $\displaystyle p$ where $\displaystyle Y_1=A\text{ and }Y_2=B$ for all possible combinations of $\displaystyle A\text{ and }B$ that you see in the $\displaystyle Y_1,Y_2$ columns.
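If you want to sanity-check the table by machine, a minimal Python sketch of the same enumeration (my own illustration, using only the definitions of Y1 and Y2 from the problem) might look like this:

```python
from itertools import product

# Enumerate all 2^3 equally likely outcomes of (X1, X2, X3)
# and compute p, Y1 and Y2 for each row of the table.
rows = []
for x1, x2, x3 in product([1, 3], repeat=3):
    p = (1 / 2) ** 3              # each combination has probability 1/8
    y1 = x2 * x3 + x1
    y2 = x1 * x2 - x3
    rows.append((x1, x2, x3, p, y1, y2))

for row in rows:
    print(row)
```

The first printed row is `(1, 1, 1, 0.125, 2, 0)`, matching the example row above.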

4. That's more helpful, yes, thank you. How do I go about applying that to the question? I don't understand how to apply the information to the formulas that describe the joint pdf etc...

It also tells me to display my answer from the joint pdf in a table.

5. I edited my previous post with the info that you are asking for, I think.

6. Am I to put the possible combinations of 1 and 3 in the first 3 columns, as the rows so to speak? The probability will stay fixed but the values of Y1 and Y2 will change?

X1 X2 X3 Y1 Y2
1 1 1 2 0
1 1 3 4 -2
1 3 1 4 2
1 3 3 10 0
3 1 1 4 2
3 1 3 6 0
3 3 1 6 8
3 3 3 12 6

OK, so that is what I make the table to be. Grouping the same (Y1, Y2) values together gives:

Y1 Y2 p
2 0 0.125
4 -2 0.125
10 0 0.125
4 2 0.25
6 0 0.125
6 8 0.125
12 6 0.125

Would that be an answer?

7. Yep!

8. Ok, the next step is the marginal pdf for Y1 and Y2, which is given by this scary formula:
P(Y1 = y1i) = P(Y1 = y1i, -infinity < Y2 < infinity) = Sum over all y2i of P(Y1 = y1i, Y2 = y2i)
(I really must learn that maths type font thing...)

Erm, yes, I really don't get the infinities thing. Is it asking to sum all possible values of Y2 and calculate their probability or something?

9. Originally Posted by SilenceInShadows
Ok, the next step is the marginal pdf for Y1 and Y2, which is given by this scary formula:
P(Y1 = y1i) = P(Y1 = y1i, -infinity < Y2 < infinity) = Sum over all y2i of P(Y1 = y1i, Y2 = y2i)
(I really must learn that maths type font thing...)

Erm, yes, I really don't get the infinities thing. Is it asking to sum all possible values of Y2 and calculate their probability or something?
Don't be afraid of infinity; it's just an 8 that fell over. OK, bad joke. Basically, when it appears as a bound (upper or lower) in a sum or integral, it just means unbounded.

Another way of writing the same thing is:

$\displaystyle \mathbf{P}(Y_1 = y_1) = \sum_{y_{2i}\in \mathbf{Y_2}}\mathbf{P}(Y_1 = y_1 \text{ and }Y_2 = y_{2i})$ where $\displaystyle \mathbf{Y_2}$ is the set of all possible outcomes of $\displaystyle Y_2$.

Another way of saying it: to find $\displaystyle \mathbf{P}(Y_1 = y_1)$, all you have to do is add up all the probabilities of the outcomes from the joint PDF in which $\displaystyle Y_1 = y_1$.
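That summation can be sketched in a few lines of Python (an illustration of the formula, not anything from the thread): build the joint PDF, then add probabilities over all Y2 values for each fixed Y1.

```python
from itertools import product
from collections import defaultdict

# Build the joint PDF of (Y1, Y2) from the eight equally likely outcomes,
# then marginalise: sum over all Y2 values for each fixed Y1.
joint = defaultdict(float)
for x1, x2, x3 in product([1, 3], repeat=3):
    joint[(x2 * x3 + x1, x1 * x2 - x3)] += (1 / 2) ** 3

marginal_y1 = defaultdict(float)
for (y1, y2), p in joint.items():
    marginal_y1[y1] += p

print(dict(marginal_y1))   # e.g. Pr(Y1 = 4) = 0.125 + 0.25 = 0.375
```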

By the way, to see the code that created these formulas, all you have to do is click on one. It's basically just LaTeX.

10. I'm still quite confused about what it's asking me to do; if somebody could take the time to explain further, that would be much appreciated.

Also, if anybody can recommend a book, or link me to any worked-through examples, that would be so amazingly helpful.

11. $\displaystyle \mathbf{P}(Y_1 = y_1) = \sum_{y_{2i}\in \mathbf{Y_2}}\mathbf{P}(Y_1 = y_1 \text{ and }Y_2 = y_{2i})$

First, a comment about marginal distributions. Really, what this is saying is that you (or your teacher or book) want to calculate a probability distribution for a random variable, and you are to calculate it from a joint PDF. That's it. You just want to calculate the PDF for, in this case, $\displaystyle Y_1$ (also for $\displaystyle Y_2$).
You calculated the joint pdf in the table:
Y1 Y2 p
2 0 0.125
4 -2 0.125
10 0 0.125
4 2 0.25
6 0 0.125
6 8 0.125
12 6 0.125

Let's start with the marginal PDF of $\displaystyle Y_1$
So what are the outcomes of $\displaystyle Y_1$? Ans: 2,4,6,10,12

$\displaystyle \Pr(Y_1=2) = 0.125$
$\displaystyle \Pr(Y_1=4) = \Pr(Y_1=4\text{ and }Y_2=-2)+\Pr(Y_1=4\text{ and }Y_2=2)=0.125+0.25 = 0.375$

and so on. You get the following table:

$\displaystyle Y_1\ |\ \Pr(Y_1=y_1)$
2, 0.125
4, 0.375
6, 0.25
10, 0.125
12, 0.125

Try doing the $\displaystyle Y_2$ case. Make better sense?

Now the conditional distributions are a little different. Conditionals say: given you know that $\displaystyle Y_2=y_2$, what is the distribution of $\displaystyle Y_1$? This is the conditional distribution of $\displaystyle Y_1$ conditioned on $\displaystyle Y_2$. More specifically,

$\displaystyle \Pr(Y_1=y_1|Y_2=y_2)=\Pr(Y_1=y_1\text{ and }Y_2=y_2)/\Pr(Y_2=y_2)\hspace{1cm}(1)$

Here are the columns for the table:

$\displaystyle Y_2\ |\ \Pr(Y_1=2|Y_2)\ |\ \Pr(Y_1=4|Y_2)\ |\ \Pr(Y_1=6|Y_2)\ |\ \Pr(Y_1=10|Y_2)\ |\ \Pr(Y_1=12|Y_2)$

Let's do two rows in detail:
For $\displaystyle Y_2=-2$: The denominator of (1) for the first row is $\displaystyle \Pr(Y_2 = -2) = 0.125$, which I got from the (marginal) distribution of $\displaystyle Y_2$. The $\displaystyle \Pr(Y_1 = 4 \text{ and } Y_2=-2) = 0.125$, with all the other possible values of $\displaystyle Y_1$ having probability 0. So $\displaystyle \Pr(Y_1=4|Y_2=-2)=0.125/0.125 = 1$, with the rest being 0.
-2, 0, 1, 0, 0, 0
Next row:
For $\displaystyle Y_2=0$: The denominator of (1) for this row is $\displaystyle \Pr(Y_2 = 0) = 0.375$ from the (marginal) distribution of $\displaystyle Y_2$. The $\displaystyle \Pr(Y_1 = 2 \text{ and } Y_2=0) = \Pr(Y_1 = 6 \text{ and } Y_2=0)=\Pr(Y_1 = 10 \text{ and } Y_2=0)=0.125$, with all the other possible values of $\displaystyle Y_1$ having probability 0. So $\displaystyle \Pr(Y_1=2|Y_2=0)=0.125/0.375 = 1/3$, and so on.

0, 1/3, 0, 1/3, 1/3, 0 [note that the sum of the probabilities is 1.]
2, 0, 1, 0, 0, 0
6, 0, 0, 0, 0, 1
8, 0, 0, 1, 0, 0

Step back a moment, and let's read the info you have tabulated.
First I ask, what is the probability that $\displaystyle Y_1=6$? From the (marginal) PDF you can answer 0.25. Now I say, suppose you know that $\displaystyle Y_2=0$, now what is the probability that $\displaystyle Y_1=6$? Here you look at the conditional table. Now the answer is 1/3. Notice it is different. Knowing something about $\displaystyle Y_2$ tells you something about $\displaystyle Y_1$.
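Formula (1) for the row $Y_2=0$ can also be sketched directly in Python (my own illustration, reusing the joint PDF from the problem):

```python
from itertools import product
from collections import defaultdict

# Rebuild the joint PDF of (Y1, Y2), then apply formula (1):
# Pr(Y1 = y1 | Y2 = 0) = Pr(Y1 = y1 and Y2 = 0) / Pr(Y2 = 0).
joint = defaultdict(float)
for x1, x2, x3 in product([1, 3], repeat=3):
    joint[(x2 * x3 + x1, x1 * x2 - x3)] += (1 / 2) ** 3

pr_y2_0 = sum(p for (y1, y2), p in joint.items() if y2 == 0)       # 0.375
cond = {y1: p / pr_y2_0 for (y1, y2), p in joint.items() if y2 == 0}
print(cond)   # Y1 = 2, 6 and 10 each get probability 1/3
```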

Make sense?

12. It seems to me that the mean of Y1 could be as simple as 6?
i.e.
(2+4+4+4+6+6+10+12)/8?
making the variance 10?
(4+3x16+2x36+100+144)/8 - 36?
Could someone confirm/correct me as to whether that is the right approach? Also, if the question asked me to calculate the covariance of Y1 and Y2, then do I need to calculate Cov[Y1, Y2], given by E[(Y1-E[y2])(Y2-E[y1])?
The only remaining parts of the question after this are the conditional mean of Y1 given Y2 = 0, and whether Y1 and Y2 are independent or not.

Many thanks, Silence

13. Originally Posted by SilenceInShadows
It seems to me that the mean of Y1 could be as simple as 6?
ie
(2+4+4+4+6+6+10+12)/8?
making the variance to be 10?
(4+3x16+2x36+100+144)/8 - 36?
could some one confirm/correct me as to whether that is the right approach? [snip]
Sorry, but this is completely wrong, because the possible values of Y1 have different probabilities of occurring ......

If you had a random variable A such that Pr(A = 0) = 0.001 and Pr(A = 10) = 0.999 would you calculate the mean of A to be (0 + 10)/2 = 5 .....?

You must have met the formula $\displaystyle E(X) = \sum_{i=1}^n x_i \, \Pr(X = x_i) \, ....$

It is worrying that you would think this at this level. I think you would be wise to thoroughly review all of the basic definitions and concepts.

Originally Posted by SilenceInShadows
[snip]
Also if the question asked me to calcualte the covariance of Y1 and Y2 then do I need to calcualte Cov[Y1, Y2] given by E[(Y1-E[y2])(Y2-E[y1]) ?
[snip]
The correct formula is $\displaystyle \text{Cov} (Y_1 , Y_2) = E[(Y_1-E[{\color{red}Y_1}]) \, (Y_2-E[{\color{red}Y_2}])]$.

Since this is one of the well known formulas for calculating Cov[Y1, Y2] and since your previous calculations should have given you all of the data required for its use, .......

Nevertheless, I think you'll find the alternative formula $\displaystyle \text{Cov} (Y_1 , Y_2) = E(Y_1 Y_2) - E(Y_1) E(Y_2)$ more computationally efficient.
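Both covariance formulas can be checked against each other with a short Python sketch (an illustration, not from the thread), using the eight equally likely outcomes:

```python
from itertools import product

# Each (Y1, Y2) outcome carries probability 1/8.
outcomes = [(x2 * x3 + x1, x1 * x2 - x3, (1 / 2) ** 3)
            for x1, x2, x3 in product([1, 3], repeat=3)]

e_y1 = sum(y1 * p for y1, y2, p in outcomes)       # E(Y1) = 6.0
e_y2 = sum(y2 * p for y1, y2, p in outcomes)       # E(Y2) = 2.0
e_y1y2 = sum(y1 * y2 * p for y1, y2, p in outcomes)

# Definition: E[(Y1 - E[Y1])(Y2 - E[Y2])]
cov_def = sum((y1 - e_y1) * (y2 - e_y2) * p for y1, y2, p in outcomes)
# Shortcut: E(Y1 Y2) - E(Y1) E(Y2)
cov_alt = e_y1y2 - e_y1 * e_y2

print(cov_def, cov_alt)   # both give 4.0
```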

Originally Posted by SilenceInShadows
[snip]
The only remaining parts of the question after this is the conditional mean of Y1 given Y2=0 and if Y1 and y2 are independent or not.
Using the data provided by your previous calculations, I think you should be able to answer these two questions yourself after a thorough review of the relevant basic concepts and definitions.

14. $\displaystyle E(X) = \sum_{i=1}^n x_i \, \Pr(X = x_i) \, ....$

Using the above table:

2, 0.125
4, 0.375
6, 0.25
10, 0.125
12, 0.125

so, I would make that to be:

2x0.125+4x0.375+6x0.25+10x0.125+12x0.125?
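As a quick check (a sketch of my own, using the marginal table above), that sum can be evaluated directly:

```python
# Marginal PDF of Y1 from the table above.
marginal_y1 = {2: 0.125, 4: 0.375, 6: 0.25, 10: 0.125, 12: 0.125}

# E(Y1) = sum of y * Pr(Y1 = y) over all outcomes y.
e_y1 = sum(y * p for y, p in marginal_y1.items())
print(e_y1)   # 6.0
```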

Ok, so moving on to independent/dependent...

Events Y1, Y2 are independent iff:

P(Y1 intersection Y2) = P(Y1)P(Y2)
I think that they are therefore dependent, as there are no events where two numbers appear at the same time. Therefore the intersection of Y1 and Y2 is 0?

15. Originally Posted by SilenceInShadows
$\displaystyle E(X) = \sum_{i=1}^n x_i \, \Pr(X = x_i) \, ....$

Using the above table:

2, 0.125
4, 0.375
6, 0.25
10, 0.125
12, 0.125

so, I would make that to be:

2x0.125+4x0.375+6x0.25+10x0.125+12x0.125?

Mr F says: Correct.

Ok, so moving on to independent/dependent...

Events Y1,Y2 are independent iff:

P(Y1 intersection Y2) = P(Y1)P(Y2)
I think that they are therefore dependent as there are no events where 2 number appear in at the same time. Therefore the interestion of Y1 and Y2 is 0?
To prove dependence on the basis of calculations you've already done, it's sufficient to show that Pr(Y1 = y1 | Y2 = y2) is different from Pr(Y1 = y1) ....
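That check can be sketched in Python (my own illustration): compare Pr(Y1 = 2 | Y2 = 0) with the unconditional Pr(Y1 = 2).

```python
from itertools import product
from collections import defaultdict

# Joint PDF of (Y1, Y2) from the eight equally likely outcomes.
joint = defaultdict(float)
for x1, x2, x3 in product([1, 3], repeat=3):
    joint[(x2 * x3 + x1, x1 * x2 - x3)] += (1 / 2) ** 3

pr_y1_2 = sum(p for (y1, y2), p in joint.items() if y1 == 2)   # 0.125
pr_y2_0 = sum(p for (y1, y2), p in joint.items() if y2 == 0)   # 0.375
cond = joint[(2, 0)] / pr_y2_0                                 # 1/3

print(cond != pr_y1_2)   # True: 1/3 != 0.125, so Y1 and Y2 are dependent
```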