Thread: Two random variables equal in distribution?

1. Two random variables equal in distribution?

Let X1,X2,X3,Y1,Y2,Y3 be random variables.
If X1 and Y1 have the same distribution,
X2 and Y2 have the same distribution,
X3 and Y3 have the same distribution,
then is it true that X1+X2+X3 and Y1+Y2+Y3 will have the same distribution? Why or why not?

Any help is appreciated!

2. Hello,

I don't think it is. Maybe you can have a look at Gaussian vectors with covariance matrices that share the same diagonal but differ off the diagonal (and with the same expectation vector).

3. (X1,X2,X3) and (Y1,Y2,Y3) would have to have the same joint distribution.

4. Let's imagine the Gaussian vector $\displaystyle M=(X_1,X_2,X_3)$ with the expectation vector $\displaystyle \mu\in\mathbb{R}^3$ and the covariance matrix $\displaystyle K=\begin{pmatrix} a_{11}^2 & a_{12} & a_{13} \\ a_{12} & a_{22}^2 & a_{23} \\ a_{13} & a_{23} & a_{33}^2 \end{pmatrix}$

and then the Gaussian vector $\displaystyle N=(Y_1,Y_2,Y_3)$ with the expectation vector $\displaystyle \mu$ and the covariance matrix $\displaystyle L=\begin{pmatrix} a_{11}^2 & 3a_{12}+1 & a_{13} \\ 3a_{12}+1 & a_{22}^2 & a_{23} \\ a_{13} & a_{23} & a_{33}^2 \end{pmatrix}$

we clearly have $\displaystyle X_i\sim Y_i ~ \forall i \in\{1,2,3\}$ (provided the entries are chosen so that $\displaystyle L$ is still positive semi-definite, i.e. a valid covariance matrix), but since the covariance matrices are different, the pdf of M is different from the pdf of N...

Am I wrong somewhere?
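A quick numerical check of this idea (the specific matrices below are my own choice, picked to be positive semi-definite): two Gaussian vectors with identical means and identical diagonal entries but different off-diagonal covariances produce sums with different variances, since $\displaystyle \mathrm{Var}(X_1+X_2+X_3)=\sum_{i,j}K_{ij}$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(3)

# Two covariance matrices with identical diagonals (so identical marginals)
# but different off-diagonal entries; both are positive semi-definite.
K = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
L = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

M = rng.multivariate_normal(mu, K, size=200_000)  # samples of (X1, X2, X3)
N = rng.multivariate_normal(mu, L, size=200_000)  # samples of (Y1, Y2, Y3)

# Each marginal has variance 1 in both cases, but the sums differ:
# Var(sum) = sum of all covariance entries = 3 for K, 4 for L.
print(M.sum(axis=1).var())  # close to 3
print(N.sum(axis=1).var())  # close to 4
```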

5. Originally Posted by kingwinner
Let X1,X2,X3,Y1,Y2,Y3 be random variables.
If X1 and Y1 have the same distribution,
X2 and Y2 have the same distribution,
X3 and Y3 have the same distribution,
then is it true that X1+X2+X3 and Y1+Y2+Y3 will have the same distribution? Why or why not?

Any help is appreciated!
Let $\displaystyle X$ be a random variable with a symmetric non-trivial distribution, like $\displaystyle P(X=1)=P(X=-1)=1/2$ or a centered Gaussian.

Let $\displaystyle (X_1,X_2,X_3)=(X,X,X)$ and $\displaystyle (Y_1,Y_2,Y_3)=(X,-X,X)$. Then the hypothesis is fulfilled, while $\displaystyle X_1+X_2+X_3=3X$ and $\displaystyle Y_1+Y_2+Y_3=X$ don't have the same distribution.
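This counterexample can be checked exactly (a small sketch of the pmf computation, using the $\displaystyle P(X=1)=P(X=-1)=1/2$ example above):

```python
from fractions import Fraction

half = Fraction(1, 2)
# X takes values +1 and -1 with probability 1/2 each.
X_pmf = {1: half, -1: half}

# (X1, X2, X3) = (X, X, X): the sum is 3X.
sum_xxx = {3 * x: p for x, p in X_pmf.items()}   # {3: 1/2, -3: 1/2}

# (Y1, Y2, Y3) = (X, -X, X): the sum is X.  Note -X has the same
# distribution as X by symmetry, so each Yi matches the Xi in law.
sum_xmxx = dict(X_pmf)                           # {1: 1/2, -1: 1/2}

print(sum_xxx)   # supported on {3, -3}
print(sum_xmxx)  # supported on {1, -1}
```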

6. Originally Posted by Laurent
Let $\displaystyle X$ be a random variable with a symmetric non-trivial distribution, like $\displaystyle P(X=1)=P(X=-1)=1/2$ or a centered Gaussian.

Let $\displaystyle (X_1,X_2,X_3)=(X,X,X)$ and $\displaystyle (Y_1,Y_2,Y_3)=(X,-X,X)$. Then the hypothesis is fulfilled, while $\displaystyle X_1+X_2+X_3=3X$ and $\displaystyle Y_1+Y_2+Y_3=X$ don't have the same distribution.
I see!
But as matheagle suggested, if (X1,X2,X3) and (Y1,Y2,Y3) have the same JOINT distribution, then X1+X2+X3 and Y1+Y2+Y3 would have the same distribution, right?

7. Originally Posted by kingwinner
I see!
But as matheagle suggested, if (X1,X2,X3) and (Y1,Y2,Y3) have the same JOINT distribution, then X1+X2+X3 and Y1+Y2+Y3 would have the same distribution, right?
Right.

8. Originally Posted by Laurent
Right.
I see.
I am sorry... the following may seem very obvious to you, but to me it is not.

Statement 1:
X1 and Y1 have the same distribution, AND
X2 and Y2 have the same distribution.

Statement 2:
(X1,X2) and (Y1,Y2) have the same JOINT distribution.

Are statements 1 and 2 equivalent? If not, what is the difference between them? (please explain in the simplest terms if possible as I am only a 2nd year stat undergrad student)

I was never able to understand this, and I would really appreciate if you could clarify this concept.

9. Originally Posted by kingwinner
Statement 1: X1 and Y1 have the same distribution, AND X2 and Y2 have the same distribution.
Statement 2: (X1,X2) and (Y1,Y2) have the same JOINT distribution.
Are statements 1 and 2 equivalent? If not, what is the difference between them? [...]
My humble attempt at a clarification :

Plainly, the joint distribution of (X,Y) tells you $\displaystyle P(X\in A,Y\in B)$ for any (measurable) subsets A, B.
Then, if you take $\displaystyle B=\mathbb{R}$, it gives you $\displaystyle P(X\in A)$ for any subset A, which is the distribution of X. So, at least, when you know the joint distribution of (X,Y), you know the distributions of X and Y.
But you know much more. For instance, X and Y are independent iff $\displaystyle P(X\in A,Y\in B)=P(X\in A)P(Y\in B)$ for all A, B, and this condition only involves the joint distribution. So the joint distribution tells you whether X and Y are independent.
More generally, it contains the way the values of X and Y relate to each other. The very fact that X=Y (almost surely) can be read from the joint distribution, while it is not readable from the distributions of X and Y. For the same distribution $\displaystyle \mu$, there are many variables (X,Y) such that X and Y have distribution $\displaystyle \mu$; extreme cases are X=Y of law $\displaystyle \mu$, and X,Y independent of law $\displaystyle \mu$.
Maybe it will be clearer if you think that the joint distribution not only tells you the distribution of X but also the conditional distribution of X given Y: $\displaystyle P(X=k|Y=l)=\frac{P(X=k,Y=l)}{P(Y=l)}$ (the right-hand side depends only on the joint distribution).

A "visual" way: The joint distribution is a probability measure on $\displaystyle \mathbb{R}^2$ that describes how the values of $\displaystyle (X,Y)$ are distributed in the plane. You can think of hot spots (or peaks) where the measure gives more probability, and it gets colder and colder at infinity (nearer to 0). Then for instance you may have some very hot spot near (1,2), which means that $\displaystyle (X,Y)$ has high probability to be near that point, i.e. with high probability X is near 1 and at the same time Y is near 2.
Now, one can see the distributions of X and Y in this setting: they are the distributions on each of the axes obtained by averaging the measure over the whole line that projects to the chosen point of the axis. For instance, P(X=x) is obtained by averaging the measure on the (vertical) line of equation "X=x"; like a "projection" of the measure. If there was a hot spot at (1,2), then by projection there will be a hot spot at x=1 as well, and another at y=2.
But if there are, for instance, hot spots at (1,2) and at (3,4), you will have spots at 1 and 3 for X, and at 2 and 4 for Y. In that case, the distributions of X and Y alone don't tell you whether (1,4) is a likely spot for (X,Y).

I don't know if this has clarified anything... You'll probably get used to it and understand the concept progressively.
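The two "extreme cases" above can also be checked concretely (a small sketch; my choice of a fair coin as the common distribution $\displaystyle \mu$): X=Y and X,Y independent give different joint distributions with identical marginals.

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)

# Joint pmf when X = Y, each fair-coin distributed: mass on the diagonal only.
joint_equal = {(0, 0): half, (1, 1): half}

# Joint pmf when X and Y are independent fair coins: mass on all four points.
joint_indep = {(x, y): half * half for x, y in product([0, 1], repeat=2)}

def marginal_x(joint):
    """Project the joint pmf onto the first coordinate (take B = everything)."""
    out = {}
    for (x, _), p in joint.items():
        out[x] = out.get(x, Fraction(0)) + p
    return out

# Same marginals in both cases...
print(marginal_x(joint_equal))  # {0: 1/2, 1: 1/2}
print(marginal_x(joint_indep))  # {0: 1/2, 1: 1/2}
# ...but the joints disagree, e.g. at the point (0, 1):
print(joint_equal.get((0, 1), 0), joint_indep[(0, 1)])  # 0 versus 1/4
```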

10. Originally Posted by Laurent
My humble attempt at a clarification: [...]
Yes, it clarifies.
So I think the point is that the joint distribution of X1 and X2 tells you MORE than the distributions of X1 and X2 separately. In a sense, the joint distribution of X1 and X2 gives you more complete information (e.g. whether X1 and X2 are independent or not).

The joint distribution tells you MORE, so is it correct to say that Statement 2 implies Statement 1? i.e. IF we know that (X1,X2) and (Y1,Y2) have the same JOINT distribution, THEN X1 and Y1 have the same distribution, AND X2 and Y2 have the same distribution. (But the converse is NOT necessarily true.) Am I right?

11. Originally Posted by kingwinner
The joint distribution tells you MORE, so is it correct to say that Statement 2 implies Statement 1? i.e. IF we know that (X1,X2) and (Y1,Y2) have the same JOINT distribution, THEN X1 and Y1 have the same distribution, AND X2 and Y2 have the same distribution. (But the converse is NOT necessarily true.) Am I right?
Yes, you are. The joint distribution contains the distributions of the marginals (the "coordinates"), and Statement 1 says that the marginals are the same.

In fact, the joint distribution of (X1,X2) simply tells you everything you need in order to compute anything about X1 and X2.
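As a small illustration of that last point (with a hypothetical joint pmf of my own choosing): the distribution of X1+X2 is computed from the joint pmf alone, by summing the mass over pairs with a given total.

```python
from fractions import Fraction

# A hypothetical joint pmf for (X1, X2) on {0, 1}^2: two independent fair coins.
joint = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
         (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4)}

# P(X1 + X2 = s) = sum of the joint mass over all pairs (x1, x2) with x1 + x2 = s.
sum_pmf = {}
for (x1, x2), p in joint.items():
    sum_pmf[x1 + x2] = sum_pmf.get(x1 + x2, Fraction(0)) + p

print(sum_pmf)  # mass 1/4 at 0, 1/2 at 1, 1/4 at 2
```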