# Order Statistics, N independent uniform random variables

• March 21st 2009, 05:53 PM
utopiaNow
Order Statistics, N independent uniform random variables
Hi everyone,

I apologize as I don't know LaTeX, so the formatting will be off; I'll try my best though.

Here is the question:
Let X1, ...,Xn be independent uniform random variables in (0, 1). Also, let
M = max(X1, ...,Xn) and L = min(X1, ...,Xn).

Find the probability that the maximum is greater than 0.7 if you already
know that all of X1, ..., Xn are less than or equal to 0.8.

My attempt at a solution:

P(M > 0.7 | (X1,...,XN) <= 0.8)

= P(M > 0.7 (intersection) (X1,...,XN) <= 0.8)/ (P((X1,...,XN) <= 0.8))

I know the denominator is the probability that every variable is at most 0.8:

P(X1 <= 0.8) * P(X2 <= 0.8) * ... * P(XN <= 0.8)

Since they are independent, the individual cumulative probabilities can be multiplied to obtain the value of the denominator.
So take the integral from 0 to 0.8 of the uniform density, which will just give you 0.8. Since there are N uniform random variables we get:
0.8^N

Therefore we have
P(M > 0.7 (intersection) (X1,...,XN) <= 0.8)/ (0.8^N)

Now the part I'm having trouble with is the intersection: can we simply multiply the two probabilities in the numerator to obtain the intersection probability? I don't think we can, but I can't pin down why. Any insights?

Thanks
• March 21st 2009, 06:18 PM
utopiaNow
Addendum to my proposed solution
P(M > 0.7 (intersection) (X1,...,XN) <= 0.8)/ (0.8^N)

I think I have a proposed way to get the intersection: the maximum must be greater than 0.7, and we know all the variables have an upper limit of 0.8.

So the pdf of the maximum should be integrated with 0.7 as the lower limit of integration and 0.8 as the upper limit.

The pdf of the maximum of N independent uniform random variables is given by
f(x) = N * x^(N - 1), for 0 < x < 1

So the integral from 0.7 to 0.8 will give us $(0.8^N) - (0.7^N)$.

Dividing by the denominator I mentioned above gives a final answer of $[(0.8^N) - (0.7^N)] / (0.8^N)$.
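As a numerical sanity check on that answer (a quick sketch; the function names are just mine), here's a Python simulation that draws N uniforms, keeps only the trials where every draw is <= 0.8, and compares the fraction whose maximum exceeds 0.7 against $[(0.8^N) - (0.7^N)] / (0.8^N)$:

```python
import random

def conditional_prob_exact(n):
    # the closed-form answer derived above
    return (0.8 ** n - 0.7 ** n) / 0.8 ** n

def conditional_prob_mc(n, trials=200_000, seed=1):
    # estimate P(max > 0.7 | all X_i <= 0.8) by rejection sampling
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        if max(xs) <= 0.8:       # keep only trials satisfying the condition
            kept += 1
            if max(xs) > 0.7:    # event of interest: the maximum exceeds 0.7
                hits += 1
    return hits / kept

print(conditional_prob_exact(3))
print(conditional_prob_mc(3))
```

For N = 3 the exact value is about 0.33, and the simulated fraction should land close to it.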

Does this line of reasoning suit everyone?
• March 21st 2009, 06:28 PM
utopiaNow
My proposed solution doesn't seem right
I don't think my solution is correct, because it implies that as N -> infinity, the probability approaches 1. That doesn't seem right. Any thoughts?
• March 21st 2009, 07:34 PM
mr fantastic
Quote:

Originally Posted by utopiaNow
Hi everyone,

I apologize as I don't know LaTex, so formatting will be off, I'll try my best though.

Here is the question:
Let X1, ...,Xn be independent uniform random variables in (0, 1). Also, let
M = max(X1, ...,Xn) and L = min(X1, ...,Xn).

Find the probability that the maximum is greater than 0.7 if you already
know that all of X1, ..., Xn are less than or equal to 0.8.

My attempt at a solution:

P(M > 0.7 | (X1,...,XN) <= 0.8)

= P(M > 0.7 (intersection) (X1,...,XN) <= 0.8)/ (P((X1,...,XN) <= 0.8))

I know the denominator is equivalent to saying:

P(X1 <=0.8) and P(X2 <=0.8)....P(XN <=0.8)

Since they are independent, the cumulative probabilities can be multiplied to obtain the value of the denominator.
So take the integral from 0 to 0.8 of the uniform density, which will just give you 0.8. Since there are N uniform random variables we get:
0.8^N

Therefore we have
P(M > 0.7 (intersection) (X1,...,XN) <= 0.8)/ (0.8^N)

Now the part I'm having trouble with is the intersection: can we simply multiply the two probabilities in the numerator to obtain the intersection probability? I don't think we can, but I can't pin down why. Any insights?

Thanks

Do you know how to get the pdf of $M$? (There's no point in me doing it if you can do it yourself.)

Then all you have to do is calculate $\Pr(M > 0.7 \, | \, M < 0.8)$.
• March 21st 2009, 07:55 PM
utopiaNow
Quote:

Originally Posted by mr fantastic
Do you know how to get the pdf of $M$ (there's no point me doing it if you can do it yourself).

Then all you have to do is calculate $\Pr(M > 0.7 \, | \, M < 0.8)$.

Hi, thanks for the reply. That's the solution I ended up getting; see my first reply. The pdf of M I found was f(x) = N * x^(N - 1).

Which led to a final answer of $(0.8^N - 0.7^N) / 0.8^N$

But that implies that as N gets larger, the probability increases and approaches 1. For some reason that doesn't seem right to me. Any insight into why, when we have more uniform random variables, we're almost guaranteed to have a maximum > 0.7 given that we know all are <= 0.8?
• March 22nd 2009, 05:09 AM
awkward
Quote:

Originally Posted by utopiaNow
Hi, thanks for the reply. That's the solution I ended up getting; see my first reply. The pdf of M I found was f(x) = N * x^(N - 1).

Which led to a final answer of $(0.8^N - 0.7^N) / 0.8^N$

But that implies that as N gets larger, the probability increases and approaches 1. For some reason that doesn't seem right to me. Any insight into why, when we have more uniform random variables, we're almost guaranteed to have a maximum > 0.7 given that we know all are <= 0.8?

If you have a lot of random numbers in the range 0 to 0.8, doesn't it seem likely that at least one of them will be greater than 0.7?
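To put numbers on that intuition (a quick sketch; the simplification is mine): conditioned on all X_i <= 0.8, each X_i is uniform on (0, 0.8), so a single one stays below 0.7 with probability 7/8, and the answer reduces to 1 - (7/8)^N:

```python
def p(n):
    # conditional probability from the thread, simplified:
    # (0.8**n - 0.7**n) / 0.8**n == 1 - (7/8)**n
    return 1 - (7 / 8) ** n

for n in (1, 5, 10, 20, 50):
    print(n, round(p(n), 4))  # the probability climbs toward 1 as n grows
```

So approaching 1 is exactly what should happen: each extra variable is one more independent chance to land in (0.7, 0.8).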
• March 22nd 2009, 08:46 PM
WaterMist
I'm wondering how to do parts a, b, and c of the same question:

(a) Find the expectation and variance of M. (Hint: find first the cdf of M
and then the correspondent pdf)
(b) Find the expectation and variance of L.
(c) Find the probability that the minimum is smaller than 0.4 if you already
know that X1 = 0.5.

I know what cdf and pdf are, but I'm not sure how I'm supposed to find it...
So far I've started this much:
a) E(X) = integral(0->1) x dx

Help is really appreciated, thanks!
• March 22nd 2009, 09:15 PM
utopiaNow
Quote:

Originally Posted by WaterMist
I'm wondering how to do parts a, b, and c of the same question:

(a) Find the expectation and variance of M. (Hint: find first the cdf of M
and then the correspondent pdf)
(b) Find the expectation and variance of L.
(c) Find the probability that the minimum is smaller than 0.4 if you already
know that X1 = 0.5.

I know what cdf and pdf are, but I'm not sure how I'm supposed to find it...
So far I've started this much:
a) E(X) = integral(0->1) x dx

Help is really appreciated, thanks!

Hi WaterMist,

Here's how I found the corresponding pdf:
Imagine the n random variables sorted in increasing order, and say you want the pdf of the variable in the $X_{(j)}$ position.

Then you know a few things: j - 1 of the values must be smaller than this value, n - j values must be bigger than it, and exactly one value occupies the position itself. So one such arrangement has density:
$
[F(x)]^{j - 1}*[1 - F(x)]^{n - j}*f(x)
$

Where F(x) is the cdf.

OK, that's fine, but we also have
$
\frac{n!}{(n - j)!(j - 1)!}
$

ways of choosing which variables fall below and which above this position.

So the final pdf of the $X_{(j)}$ value is:
$
\frac{n!}{(n - j)!(j - 1)!} \, [F(x)]^{j - 1} \, [1 - F(x)]^{n - j} \, f(x)
$

And for the maximum you want the pdf when j = n and for minimum you want the pdf when j = 1. And the F(x) and f(x) just correspond to the cdf and pdf of a uniform distribution on the interval (0,1).

When you work out the pdf for j = n and for j = 1, it should remind you of a type of distribution we covered in class, which will lead you to the formulas for the mean and variance. For extra fun you can derive those yourself if you want. LOL.
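As a sanity check (a rough sketch of mine, assuming plain numerical integration is fair game; the distribution being hinted at is presumably the Beta distribution): for j = n the pdf works out to n x^(n-1), and its mean and variance should come out to n/(n+1) and n/((n+1)^2 (n+2)).

```python
def max_moments_numeric(n, steps=200_000):
    # midpoint-rule integration of x*f(x) and x^2*f(x) on (0, 1),
    # where f(x) = n * x^(n-1) is the pdf of the maximum (the j = n case)
    h = 1.0 / steps
    mean = second = 0.0
    for k in range(steps):
        x = (k + 0.5) * h        # midpoint of the k-th subinterval
        f = n * x ** (n - 1)
        mean += x * f * h
        second += x * x * f * h
    return mean, second - mean * mean   # (mean, variance)

m, v = max_moments_numeric(5)
print(m)  # close to 5/6
print(v)  # close to 5/252
```

The j = 1 (minimum) case mirrors this: its pdf is n(1 - x)^(n-1), with mean 1/(n+1) and the same variance.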

Hope this helps!
• March 22nd 2009, 10:05 PM
WaterMist
So I have..

[n!/(n-j)!(j-1)! ] * [1 - 1/(b-a)]^(j-1) * [(x-a)(b-a)]

But I'm wondering what (a, b) are?

I assumed it would be (0, 1) since it's a uniform distribution, but in that case my equation for the pdf = 0, as (1 - 1/(1-0))^(n-1) will always equal (1 - 1)^(n-1) = 0 no matter the value of n...

Also, by definition of a uniform distribution, aren't E[X] = (a + b)/2 and Var(X) = (b - a)^2 / 12?

I'm so confused XD

(I've been sick for a week so I've missed quite a bit)
• March 22nd 2009, 10:12 PM
utopiaNow
Quote:

Originally Posted by utopiaNow
Hi WaterMist,

Here's how I found the corresponding pdf:
Imagine the n random variables sorted in increasing order, and say you want the pdf of the variable in the $X_{(j)}$ position.

Then you know a few things: j - 1 of the values must be smaller than this value, n - j values must be bigger than it, and exactly one value occupies the position itself. So one such arrangement has density:
$
[F(x)]^{j - 1}*[1 - F(x)]^{n - j}*f(x)
$

Where F(x) is the cdf.

OK, that's fine, but we also have
$
\frac{n!}{(n - j)!(j - 1)!}
$

ways of choosing which variables fall below and which above this position.

So the final pdf of the $X_{(j)}$ value is:
$
\frac{n!}{(n - j)!(j - 1)!} \, [F(x)]^{j - 1} \, [1 - F(x)]^{n - j} \, f(x)
$

And for the maximum you want the pdf when j = n and for minimum you want the pdf when j = 1. And the F(x) and f(x) just correspond to the cdf and pdf of a uniform distribution on the interval (0,1).

When you work out the pdf for j = n and for j = 1, it should remind you of a type of distribution we covered in class, which will lead you to the formulas for the mean and variance. For extra fun you can derive those yourself if you want. LOL.

Hope this helps!

Oops, I noticed I had the wrong exponent up there: I had j - 1 twice. The exponent for 1 - F(x) should be n - j, so I have it fixed now. Sorry about that, I was in a rush.