Order Statistics, N independent uniform random variables

  1. #1
    Junior Member utopiaNow

    Order Statistics, N independent uniform random variables

    Hi everyone,

    I apologize as I don't know LaTeX, so the formatting will be off; I'll try my best though.

    Here is the question:
    Let X1, ...,Xn be independent uniform random variables in (0, 1). Also, let
    M = max(X1, ...,Xn) and L = min(X1, ...,Xn).

    Find the probability that the maximum is greater than 0.7 if you already
    know that all of X1, ..., Xn are less than or equal to 0.8.

    My attempt at a solution:

    P(M > 0.7 | X1, ..., XN <= 0.8)

    = P(M > 0.7 (intersection) X1, ..., XN <= 0.8) / P(X1, ..., XN <= 0.8)

    I know the denominator can be written as:

    P(X1 <= 0.8) * P(X2 <= 0.8) * ... * P(XN <= 0.8)

    since the variables are independent, so the individual probabilities can be multiplied to obtain the value of the denominator.
    Each factor is the integral from 0 to 0.8 of the uniform density, which is just 0.8. Since there are N uniform random variables, we get:
    0.8^N
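    (For instance, with N = 3 the denominator is 0.8 * 0.8 * 0.8 = 0.512.)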

    Therefore we have
    P(M > 0.7 (intersection) X1, ..., XN <= 0.8) / 0.8^N

    Now the part I'm having trouble with is the intersection: can we simply multiply the two probabilities in the numerator to obtain the probability of the intersection? I don't think we can, but I can't pin down why. Any insights?

    Thanks

  2. #2
    Junior Member utopiaNow

    Addendum to my proposed solution

    P(M > 0.7 (intersection) X1, ..., XN <= 0.8) / 0.8^N

    I think I have a way to get the intersection probability.
    The maximum must be greater than 0.7, and we know all the variables have an upper limit of 0.8.

    So we integrate the density of the maximum with 0.7 as the lower limit and 0.8 as the upper limit.

    The pdf of the maximum of N independent uniform random variables is given by
    f(x) = N * x^(N - 1),   for 0 < x < 1

    So the integral from 0.7 to 0.8 will give us (0.8^N) - (0.7^N).

    Dividing by the denominator I mentioned above gives a final answer of [(0.8^N) - (0.7^N)] / (0.8^N).
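
    If you want a quick numerical sanity check on that formula, here is a rough Monte Carlo sketch in Python (assuming NumPy is available; N = 5 is just an illustrative choice, not part of the problem):

    Code:
    import numpy as np

    N = 5                                 # illustrative number of uniform random variables
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(1_000_000, N))

    M = X.max(axis=1)                     # the maximum of each sample of N uniforms
    all_below = M <= 0.8                  # event: every X_i <= 0.8, i.e. M <= 0.8
    both = all_below & (M > 0.7)          # event: M > 0.7 and every X_i <= 0.8

    estimate = both.mean() / all_below.mean()
    exact = (0.8**N - 0.7**N) / 0.8**N
    print("simulated:", round(estimate, 4), " exact:", round(exact, 4))

    The simulated and exact values should agree to two or three decimal places.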

    Does this line of reasoning seem right to everyone?

  3. #3
    Junior Member utopiaNow

    My proposed solution doesn't seem right

    I don't think my solution is correct, because it implies that as N -> infinity the probability approaches 1. That doesn't seem right. Any thoughts?

  4. #4
    Flow Master mr fantastic
    Originally Posted by utopiaNow:
    [...] Now the part I'm having trouble with is the intersection: can we simply multiply the two probabilities in the numerator to obtain the probability of the intersection? I don't think we can, but I can't pin down why. Any insights?
    Do you know how to get the pdf of M? (There's no point in me deriving it if you can do it yourself.)

    Then all you have to do is calculate Pr(M > 0.7 | M < 0.8).

  5. #5
    Junior Member utopiaNow
    Originally Posted by mr fantastic:
    Do you know how to get the pdf of M? (There's no point in me deriving it if you can do it yourself.)

    Then all you have to do is calculate Pr(M > 0.7 | M < 0.8).
    Hi, thanks for the reply. That is the solution I ended up getting; see my first reply above. The pdf of M I found was f(x) = N * x^(N - 1),

    which led to a final answer of (0.8^N - 0.7^N) / 0.8^N.
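
    In full: the event that all of X1, ..., XN are at most 0.8 is exactly the event M <= 0.8, and the cdf of M on (0, 1) is F_M(x) = x^N, so

    P(M > 0.7 | M <= 0.8) = [F_M(0.8) - F_M(0.7)] / F_M(0.8) = (0.8^N - 0.7^N) / 0.8^N.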

    But that implies that as N gets larger the probability increases and approaches 1. For some reason that doesn't seem right to me. Any insight into why, with more uniform random variables, we're almost guaranteed to have a maximum greater than 0.7 given that all of them are at most 0.8?

  6. #6
    Super Member
    Originally Posted by utopiaNow:
    [...] Any insight into why, with more uniform random variables, we're almost guaranteed to have a maximum greater than 0.7 given that all of them are at most 0.8?
    If you have a lot of random numbers in the range 0 to 0.8, doesn't it seem likely that at least one of them will be greater than 0.7?
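
    To put numbers on it: conditioned on every X_i being at most 0.8, each one independently lands above 0.7 with probability 0.1/0.8 = 1/8, so the conditional probability is 1 - (7/8)^N, which is the same as (0.8^N - 0.7^N) / 0.8^N. That is about 0.125 for N = 1, 0.49 for N = 5, and 0.93 for N = 20, so the answer really does climb toward 1.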

  7. #7
    Newbie WaterMist
    I'm wondering how to do parts a, b, and c of the same question:

    (a) Find the expectation and variance of M. (Hint: first find the cdf of M
    and then the corresponding pdf.)
    (b) Find the expectation and variance of L.
    (c) Find the probability that the minimum is smaller than 0.4 if you already
    know that X1 = 0.5.

    I know what the cdf and pdf are, but I'm not sure how I'm supposed to find them...
    So far I've started this much:
    a) E(X) = integral(0 -> 1) x dx


    Help is really appreciated, thanks!

  8. #8
    Junior Member utopiaNow
    Originally Posted by WaterMist:
    [...] I know what the cdf and pdf are, but I'm not sure how I'm supposed to find them...
    Hi WaterMist,

    Here's how I found the corresponding pdf:
    Imagine the N random variables placed in sorted order, and say you want the pdf of the value in the  X_{(j)} position.

    Then you know a few things: j - 1 of the values must be smaller than this value, n - j of the values must be bigger than it, and exactly one value sits in this position. So one such arrangement contributes a density of:
    [F(x)]^{j - 1} [1 - F(x)]^{n - j} f(x)

    Where F(x) is the cdf.

    OK, that's fine, but now we have

    \frac{n!}{(n - j)!(j - 1)!}

    ways of ordering this arrangement.

    So the final pdf of the  X_{(j)} value is:
    \frac{n!}{(n - j)!(j - 1)!} [F(x)]^{j - 1} [1 - F(x)]^{n - j} f(x)

    For the maximum you want the pdf with j = n, and for the minimum you want the pdf with j = 1. Here F(x) and f(x) are just the cdf and pdf of a uniform distribution on the interval (0, 1), i.e. F(x) = x and f(x) = 1.

    When you write out the pdf for j = n and for j = 1, it should remind you of a type of distribution we covered in class, which will lead you to the formulas for the mean and variance. For extra fun you can derive those yourself if you want. LOL.
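
    If you want to check your answers numerically, here is a rough Python sketch (assuming NumPy is available; n = 5 is just an illustrative choice). It compares simulated moments of M and L against the closed forms for the Beta(n, 1) and Beta(1, n) distributions, which is what the pdfs above reduce to for j = n and j = 1:

    Code:
    import numpy as np

    n = 5                                 # illustrative number of uniform random variables
    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.0, size=(200_000, n))

    M = samples.max(axis=1)               # maximum: pdf n*x^(n-1), i.e. Beta(n, 1)
    L = samples.min(axis=1)               # minimum: pdf n*(1-x)^(n-1), i.e. Beta(1, n)

    EM, EL = n / (n + 1), 1 / (n + 1)     # Beta means
    VM = VL = n / ((n + 1)**2 * (n + 2))  # Beta variances (equal by symmetry)

    print("E[M]   sim %.4f  exact %.4f" % (M.mean(), EM))
    print("E[L]   sim %.4f  exact %.4f" % (L.mean(), EL))
    print("Var[M] sim %.4f  exact %.4f" % (M.var(), VM))
    print("Var[L] sim %.4f  exact %.4f" % (L.var(), VL))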

    Hope this helps!

  9. #9
    Newbie WaterMist
    So I have...

    [n!/(n-j)!(j-1)! ] * [1 - 1/(b-a)]^(j-1) * [(x-a)(b-a)]

    But I'm wondering: what are a and b?

    I assumed (a, b) would be (0, 1) since it is a uniform distribution, but in that case my equation for the pdf = 0, as (1 - 1/(1-0))^(n-1) will always equal 1 - 1 = 0 no matter the value of n...

    Also, by the definition of a uniform distribution, aren't E[X] = (a + b)/2 and Var(X) = (b - a)^2 / 12?

    I'm so confused XD


    (I've been sick for a week so I've missed quite a bit)

  10. #10
    Junior Member utopiaNow
    Originally Posted by utopiaNow:
    [...] So the final pdf of the  X_{(j)} value is:

    \frac{n!}{(n - j)!(j - 1)!} [F(x)]^{j - 1} [1 - F(x)]^{n - j} f(x) [...]
    Oops, I noticed I had the wrong exponent up there: I had j - 1 twice. The exponent on 1 - F(x) should be n - j, so I have fixed it now. Sorry about that, I was in a rush.
