
Math Help - uniform random variables

  1. #1
    Banned
    Joined
    Feb 2009
    Posts
    17

    uniform random variables

    Let the variable be
Yc = theta_1 + theta_1*theta_2 + theta_1*theta_2*theta_3 + ....
    where theta_i are i.i.d uniform random variables (i = 1,2,...) having

    CDF such that F(x) = 0, x <=0. F(x) = x,
    F(x) = 0<x<c and
    F(x) = 1 , for x >=c.

    Question,
    1)for what values of c > 0 is E(Yc) finite.
    2)Also, for what value of c > 0, Yc (sum converges with probability 1).
    Last edited by mr fantastic; March 8th 2009 at 08:30 PM. Reason: Deleted question restored.

  2. #2
    Member
    Joined
    Jul 2008
    Posts
    138
    Ok so \theta_i are IID, Uniform on [0,c]. And
    Y_c=\sum_{n=1}^\infty\prod_{m=1}^{n}\theta_m

    Well the expectation of the sum is the sum of the expectations. And the expectation of the product, if independent, is the product of the expectations. So

    E[Y_c] = E[\sum_{n=1}^\infty\prod_{m=1}^{n}\theta_m]

     = \sum_{n=1}^\infty\prod_{m=1}^{n}E[\theta_m]

     =  \sum_{n=1}^\infty\prod_{m=1}^{n} c/2

     = \sum_{n=1}^\infty (c/2)^n

    Which is a geometric series (minus the zeroth term). We know that converges for ...?

    As for the second part, ask yourself this question. If the expectation is finite, could the probability of Y_c being infinite be non-zero? So doesn't the answer to part 1 answer part 2?
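Under this Uniform[0, c] reading, a quick Monte Carlo sketch can check the closed form E[Y_c] = (c/2)/(1 - c/2); the function name and sample sizes below are my own choices, not from the thread:

```python
import random

def simulate_Yc(c, n_terms=60, n_trials=20_000):
    """Monte Carlo estimate of E[Y_c], where
    Y_c = sum_{n>=1} prod_{m=1..n} theta_m and theta_m are i.i.d. Uniform[0, c].
    The infinite series is truncated at n_terms; the tail is negligible here."""
    total = 0.0
    for _ in range(n_trials):
        partial, prod = 0.0, 1.0
        for _ in range(n_terms):
            prod *= random.uniform(0.0, c)
            partial += prod
        total += partial
    return total / n_trials

# Summing the geometric series: E[Y_c] = (c/2) / (1 - c/2) for c < 2.
c = 1.0
print(simulate_Yc(c), (c / 2) / (1 - c / 2))  # both should be near 1.0
```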

  3. #3
    Banned
    Joined
    Feb 2009
    Posts
    17
I did exactly as you have done now, but there is a small error in your E(X). It would be c^2/2, since f(x) = 1 on [0,c], which is somewhat different from the conventional f(x) of the uniform distribution, which would be 1/c. However, the answer, I was told, is wrong. Hence I posted the question. Not sure what the catch could be?

    Quote Originally Posted by meymathis View Post
Ok so \theta_i are IID, Uniform on [0,c]. And
Y_c=\sum_{n=1}^\infty\prod_{m=1}^{n}\theta_m ...
    Last edited by mr fantastic; March 8th 2009 at 08:32 PM. Reason: Deleted post restored

  4. #4
    Member
    Joined
    Jul 2008
    Posts
    138
    Quote Originally Posted by cryptic26 View Post
    I did exactly as you have done now, but there is a small error in your E(X). It would be c^2/2 as f(x) = 1 for [0,c], which is somewhat different than the conventional f(x) of Uniform distribution which would be 1/c. However,the answer (I was told is wrong). Hence, I posted the question. Not sure what could be the catch?
    You have:
    CDF such that F(x) = 0, x <=0. F(x) = x,
    F(x) = 0<x<c and
    F(x) = 1 , for x >=c.

    Which I assumed was (since you said uniform in the title, I didn't look more closely):
    CDF such that F(x) = 0 for x <=0.
    F(x) = x, for 0<x<c and
    F(x) = 1 , for x >=c.

    Now that I look at this, this is NOT a CDF, unless c<1. A CDF has to be non-decreasing, and it most be 0 as x\rightarrow -\infty and must be 1 and x\rightarrow \infty. If c>1 then F would not be non-decreasing.

    You have uniform distribution in your title. Were you told that this was a problem concerning uniform distributions somewhere? This is certainly NOT a uniform distribution. If this is supposed to be about the uniform distributions, then I would think that the problem should have been stated as:
    CDF such that F(x) = 0 for x <=0.
    F(x) = x/c, for 0<x<c and
    F(x) = 1 , for x >=c.
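A tiny numerical illustration of this point (the function name and the value c = 1.5 are mine, for illustration only): with the CDF as written, F exceeds 1 on (1, c) and then drops back to 1 at x = c, so it fails to be non-decreasing.

```python
def F(x, c):
    """The CDF exactly as written in the problem: F(x) = x on (0, c), not x/c."""
    if x <= 0:
        return 0.0
    if x < c:
        return x
    return 1.0

c = 1.5
print(F(1.4, c))  # 1.4 -- already exceeds 1
print(F(1.5, c))  # 1.0 -- smaller than F(1.4, c), so F is not non-decreasing
```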

  5. #5
    Member
    Joined
    Jul 2008
    Posts
    138
    Quote Originally Posted by meymathis View Post
    Now that I look at this, this is NOT a CDF, unless c<1. A CDF has to be non-decreasing, and it most be 0 as x\rightarrow -\infty and must be 1 as x\rightarrow \infty.
    I made a typo which I underlined in italics above.

  6. #6
    Banned
    Joined
    Feb 2009
    Posts
    17
If that is what you think, then write a letter to the author of the book who wrote this question. No one told you that it is a standard uniform variable; you just made that assumption.

    Quote Originally Posted by meymathis View Post
    You have:

    You have uniform distribution in your title. Were you told that this was a problem concerning uniform distributions somewhere? This is certainly NOT a uniform distribution. If this is supposed to be about the uniform distributions, then I would think that the problem should have been stated as:
    CDF such that F(x) = 0 for x <=0.
    F(x) = x/c, for 0<x<c and
    F(x) = 1 , for x >=c.
    Last edited by mr fantastic; March 8th 2009 at 08:35 PM. Reason: Deleted post restored

  7. #7
    Banned
    Joined
    Feb 2009
    Posts
    17

    Sum and product of uniform variables.

    I revised the title, in case it helps. This is what I wrote in my first post.

    Let the variable be
Yc = theta_1 + theta_1*theta_2 + theta_1*theta_2*theta_3 + ....
    where theta_i are i.i.d uniform random variables (i = 1,2,...) having the CDF that I mentioned in three cases.

    This is where the uniform random variables comes into the title.


    Quote Originally Posted by cryptic26 View Post
I think you are making an unnecessary fuss over the title. I wrote the question as it was in the text book, where it says we have uniform i.i.d random variables having CDF given as (three cases). ...
    Last edited by mr fantastic; March 8th 2009 at 08:34 PM. Reason: Deleted post restored

  8. #8
    Banned
    Joined
    Feb 2009
    Posts
    17

    Send to the author

    I think you are making an unnecessary fuss over the title. I wrote the question as it was in the text book, where it says we have uniform i.i.d random variables having CDF given as (three cases). If what I wrote is not clear, then the original question is no better.

    Having said that, for c in (-inf, inf), the CDF is non decreasing.
    As c >= inf , F(x) =1.
    c <= -inf, F(x) = 0
    and anywhere in between, F(x) = c, which is non decreasing. So, there is nothing wrong with the CDF function.


    Quote Originally Posted by meymathis View Post
You have:
CDF such that F(x) = 0, x <=0. F(x) = x, ...
    Last edited by mr fantastic; March 8th 2009 at 08:32 PM. Reason: Deleted post restored

  9. #9
    Banned
    Joined
    Feb 2009
    Posts
    17
    Quote Originally Posted by meymathis View Post
    I made a typo which I underlined in italics above.
    In fact, I think the part of the problem, is also to find the "c" for which the CDF is defined. So, you are saying that if c >1, then the given CDF is not monotonically increasing. That makes sense. This would mean both E(Yc) and Yc shall be finite.

  10. #10
    Member
    Joined
    Jul 2008
    Posts
    138
    Quote Originally Posted by cryptic26 View Post
    Let the variable be
Yc = theta_1 + theta_1*theta_2 + theta_1*theta_2*theta_3 + ....
    where theta_i are i.i.d uniform random variables (i = 1,2,...) having

    CDF such that F(x) = 0, x <=0. F(x) = x,
    F(x) = 0<x<c and
    F(x) = 1 , for x >=c.

    Question,
    1)for what values of c > 0 is E(Yc) finite.
    2)Also, for what value of c > 0, Yc (sum converges with probability 1).
    You seem to be misunderstanding me on many accounts.

The problem as stated said "where theta_i are i.i.d uniform random variables". The phrase "uniform random variables" has a very specific meaning in probability theory: it means they have the uniform distribution. It doesn't mean that they are the same. IID means independent and identically distributed; why would someone need to say the \theta_i are the same twice?

I never assumed they were standard uniform. What I said, and now I think you understand, is that if C>1, then we don't get a valid distribution. The distribution of \theta_i is only uniform (of any kind) if C=1. Otherwise it is NOT a uniform distribution. For 0<C<1 you would get a valid CDF, but it would not be a uniform distribution. That is why I think the author just forgot to put the "/c" in the CDF. Then you get a problem about uniform random variables, which is what the text of the problem specifically states.

    I think the problem in the book has a typo in it. What is so scandalous about that? I have never seen a book that did not have a typo in it.

    You are confusing me on what the book actually says:
    In one place you have (CDF #1):
    CDF such that F(x) = 0, x <=0. F(x) = x,
    F(x) = 0<x<c and
    F(x) = 1 , for x >=c.
    and in another (CDF #2)
    As c >= inf , F(x) =1.
    c <= -inf, F(x) = 0
    and anywhere in between, F(x) = c, which is non decreasing. So, there is nothing wrong with the CDF function.
    The first one doesn't make sense because you (the author?) say that F(x)=x, F(x)=0<x<c. Do you mean F(x)=x for 0<x<c?

    The second one IS a CDF of a non-uniform RV. It is an RV with mass only at 0 and c. Now F is constant rather than linear in (0,c).

    In another post you said
    I did exactly as you have done now, but there is a small error in your E(X). It would be c^2/2 as f(x) = 1 for [0,c], which is somewhat different than the conventional f(x) of Uniform distribution which would be 1/c. However,the...
    Here you say the PDF is 1 in [0,C]. Which corroborates the amended CDF #1.

So if you don't believe that the author made a mistake and this (amended CDF #1) is what they meant, fine. Then you did not calculate the expected value of this partially continuous, partially discrete distribution correctly. There is a certain amount of mass at C, unlike every other point in the distribution: P(\theta_i=C)=1-C (whereas for any continuous RV this would be 0). This is easy to see since F(C)=1 but F(C-\epsilon)=C-\epsilon. So P(\theta_i=C)=\lim_{\epsilon \rightarrow 0^+}F(C)-F(C-\epsilon)=1-C.

So E[\theta_i] = C^2/2+(1-C)C = C-C^2/2
The first term you had right, but you were missing the discrete mass at C. To prove convergence: since C<=1, we have 0 \leq C-C^2/2 \leq C, so for C<1 you can bound the infinite series by a convergent geometric series, and therefore the one you care about converges too. If C=1, then E[\theta_i] = 1/2, and you again get a convergent geometric series.
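If it helps, here is one way to sanity-check E[\theta_i] = C - C^2/2 numerically. The sampling trick below is my own, not from the problem: for C <= 1, min(U, C) with U ~ Uniform(0,1) has exactly the stated CDF, including the point mass 1-C at C.

```python
import random

def sample_theta(c):
    """Draw theta with CDF F(x) = x on (0, c), F(x) = 1 for x >= c (c <= 1).
    min(U, c) with U ~ Uniform(0, 1) has this CDF: density 1 on (0, c)
    plus a point mass P(theta = c) = 1 - c."""
    return min(random.random(), c)

c = 0.6
n = 100_000
est = sum(sample_theta(c) for _ in range(n)) / n
print(est, c - c**2 / 2)  # both near 0.42
```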

  11. #11
    Member
    Joined
    Jul 2008
    Posts
    138
    Quote Originally Posted by cryptic26 View Post
    In fact, I think the part of the problem, is also to find the "c" for which the CDF is defined. So, you are saying that if c >1, then the given CDF is not monotonically increasing. That makes sense. This would mean both E(Yc) and Yc shall be finite.
    This confuses me too. If it is not a valid CDF, then "E(Yc) and Yc shall be finite" is nonsensical since \theta_i are not random variables.

  12. #12
    Banned
    Joined
    Feb 2009
    Posts
    17
Let me thank you for the detailed response, first of all. Secondly, no, I am not the author. And thirdly, I wish I knew how to edit my posts instead of creating new ones; the confusion is partly on account of the multiple posts on my account.

It is true that the problem could have a typo. But at the same time, the problem could be stated this way intentionally, to confuse the students.

    Here is what is in the book about the CDF of the theta(s), which are i.i.d uniform random variables. This is as stated in the book.

    CDF such that F(x) = 0, x <=0.

    F(x) = x, 0<x<c and
    F(x) = 1 , for x >=c.

You correctly pointed out that F(x) is not monotonic for c > 1. Hence, for F(x) to be a CDF, one has the bound c <= 1.

The next part is E(Yc): the integral of x dx with limits from 0 to c. Hence I got an infinite series c^2/2 + ...,
which is convergent as long as 0 < c <= 1.

Also, if c < 1, then the infinite series Yc shall be convergent with probability 1 (which is the second part). Please correct me again if I am wrong or something is unclear.
    Last edited by mr fantastic; March 8th 2009 at 08:37 PM. Reason: Deleted post restored

  13. #13
    Banned
    Joined
    Feb 2009
    Posts
    17
    Quote Originally Posted by meymathis View Post
    This confuses me too. If it is not a valid CDF, then "E(Yc) and Yc shall be finite" is nonsensical since \theta_i are not random variables.
Discarding something as nonsense without careful consideration can be quite dangerous.

  14. #14
    Member
    Joined
    Jul 2008
    Posts
    138
You're welcome. Let's try to step back. Here are the major points of the proof:

    Ok so \theta_i are IID, with distribution given by:
    CDF such that F(x) = 0, x <=0.

    F(x) = x, 0<x<c and
    F(x) = 1 , for x >=c.

THIS IS NOT A CDF IF C>1, since F(x) would not be a non-decreasing function. An RV must be described by a valid CDF. Therefore this problem only makes sense if C\leq 1, since Y_c IS UNDEFINED otherwise (it is defined as a function of the ILL-DEFINED components \theta_i). (It is nonsense to discuss Y_c for c>1, which I say after careful consideration.)

    So:

    Y_c=\sum_{n=1}^\infty\prod_{m=1}^{n}\theta_m

    Well the expectation of the sum is the sum of the expectations. And the expectation of the product, if independent, is the product of the expectations.

In a previous post, I showed (or tried to show) that E[\theta_i]=c-c^2/2. It is not c^2/2, because that ignores all of the mass at c. \theta_i has a strangeish distribution, since its CDF is given by a discontinuous function (for C\neq 1). This is a very important point that you need to think about. For the continuous distributions we normally work with, P(X=x_0)=0, since P(X=x_0) = \lim_{\epsilon \rightarrow 0^+} P(X\leq x_0)-P(X\leq x_0-\epsilon)=\lim_{\epsilon \rightarrow 0^+} F_X(x_0)-F_X(x_0-\epsilon). The last expression is 0 if F is continuous. But our F is not continuous at c (if C\neq 1). To compute E[\theta_i] you have to mix continuous and discrete techniques: E[\theta_i]=\int_0^c x\,dx+c\,P(\theta_i=c)=c^2/2 + c(1-c) = c-c^2/2

    So

    E[Y_c] = E[\sum_{n=1}^\infty\prod_{m=1}^{n}\theta_m]

     = \sum_{n=1}^\infty\prod_{m=1}^{n}E[\theta_m]

     =  \sum_{n=1}^\infty\prod_{m=1}^{n} (c-c^2/2)

     =  \sum_{n=1}^\infty(c-c^2/2)^n

If  0\leq c\leq 1, then the fact that  c-c/2 \leq c-c^2/2 \leq c implies that 0\leq c-c^2/2 \leq 1; in fact c-c^2/2 \leq 1/2 on [0,1], so the geometric series above converges for every 0 < c \leq 1.
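As a numerical cross-check of the series above (a sketch under my own sampling construction min(U, c), which realizes the stated CDF for c <= 1): summing the geometric series gives E[Y_c] = r/(1 - r) with r = c - c^2/2, and a simulation agrees.

```python
import random

def simulate_Yc(c, n_terms=80, n_trials=20_000):
    """Estimate E[Y_c] with theta_i = min(U, c), U ~ Uniform(0, 1),
    which realizes the mixed CDF in the problem (for c <= 1).
    The series is truncated at n_terms; the tail is negligible here."""
    total = 0.0
    for _ in range(n_trials):
        partial, prod = 0.0, 1.0
        for _ in range(n_terms):
            prod *= min(random.random(), c)
            partial += prod
        total += partial
    return total / n_trials

c = 0.8
r = c - c**2 / 2                    # E[theta_i] = 0.48
print(simulate_Yc(c), r / (1 - r))  # both near 0.923
```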

    Also, I didn't mean to imply you were the author. I meant to say either you or the author was saying....

  15. #15
    Banned
    Joined
    Feb 2009
    Posts
    17
Makes total sense. I was already thinking about the discontinuity problem of F(x). If c > 1, then F is still continuous at 1, but for x in (1, c) we have F(x) = x > 1, while F(x) = 1 for all x >= c, so F decreases at c and cannot be a valid CDF.

In fact, let me tell you that this problem is from a manuscript that will hopefully become a published textbook. The author is a retired professor of Statistics. I am helping solve some of the problems. I also try to post some other problems that I am learning, for my own edification. In fact, the author told me that this problem is not as obvious as it seems and needs careful consideration to understand. As such, it is a sub-part of another, larger problem. Thanks.

    Quote Originally Posted by meymathis View Post
You're welcome. Let's try to step back. Here are the major points of the proof: ...
    Last edited by mr fantastic; March 8th 2009 at 08:37 PM. Reason: Restored deleted post


