I checked over my work again and I think the answer should be correct.
But do we also need the assumption of independence between N and the X_i's?
If so, in which step do we actually have to USE this independence?
Thanks!
Let T be a constant, and let N be a random variable.
Suppose {X_1,X_2,...} are independent, and each X_i follows a continuous uniform(0,T) distribution.
I would like to compute Var[(NT-(X_1+X_2+...+X_N))|N].
Attempt:
Var[(NT-(X_1+X_2+...+X_N))|N]
=(-1)^2 *Var[(X_1+X_2+...+X_N)|N] (since we are given N, NT is treated as a constant and I am using the fact that Var(aX+b)=a^2 *Var(X))
=Var[(X_1+X_2+...+X_N)|N]
=N Var(X_1) (since the X_i's are i.i.d.)
=N (T^2)/12
Am I right?? (in particular the reasoning in the step colored in red, the "=N Var(X_1)" step?)
I am not feeling so confident about this answer, so please confirm or correct me if I am wrong. Thank you!
[note: also under discussion in talk stats forum]
I too think this is correct. The independence is used in this step: Var[(X_1+X_2+...+X_N)|N] = N Var(X_1). Indeed, not only are the X_i's i.i.d., but above all (that's what you're using) they are i.i.d. conditionally on N (because they are independent of N), which gives Var[(X_1+...+X_N)|N=k] = Var[(X_1+...+X_k)|N=k]. And they are independent of N, hence Var[(X_1+...+X_k)|N=k] = Var(X_1+...+X_k) = k Var(X_1).
NB: you could also say that NT-(X_1+...+X_N) = (T-X_1)+(T-X_2)+...+(T-X_N), and the r.v.'s T-X_i are i.i.d. uniform on (0,T) again (we performed a symmetry, hence they have the same law as the X_i's). Thus (again using the independence with N): Var[(T-X_1)+...+(T-X_N)|N] = N Var(T-X_1) = N (T^2)/12, which double-checks your step in red.
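If you want a quick numerical sanity check of that step, here is a simulation sketch (not part of the argument; T and the fixed value k of N below are just values I picked):

import numpy as np

rng = np.random.default_rng(0)
T, k, reps = 3.0, 7, 200_000              # arbitrary constant T and a fixed value k of N

X = rng.uniform(0.0, T, size=(reps, k))   # k i.i.d. uniform(0,T) draws per replication
S = X.sum(axis=1)                         # X_1 + ... + X_k
print(np.var(k * T - S))                  # sample variance of kT - (X_1+...+X_k)
print(k * T**2 / 12)                      # claimed value N(T^2)/12 with N = k, here 5.25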
OK, then I think the idea is this:
P(Z_N|N=k)=P(Z_k|N=k)=P(Z_k), where the last equality is true only if we assume the independence between N and those Z_i, and in this case, we can drop the condition N=k in the last step.
But here we have something more complicated: Var[(NT-(X_1+X_2+...+X_N))|N]. There is NT in it, it is conditional on N, and NT and N are not independent (even though the X_i's and N are independent), so how can we drop the condition N in this case?
1) Why are the r.v.'s (T-X_i) i.i.d. uniform[0,T]? How do you know this?
2) Why does this imply that Var[(T-X_1)+...+(T-X_N)|N] = Var[(X_1+...+X_N)|N]?
Could you please explain a little more on these?
Thank you! I am learning a lot from you
I liked your explanation in your first post: conditionally on N, NT is an additive constant hence it doesn't affect the variance. Writing it in your way above: for any Z, Var(NT-Z|N=k) = Var(kT-Z|N=k), and Var(kT-Z) = Var(Z) under any measure since kT is a constant.
1) It is a symmetry of the distribution. You would agree that if X is Bernoulli of parameter 1/2, then 1-X is also Bernoulli of parameter 1/2 (it is like switching between tails and heads). This is almost the same. You can prove it from the distribution function for instance: since 0<X<T, you also have 0<T-X<T, and for any 0<t<T, P(T-X<t)=P(X>T-t)=P(T-t<X<T)=t/T, which is the distribution function of a uniform distribution on [0,T].
2) For any k, since X_1,...,X_k are independent, T-X_1,...,T-X_k are still independent. And they have the same law as X_1,...,X_k because of 1). So (X_1,...,X_k) has the same joint distribution as (T-X_1,...,T-X_k). In particular, their sums have the same distribution, and therefore the same variance. Hence:
Var(NT-(X_1+...+X_N)|N=k)
= Var(kT-(X_1+...+X_k)) (using independence between X_1,... and N)
= Var((T-X_1)+...+(T-X_k))
= Var(X_1+...+X_k) (because of the above-mentioned equality in distribution).
But these points are useless after your clever remark about Var(Z+b)=Var(Z). That was just a little remark.
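Still, here is a small simulation of that symmetry in case it helps (T, k and the test point t are arbitrary values I picked):

import numpy as np

rng = np.random.default_rng(1)
T, k, reps = 3.0, 7, 200_000
X = rng.uniform(0.0, T, size=(reps, k))

# distribution-function argument: P(T - X <= t) should equal t/T
t = 1.0
print(np.mean(T - X[:, 0] <= t), t / T)

# same joint law => the sums have the same variance, namely k*T^2/12
print(np.var(X.sum(axis=1)), np.var((T - X).sum(axis=1)), k * T**2 / 12)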
Well, you always have Var[(NT-(X_1+...+X_N))|N] = Var[(X_1+...+X_N)|N] (using your argument). Then you always have, for all k:
Var[(X_1+...+X_N)|N=k] = Var[(X_1+...+X_k)|N=k] = ∑_i Var(X_i|N=k) + ∑_{i≠j} Cov(X_i,X_j|N=k).
(where the sums run over 1 ≤ i, j ≤ k)
Therefore, the conclusion still holds as soon as Var(X_i|N=k) = Var(X_i) for all i and k, and Cov(X_i,X_j|N=k) = 0 for all i ≠ j and all k.
For instance, this holds if the X_i's are independent conditionally on N and Var(X_i|N=k) = Var(X_i); this is weaker than independence from N, but it is not easy to find an example where this could apply...
Let's consider a similar problem in which we compute the expectation rather than the variance.
I was looking in my textbook, and it says:
Definition:
Let X_0,X_1,X_2,... be random variables and N ∈ {0,1,2,...} be a counting random variable. If {N=n} depends only on X_0,X_1,...,X_n, then we call N a "stopping time" for the sequence. (note: {N≤n} can be used)
Wald's equation:
Let X_0=0, and X_1,X_2,... be i.i.d. with mean E(X_1). Let N be a "stopping time". Then E(∑X_n)=E(X_1)E(N), where the sum is from n=0 to n=N.
The result is exactly the same as in the case where we assume that N is independent of the X_i's, so does the above discussion mean that this independence assumption can be weakened?
I also don't understand the idea of a "stopping time" as defined. What is meant by "{N=n} depends only on X_0,X_1,...,X_n"?
Thank you!
Note that there is no conditioning on N here, hence this is not really a generalization of E(X_1+...+X_N|N) = N E(X_1).
A stopping time is what the name says: it is a time when you can decide to stop. In other words, suppose you discover the values X_0,X_1,... one after another; then in order to stop at time n (i.e. to decide whether N=n), you can only look at the values X_0,...,X_n, not at the "future".
For instance, N = min{n : X_n > 5} is a stopping time: you can stop at time N by waiting until a value exceeds 5.
On the other hand, N = argmax_{0≤n≤10} X_n (the time at which the largest of X_0,...,X_10 occurs) is not a stopping time because you have to look at X_0,...,X_10 before you know where you should have stopped.
If you think about it, you'll see that the condition of "being able to stop at time N" is equivalent to "for all n, the event {N=n} can be expressed in terms of X_0,...,X_n".
The usual formal definition of a stopping time uses sigma-algebras (filtrations): {N=n} ∈ F_n for all n, where F_n is the σ-algebra generated by X_0,...,X_n.
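To make the definition concrete, here is a toy simulation of the "wait until a value exceeds 5" stopping time together with a check of Wald's equation; taking the X_n to be i.i.d. uniform(0,10) is just an assumption I made for the example:

import numpy as np

rng = np.random.default_rng(2)
reps = 100_000
sums, Ns = [], []
for _ in range(reps):
    total, n = 0.0, 0
    while True:
        x = rng.uniform(0.0, 10.0)   # X_1, X_2, ... i.i.d. uniform(0,10), so E(X_1) = 5
        total += x
        n += 1
        if x > 5.0:                  # stopping rule: stop the first time a value exceeds 5
            break
    sums.append(total)
    Ns.append(n)

print(np.mean(sums))          # empirical E(X_1 + ... + X_N)
print(5.0 * np.mean(Ns))      # E(X_1) * (empirical E(N)); Wald says the two should agree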
1) By saying that {N=n} depends only on X_0,X_1,...,X_n, does it mean that N is a function of X_0,X_1,...,X_n only? (i.e. N=N(X_0,X_1,...,X_n)? Is n equal to N here?)
2) "If {N=n} depends only on Xo,X1,...,Xn, then we call N a stopping time for the sequence." <-----here, does this have to be true for ALL n=0,1,2,... in order for N to be a stopping time?
3) In the definition, it says that "If {N=n} depends only on X_0,X_1,...,X_n, then we call N a stopping time for the sequence. (note: {N≤n} can also be used.)" <------Why can {N≤n} also be used?
Thanks for explaining!
What would that even mean? No, it means that the event {N=n} can be expressed by conditions on X_0,...,X_n, like {X_0 ≤ 5, ..., X_{n-1} ≤ 5, X_n > 5}. Or, equivalently, 1_{N=n} = f_n(X_0,...,X_n) for some function f_n: the indicator function of the event {N=n} is a function of X_0,...,X_n only.
And yes, this has to be true for every n.
You could have figured this out: since {N=n} = {N≤n} \ {N≤n-1}, if {N≤n} and {N≤n-1} depend only on X_0,...,X_n (resp. on X_0,...,X_{n-1}), then both depend on X_0,...,X_n, and therefore so does their difference {N=n}. (Conversely, {N≤n} = {N=0} ∪ ... ∪ {N=n}, so each form of the condition implies the other.)
Thanks Laurent, this clarifies the idea of a stopping time.
But now I have some concern about the original problem; actually the following is the original context that leads to the problem above. Thinking about the problem a second time is making me feel puzzled...
Let {N(t): t≥0} be a Poisson process of rate λ. The points are to be thought of as the arrival times of customers at a store which opens at time t=T. The customers arriving between t=0 and t=T have to wait until the store opens. Let Y be the total time that these customers have to wait. Calculate Var(Y).
N(T)=N
N(T)~Poisson(λT)
(T_1,T_2,...,T_N) is equal in joint distribution to (X_(1),X_(2),...,X_(N)), where the order statistics are coming from X_1,X_2,...,X_N which are i.i.d. uniform(0,T).
=> T_1+T_2+...+T_N is equal in distribution to X_(1)+X_(2)+...+X_(N) = X_1+X_2+...+X_N
Var(Y)=Var(total waiting time)
=Var[(T-T_1)+(T-T_2)+...+(T-T_N)]
=Var(NT-X_1-X_2-...-X_N)
=E[Var(NT-X_1-X_2-...-X_N |N)] + Var[E(NT-X_1-X_2-...-X_N |N)]
and the red part, Var(NT-X_1-X_2-...-X_N |N), leads to my original problem in the top post.
But are N and the X_i's here really independent? The problem is that I don't think the times of occurrence of the points in a Poisson process and the number of points in [0,T] are independent. Are they? But if N and the X_i's are not independent, I have no idea how to continue with the calculations and compute Var(Y).
Do you have any insights about this?
Thanks for your help!
The theorem is: (T_1,T_2,...,T_N(T)) is equal in joint distribution to (X_(1),X_(2),...,X_(N)), where N is a Poisson random variable of parameter λT, and X_(1),...,X_(N) are the order statistics of the first N r.v.'s of the sequence X_1,X_2,..., which is a family of independent uniformly distributed r.v.'s on (0,T), independent of N.
Then (X_(1),...,X_(N)) depends on N, but the sequence X_1,X_2,... doesn't. Since X_(1)+...+X_(N) = X_1+...+X_N, you can reduce to independent random variables X_1,X_2,... that are independent of N and do what you were doing first.
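If I then carry your decomposition through: Var(Y) = E[Var(NT-X_1-...-X_N |N)] + Var[E(NT-X_1-...-X_N |N)] = E[N(T^2)/12] + Var[NT/2] = (λT)(T^2)/12 + (T^2/4)(λT) = λT^3/3, if I am not mistaken. A quick simulation sketch to check (λ and T below are made-up values):

import numpy as np

rng = np.random.default_rng(3)
lam, T, reps = 2.0, 3.0, 100_000

N = rng.poisson(lam * T, size=reps)   # number of arrivals in [0, T], Poisson(lam*T)
# given N, the arrival times are i.i.d. uniform(0,T); their order does not matter for the sum
Y = np.array([np.sum(T - rng.uniform(0.0, T, size=n)) for n in N])  # total waiting time
print(np.var(Y))                      # empirical Var(Y)
print(lam * T**3 / 3)                 # lam*T^3/3, here 18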