1. ## Minimal Sufficient Statistic

So I have a random sample X1, X2, ..., Xn from a Uniform(-theta, theta) distribution.
Let T(X) = max{X(n), -X(1)}, where X(n) = max{X1,...,Xn} and X(1) = min{X1,...,Xn}.
I need to show that T(X) is a minimal sufficient statistic for theta.

So this is how I've started:

Firstly I have to get the sampling distribution of T(X), f_T(t). (Where I write "less than" or "greater than" below I also mean "or equal to", but for neatness I'll leave the equality out.)
P[T(X) < u]
= P[max{X(n), -X(1)} < u]
As it's the max that's less than u, the other term must also be less than u, and as the events are independent this can be written as:
P[X(n) < u] * P[-X(1) < u]
= P[max{X1,...,Xn} < u] * P[min{X1,...,Xn} > -u]

From here I'm a bit unsure.
So from this position I'm guessing I need the pdfs of the order statistics, which can be derived from this formula:
f_X(i)(u) = n!/((i-1)!(n-i)!) * [F(u)]^(i-1) * [1-F(u)]^(n-i) * f(u)

So I have f_X(1)(u) = (n!/(n-1)!) * [1-F(u)]^(n-1) * f(u) = n*[1-F(u)]^(n-1)*f(u)
and f_X(n)(u) = (n!/(n-1)!) * [F(u)]^(n-1) * f(u) = n*[F(u)]^(n-1)*f(u).

In this case f(u) = 1/(2*theta) on (-theta, theta), from the uniform distribution, and so F(x) = 0 for x < -theta, F(x) = (x + theta)/(2*theta) for -theta < x < theta, and F(x) = 1 for x > theta. I can substitute these into the above to get expressions for f_X(1)(u) and f_X(n)(u).
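As a quick sanity check on those order-statistic CDFs (my own addition, not part of the exercise; theta = 2, n = 5, and u = 0.7 are arbitrary choices), a short simulation compares the empirical CDFs of X(n) and X(1) with F(u)^n and 1 - (1-F(u))^n:

```python
import numpy as np

# Sanity-check simulation (theta = 2, n = 5 chosen arbitrarily)
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000
x = rng.uniform(-theta, theta, size=(reps, n))
xmax, xmin = x.max(axis=1), x.min(axis=1)

def F(u):
    # CDF of Uniform(-theta, theta) on its support
    return (u + theta) / (2 * theta)

u = 0.7
emp_max = (xmax <= u).mean()          # empirical P[X(n) <= u]
emp_min = (xmin <= u).mean()          # empirical P[X(1) <= u]
print(emp_max, F(u) ** n)             # these two should agree closely
print(emp_min, 1 - (1 - F(u)) ** n)   # and these two
```

With 200,000 replications the empirical and theoretical values agree to two or three decimal places.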

Is this all right? Where do I proceed from here?

Thanks

2. ## Re: Minimal Sufficient Statistic

Hey Mick.

Can you show that the likelihood can be factorized to show sufficiency first?

Also, if you show that the MLE is the best estimator, then in combination with that result you have the proof.

3. ## Re: Minimal Sufficient Statistic

To be honest, I probably titled this thread badly, as it's not really the sufficiency I'm stuck on, more the sampling distribution of T(X)!

Once I've got the sampling distribution of T(X), I should be able to write down the likelihood and show that T(X) is minimal sufficient by the lemma: T(X) is minimal sufficient if and only if, for all x and y in the sample space, the likelihoods L(x; theta) and L(y; theta) are directly proportional as functions of theta exactly when T(x) = T(y).

So going back to the distribution, am I along the right lines?

Thanks.

4. ## Re: Minimal Sufficient Statistic

For this statistic, are you trying to derive a test statistic for theta, or have you designed a specific test statistic to get T(X)?

Have you tried deriving the sampling distribution from the order statistic of the maximum (look at P(|X| < a) instead of P(X < a))? Note that P(|X| < a) is the same as P(-a < X < a).
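To spell the hint out numerically (my own illustration; theta = 3 and n = 10 are arbitrary): since -X(1) is the largest of the negated values, max{X(n), -X(1)} is exactly the maximum of the absolute values |Xi|:

```python
import numpy as np

# Illustrate that max{X(n), -X(1)} equals max_i |X_i|
rng = np.random.default_rng(1)
theta = 3.0
x = rng.uniform(-theta, theta, size=10)
t_order = max(x.max(), -x.min())   # T(X) as defined in the thread
t_abs = np.abs(x).max()            # maximum of the absolute values
print(t_order, t_abs)              # identical
```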

5. ## Re: Minimal Sufficient Statistic

The way I went was:
P[T(X) < u]
= P[max{X(n), -X(1)} < u]
= P[max{X1, X2, ..., Xn} < u] * P[min{-X1, -X2, ..., -Xn} < u]
= P[X1 < u]*P[X2 < u]*...*P[Xn < u] * P[-X1 < u]*P[-X2 < u]*...*P[-Xn < u]
= (product over i of P[Xi < u]) * (product over i of P[-Xi < u])
= (product over i of the integral from -theta to theta of 1/(2*theta) dxi) * (product over i of the integral from -theta to theta of -1/(2*theta) dxi)
= (product over i of [xi/(2*theta)] evaluated from -theta to theta) * (product over i of [-xi/(2*theta)] evaluated from -theta to theta)
= (product over i of (theta/(2*theta) + theta/(2*theta))) * (product over i of (-theta/(2*theta) + theta/(2*theta)))
= (a product of 1's) * (a product of 0's)

Clearly I've gone wrong somewhere. This is the style of approach we have to take, I think. Can you see where or what I've done wrong?

6. ## Re: Minimal Sufficient Statistic

I now understand what you meant about using the absolute value probability! I've looked at this some more and got:

P[T < u] = P[max{X(n), -X(1)} < u]
= P[X(n) < u, -X(1) < u]
= P[-u < X(1) < X(n) < u]
= {P[-u < X1 < u]}^n

I derived P[-u < X < u] = P[X < u] - P[X < -u]
= (u/(2*theta) + theta/(2*theta)) - (-u/(2*theta) + theta/(2*theta)) = u/theta.

So raising to the power n gives u^n/theta^n.

So that's F_T(u), and so f_T(u) = n*u^(n-1)/theta^n for 0 < u < theta, and zero otherwise.

Does that sampling distribution look right? I may have made an elementary mistake somewhere but it looks ok I think!
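As a numerical cross-check of that CDF (my own addition; theta = 1, n = 4, and u = 0.6 are arbitrary), the empirical CDF of T from simulation can be compared with (u/theta)^n:

```python
import numpy as np

# Monte Carlo cross-check of F_T(u) = (u/theta)^n
rng = np.random.default_rng(2)
theta, n, reps = 1.0, 4, 200_000
x = rng.uniform(-theta, theta, size=(reps, n))
t = np.maximum(x.max(axis=1), -x.min(axis=1))   # T for each simulated sample

u = 0.6
emp = (t <= u).mean()
theory = (u / theta) ** n
print(emp, theory)   # close agreement supports the derived CDF
```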

So now I need the likelihood of this, which I'm not so confident about:

the product from i=1 to n of (n*ui^(n-1)/theta^n)
= n^n * (product of ui^(n-1)) / theta^(n^2)

I'm not sure this is right. And I'm not sure how this will show that T(X) is a minimal sufficient statistic, via the theorem that, for samples x and y, L(x; theta)/L(y; theta) will not depend on theta. I know how this theorem works, but I can't see where this specific T(X) comes in.
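One way to see where T(X) enters (a sketch of my own, using the likelihood of the original sample rather than of T): L(theta; x) = (2*theta)^(-n) when theta > max_i |xi| = T(x), and 0 otherwise, so L(theta; x) and L(theta; y) are proportional as functions of theta exactly when T(x) = T(y). A small numeric illustration, with made-up samples x, y, z:

```python
import numpy as np

def likelihood(theta, sample):
    # Uniform(-theta, theta) likelihood of a full sample:
    # (2*theta)^(-n) when every |x_i| < theta, and 0 otherwise.
    s = np.asarray(sample)
    t = np.abs(s).max()                 # this is T(x) = max{X(n), -X(1)}
    return (2 * theta) ** (-len(s)) if t < theta else 0.0

x = [0.5, -1.2, 0.3]    # T(x) = 1.2
y = [1.2, 0.1, -0.4]    # T(y) = 1.2  (same value of T as x)
z = [0.2, 0.1, -0.4]    # T(z) = 0.4  (different value of T)

for theta in (1.0, 1.3, 2.0):
    # x and y give equal likelihoods at every theta; z disagrees at theta = 1.0,
    # where the support condition holds for z but not for x or y
    print(theta, likelihood(theta, x), likelihood(theta, y), likelihood(theta, z))
```

The likelihood depends on the data only through T(x), via the support condition T(x) < theta, which is where this specific statistic comes in.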

Any help appreciated, thanks