Hey Mick.
Can you first show that the likelihood factorizes, which gives sufficiency?
Then, if you also show that the MLE is the best estimator and combine that with the factorization result, you have the proof.
So I have a random sample X_{1}, X_{2}, ..., X_{n} from a Uniform(-theta, theta) distribution.
T(X)=max{X_{(n)}, -X_{(1)}} where X_{(n)}=max{X_{1},...,X_{n}} and X_{(1)}=min{X_{1},...,X_{n}}.
I need to show that T(X) is a minimal sufficient statistic.
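(As a sanity check on the definition: T(X) is just the largest absolute value in the sample, which I verified with some made-up numbers in Python.)

```python
# T(X) = max{X_(n), -X_(1)} equals max_i |X_i|: check with made-up numbers.
x = [0.4, -1.7, 0.9, 1.2]
t_def = max(max(x), -min(x))       # the definition: max{X_(n), -X_(1)}
t_abs = max(abs(v) for v in x)     # the largest absolute value in the sample
print(t_def, t_abs)                # both 1.7
```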
So this is how I've started:
Firstly I have to get the sampling distribution of T(X), f_{T}(t). (Wherever I write < or >, I also mean "or equal to"; I'll leave the equality out for neatness.)
P[T(X)<u]
=P[max{X_{(n)},-X_{(1)}}<u]
As it's the max that is less than u, the other term must also be less than u, and as (I think) the events are independent this can be written as:
P[X_{(n)}<u]P[-X_{(1)}<u]
=P[max{X_{1},...,X_{n}}<u]P[min{X_{1},...,X_{n}}>-u]
From here I'm a bit unsure.
So from this position I'm guessing I need the pdf's of the order statistics, which can be derived from this formula:
f_{X(i)}(u) = (n!)/((i-1)!(n-i)!) * [F(u)]^{i-1} * [1-F(u)]^{n-i} * f(u)
So I have f_{X(1)}(u) = n*[1-F(u)]^{n-1}*f(u)
and f_{X(n)}(u) = n*[F(u)]^{n-1}*f(u).
In this case f(u) = 1/(2theta) from the uniform distribution, and so F(u) = 0 for u < -theta, (u+theta)/(2theta) for -theta < u < theta, and 1 for u > theta. I can substitute these into the above to get expressions for f_{X(1)}(u) and f_{X(n)}(u).
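To convince myself the substitution is right, here's a quick Monte Carlo check of the X_{(n)} formula via its CDF, P[X_{(n)} <= u] = [F(u)]^{n} (the values of theta, n and u below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000
u = 1.0

# Simulate the sample maximum X_(n) many times.
samples = rng.uniform(-theta, theta, size=(reps, n))
x_max = samples.max(axis=1)

# Theoretical CDF of X_(n): [F(u)]^n with F(u) = (u + theta)/(2*theta).
empirical = np.mean(x_max <= u)
theoretical = ((u + theta) / (2 * theta)) ** n
print(empirical, theoretical)   # should agree to about 2 decimal places
```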
Is this all right? Where do I proceed from here?
Thanks
To be honest, I probably titled this thread badly, as it's not really the sufficiency I'm stuck on so much as the sampling distribution of T(X)!
Once I've got the sampling distribution of T(X), I should be able to write down the likelihood and show that T(X) is minimal sufficient via the lemma: T(X) is minimal sufficient if and only if, for all x and y in the sample space, the likelihoods L(x; theta) and L(y; theta) are directly proportional (as functions of theta) exactly when T(x) = T(y).
So going back to the distribution, am I along the right lines?
Thanks.
For this statistic, are you trying to derive a test statistic for theta, or have you designed a specific test statistic to arrive at T(x)?
Have you tried deriving the sampling distribution from the order statistic of the maximum, looking at P(|X| < a) instead of P(X < a)? Note that P(|X| < a) is the same as P(-a < X < a).
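A quick simulation makes the hint concrete (theta and a below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, reps = 3.0, 200_000
a = 1.2

x = rng.uniform(-theta, theta, size=reps)
# P(|X| < a) = P(-a < X < a) = F(a) - F(-a) = a/theta for 0 < a < theta.
print(np.mean(np.abs(x) < a), a / theta)   # should agree to about 2 decimal places
```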
The way I went was:
P[T(X)<u]
=P[max{X_{(n)}, -X_{(1)}}<u]
=P[max{X_{1}, X_{2},....,X_{n}}<u] P[min{-X_{1}, -X_{2},...,-X_{n}}<u]
=P[X_{1}<u]P[X_{2}<u]...P[X_{n}<u] P[-X_{1}<u]P[-X_{2}<u]...P[-X_{n}<u]
= product to n P[X_{i}<u] multiplied by the product to n P[-X_{i}<u]
=product to n (integral from -theta to theta:1/2theta dx_{i}) multiplied by the product to n (integral from -theta to theta: -1/2theta dx_{i})
=product to n [x_{i}/2theta]^{theta}_{-theta} multiplied by the product to n [-x_{i}/2theta]^{theta}_{-theta}
=product to n (theta/2theta + theta/2theta) multiplied by product to n (-theta/2theta +theta/2theta)
= product to n of 1 multiplied by product to n of zero
Clearly I've gone wrong somewhere. This is the style of approach we have to use, I think. Can you see where or what I've done wrong?
I now understand what you meant about using the absolute value probability! I've looked at this some more and got:
P[T<u]=P[max{X_{(n)}, -X_{(1)}}<u]
= P[X_{(n)}<u, -X_{(1)}<u]
= P[-u<X_{(1)}<X_{(n)}<u]
= {P[-u<X_{1}<u]}^{n} (by independence of the X_{i})
I derived P[-u<X<u]= P[X<u]-P[X<-u]
= (u/2theta + theta/2theta) - (-u/2theta + theta/2theta) = u/theta.
So to the power of n gives u^{n}/theta^{n}
So that's F(u), and so f(u)= n*u^{n-1}/theta^{n} for 0<u<theta, and zero otherwise.
Does that sampling distribution look right? I may have made an elementary mistake somewhere but it looks ok I think!
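As a sanity check, I simulated T directly and compared against F_{T}(u) = (u/theta)^{n} (theta, n and u below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 4, 200_000
u = 1.5

samples = rng.uniform(-theta, theta, size=(reps, n))
t = np.abs(samples).max(axis=1)     # T(X) = max{X_(n), -X_(1)} = max |X_i|

empirical = np.mean(t <= u)
theoretical = (u / theta) ** n      # F_T(u) = (u/theta)^n on (0, theta)
print(empirical, theoretical)       # should agree to about 2 decimal places
```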
So now I need the likelihood of this, which I'm not so confident about:
the product from i=1 to n of (n*u_{i}^{n-1}/theta^{n})
= n^{n} * product(u_{i}^{n-1}) / theta^{n^2}
I'm not sure this is right. And I'm not sure how this will show that T(X) is a minimal sufficient statistic via the theorem that, for samples x and y, L(x; theta)/L(y; theta) does not depend on theta exactly when T(x) = T(y). I know how this theorem works, but I can't see where this specific T(X) comes in.
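Numerically, I can at least see the mechanism. If the joint likelihood of the sample is (2theta)^{-n} when theta >= max|x_{i}| and 0 otherwise (my assumption for the support), then two different samples with the same T should give a likelihood ratio free of theta. A rough sketch:

```python
import numpy as np

def likelihood(theta, x):
    """Joint Uniform(-theta, theta) likelihood: (2*theta)^(-n) if all |x_i| <= theta, else 0."""
    x = np.asarray(x)
    if np.abs(x).max() > theta:
        return 0.0
    return (2 * theta) ** (-len(x))

# Two different samples with the same T = max|x_i| = 1.1 ...
x = [0.3, -1.1, 0.7]
y = [-1.1, 0.2, 0.9]

# ... give a likelihood ratio that is the same for every theta > 1.1.
for theta in (1.2, 2.0, 5.0):
    print(likelihood(theta, x) / likelihood(theta, y))   # 1.0 each time
```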
Any help appreciated, thanks