
Statistics: Consistent Estimators

  1. #1
    Senior Member

    Statistics: Consistent Estimators

    1) Theorem:
    An asymptotically unbiased estimator 'theta hat' for 'theta' is a consistent estimator of 'theta' IF
    lim_{n->inf} Var(theta hat) = 0

    Now my question is, if the limit is NOT zero, can we conclude that the estimator is NOT consistent? (i.e. is the theorem actually "if and only if", or is the theorem just one way?)



    2) [Problem statement missing; from the replies, it concerns two estimators of sigma^2 from a normal random sample: S^2 = sum (X_i - Xbar)^2/(n-1) and (X_1 - X_2)^2/2.]
    I'm OK with part (a), but I am badly stuck on part (b). The only theorem I have learned about consistency is the one above. Using the theorem, how can we prove consistency or inconsistency of each of the two estimators? I am having trouble computing and simplifying the variances...


    Thank you for your help!

  2. #2
    MHF Contributor matheagle
    It's funny, I just covered this in my lecture today.
    This is a lame theorem.
    Consistency, or more precisely weak consistency, is just convergence in probability.
    Strong consistency is almost sure convergence.
    (This is what I do for a living: study the difference between these two.
    For example, the St. Petersburg game.)
    The answer to your first question is NO.
    This lame theorem is just Chebyshev's inequality.
    So if the variance goes to zero, then theta hat will approach theta in the weak sense.
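
    To make this concrete, here is a minimal numpy sketch (my addition for illustration, not part of the original post), using the sample mean Xbar of normal data as theta hat: its variance sigma^2/n goes to zero, and the empirical tail probability P(|Xbar - mu| > eps) shrinks with n, staying under the Chebyshev bound Var(Xbar)/eps^2.

    Code:
    # Sketch (added for illustration): Chebyshev's inequality driving weak
    # consistency of the sample mean Xbar for mu, with N(mu, sigma^2) data.
    # Var(Xbar) = sigma^2/n -> 0, so P(|Xbar - mu| > eps) -> 0.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, eps, reps = 5.0, 2.0, 0.25, 5_000

    for n in [10, 100, 1_000]:
        xbars = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
        tail = np.mean(np.abs(xbars - mu) > eps)   # empirical P(|Xbar - mu| > eps)
        bound = (sigma**2 / n) / eps**2            # Chebyshev bound Var(Xbar)/eps^2
        print(f"n={n:5d}  tail ~ {tail:.4f}  Chebyshev bound = {bound:.4f}")

    For n = 10 the bound (6.4) is vacuous, but both the bound and the true tail probability head to zero as n grows, which is all weak consistency needs.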

    Now, in your two examples: the first one is on page 452 of Wackerly,
    which I went over today.
    Since the 4th moment is finite, S^2 converges in probability to sigma^2.
    (You can get a stronger result; here I'm just using your theorem.)

    The second one has NO limit.
    X_1 - X_2 is a single random variable; you can easily get its distribution.
    There's no sequence here,
    so it has no limit.
    MOST things do not have a limit.
    By independence, X_1 - X_2 is normal with mean 0 and variance 2 sigma^2,
    so (X_1 - X_2)^2 is a multiple of a chi-square.
    Dividing it by two just means we know its exact distribution: sigma^2 times a chi-square with one degree of freedom.
    It has a spread; it will not converge to anything.
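
    As a quick illustration (my own added sketch, not from the post): the estimator (X_1 - X_2)^2/2 ignores all but two observations, so its distribution is identical for every sample size n; it is unbiased for sigma^2, but for normal data its variance is the constant 2 sigma^4, which cannot go to zero.

    Code:
    # Sketch (added for illustration): (X_1 - X_2)^2 / 2 has the SAME distribution
    # for every n, since it uses only two observations. For N(mu, sigma^2) data it
    # is sigma^2 times a chi-square(1), so its mean is sigma^2 (unbiased) and its
    # variance is 2*sigma^4 -- a constant that never shrinks.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, reps = 5.0, 2.0, 200_000

    x1 = rng.normal(mu, sigma, reps)
    x2 = rng.normal(mu, sigma, reps)
    est = (x1 - x2) ** 2 / 2

    print("mean:", est.mean(), "  (sigma^2   =", sigma**2, ")")
    print("var: ", est.var(),  "  (2 sigma^4 =", 2 * sigma**4, ")")
    # No n appears anywhere above: a larger sample changes nothing.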

    ------------------------------------------------------------------------------------------

    Here you go:
    Convergence of random variables - Wikipedia
    I deal with convergence in law/distribution a little bit, for example via the central limit theorem.
    But mostly I look at convergence in probability and almost sure convergence.
    I hardly ever look at L_p convergence, i.e. convergence in mean.

    -----------------------------------------------------------------------------------

    You confused me a bit.
    NO, it is not IFF.
    BUT your question,
    "if the limit is NOT zero, can we conclude that the estimator is NOT consistent? (i.e. is the theorem actually 'if and only if', or is the theorem just one way?)"
    is confusing.
    The converse is false here,
    BUT the CONTRAPOSITIVE is always true:
    A implies B
    means Not B implies Not A.
    So yes, if the variance does not go to zero, it is not consistent.
    BUT that's what I showed:
    (X_1-X_2)^2/2 is a rv, it will have spread, hence its variance does NOT
    go to zero.

    So no and yes to your question.
    It is one way, but you mixed up converse and contrapositive.

  3. #3
    Senior Member
    1) An asymptotically unbiased estimator 'theta hat' for 'theta' is a consistent estimator of 'theta' IF
    lim_{n->inf} Var(theta hat) = 0

    The contrapositive is:
    If 'theta hat' is NOT consistent, then the limit is NOT equal to zero.

    The question I was asking is:
    If the limit is NOT equal to zero, is it true that 'theta hat' is NOT consistent?


    2) Var(aX + b) = a^2 Var(X).
    So the variance of the first estimator is [1/(n-1)^2] Var[sum of the (X_i - Xbar)^2 terms]. I am stuck right here. How can we calculate that variance? The terms are not even independent...

    For the variance of the second estimator, we have (1/4) Var[(X_1 - X_2)^2], which does not involve n, so the limit as n->inf is just that same constant. Is this estimator consistent or not?


    Thanks!

  4. #4
    MHF Contributor matheagle
    Yes to your 1): the second estimator is NOT consistent, since its variance does not go to zero.

    As for the sample variance, you CAN solve this via Chebyshev's, BUT that is silly.
    You would need the fourth moment (the second moment of your squared variable),
    which is finite in the normal case.
    That's why they are using the normal distribution.
    The sample variance is always consistent for the population variance.
    And that includes the biased estimator, where we divide by n and not n-1.

    It's so much easier to state that

    S^2 = (sum X_i^2 - n Xbar^2)/(n-1)

    (this is the shortcut formula, and Xbar is the average of the X_i's)

    = (sum X_i^2 / n)(n/(n-1)) - (n/(n-1)) Xbar^2.

    Now by the law of large numbers,
    sum X_i^2 / n goes to E(X_1^2) = mu^2 + sigma^2,
    Xbar goes to mu,
    so Xbar^2 goes to mu^2,
    and n/(n-1) goes to 1.

    Put that all together and we get
    mu^2 + sigma^2 - mu^2
    = sigma^2.

    AND note that normality is NOT needed:
    I just proved that S^2 is consistent
    as long as sigma is finite.
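
    A quick numerical check of this (an added sketch, not from the original post), deliberately using non-normal data to echo that last point: with Exponential(1) observations (so sigma^2 = 1), S^2 computed by the shortcut formula concentrates on sigma^2 as n grows.

    Code:
    # Sketch (added for illustration): S^2 -> sigma^2 in probability without any
    # normality assumption. Data are Exponential(scale=1), so sigma^2 = 1.
    # S^2 uses the shortcut formula (sum X_i^2 - n*Xbar^2)/(n-1).
    import numpy as np

    rng = np.random.default_rng(0)
    eps, reps = 0.1, 2_000

    for n in [10, 100, 1_000, 5_000]:
        x = rng.exponential(scale=1.0, size=(reps, n))
        xbar = x.mean(axis=1)
        s2 = ((x**2).sum(axis=1) - n * xbar**2) / (n - 1)  # shortcut formula
        tail = np.mean(np.abs(s2 - 1.0) > eps)  # empirical P(|S^2 - sigma^2| > eps)
        print(f"n={n:5d}  P(|S^2 - 1| > {eps}) ~ {tail:.4f}")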

  5. #5
    Senior Member
    1) I've seen the proof for the case of the theorem as stated.
    Let A = P(|theta hat - theta| > epsilon) and B = Var(theta hat)/epsilon^2.
    At the end of the proof we have 0 <= A <= B, and if Var(theta hat) -> 0 as n -> inf, then B -> 0, so by the squeeze theorem A -> 0, which proves convergence in probability (i.e. proves consistency).

    I tried to modify the proof for the converse, but failed. For the case that lim Var(theta hat) is not equal to zero, it SEEMS to me (looking at the above proof and modifying the last step) that the estimator can be either consistent or inconsistent, i.e. the theorem is inconclusive: A may tend to zero or it may not, so we can't say for sure.

    How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it is not a consistent estimator"? Are you sure this is a true statement?
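
    For what it's worth, here is a minimal numpy sketch (my addition, not from the thread; T_n is a standard textbook-style construction, not one of the problem's estimators) suggesting this suspicion is well-founded: with B_n ~ Bernoulli(1/n^2) independent of the data, T_n = Xbar + n*B_n - 1/n is exactly unbiased and consistent for mu, yet Var(T_n) = sigma^2/n + (1 - 1/n^2) tends to 1, not 0. The vanishing-variance condition is sufficient but not necessary.

    Code:
    # Sketch (added, not from the thread): a consistent, exactly unbiased
    # estimator whose variance does NOT go to zero.
    #   T_n = Xbar + n*B_n - 1/n,  B_n ~ Bernoulli(1/n^2) independent of the data.
    #   E[T_n] = mu,  Var(T_n) = sigma^2/n + (1 - 1/n^2) -> 1,
    #   but P(|T_n - mu| > eps) -> 0 (the contamination gets ever rarer).
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, eps, reps = 5.0, 2.0, 0.25, 200_000

    for n in [10, 100, 1_000]:
        xbar = rng.normal(mu, sigma / np.sqrt(n), reps)  # exact law of Xbar (normal data)
        b = rng.random(reps) < 1.0 / n**2                # rare contaminating event
        t = xbar + n * b - 1.0 / n
        var_exact = sigma**2 / n + (1 - 1 / n**2)        # -> 1, never 0
        tail = np.mean(np.abs(t - mu) > eps)             # empirical P(|T_n - mu| > eps)
        print(f"n={n:5d}  Var(T_n) = {var_exact:.3f}   tail ~ {tail:.5f}")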

    Thanks!

  6. #6
    MHF Contributor matheagle
    Look, you need to understand what a rv is.
    If the variance does NOT go to zero, it is not converging to a CONSTANT.
    Theta is a constant.
    It is unknown, but it is NOT a rv.
    Hence, for something, namely theta hat, to converge to it,
    the distribution of theta hat must become degenerate.
    That means theta hat cannot take on more than one value in the limit.
    (Taking on only one value, say P(X=23)=1, means Var(X)=0.)
    AND (X_1-X_2)^2/2 is NOT a sequence; it is a single random variable.
    It cannot converge to diddly.
    There's no n in it, as in {a_n}.
    So if the variance does not go to zero, the 'sequence' cannot be converging to anything, diddly included.
    BUT once again, there is no limit here; there are no n's.

