
Math Help - Parameter Estimation Proof

  1. #1
    Newbie
    Joined
    Jul 2009
    Posts
    3

    Question Parameter Estimation Proof

    Hi there

    I'm trying to do a proof from a past paper at uni, but I'm not sure how to go about it. The problem is below:

    Prove, for any smooth probability density function f(x|\theta), that:

    \hat{\theta} \xrightarrow{D} N\left({\theta}_{0};\ \frac{1}{nI({\theta}_{0})}\right)

    as n \to \infty,

    where {\theta}_{0} is the true value of the parameter \theta. (The \xrightarrow{D} denotes convergence in distribution.)
    Hint: You may use the fact that the expectation of the score function is zero.

    It's very similar to a proof from our textbook and I sort of understand the logic, but I'm not sure how to set it out for a normal distribution. I can prove that under smoothness the MLE \hat{\theta} is consistent, which seems quite similar, except that consistency is convergence in probability (\hat{\theta} \xrightarrow{P} {\theta}_{0}) and I can't quite figure out how the two are related.

    I think I have to take the MLE of the normal and use the weak law of large numbers to get its expected value and then do some manipulation, but I can't figure out where the Fisher information comes into play.
    I'm pretty hopeless at stats. Help would be much appreciated. Sorry if my post seems confusing; I'm quite new here and will be happy to clarify anything.

  2. #2
    Member
    Joined
    May 2011
    From
    Sacramento, CA
    Posts
    165
    Put [tex]...[/tex] around your LaTeX for it to be rendered by MHF.

  3. #3
    Senior Member
    Joined
    Oct 2009
    Posts
    340
    Quote Originally Posted by Shadowkin View Post
    Ugh, such a sloppily worded question. The fact that the MLE \hat{\theta} is consistent, which you can prove to yourself, makes it obvious that what is asserted in the question statement is FALSE: \hat{\theta} converges in distribution to a POINT MASS at \theta_0.

    What they mean, of course, is that \sqrt{n}\,(\hat{\theta} - \theta_0) \xrightarrow{D} N(0,\ I^{-1}(\theta_0)). The idea is to first show that the score function evaluated at \theta_0, suitably normalized, converges by the CLT to N(0,\ I(\theta_0)), and then do a Taylor expansion of the score function about \theta_0. This of course completely ignores how you deal with the remainder term of the Taylor expansion, which is the hard part of the problem, but hey, who cares about trifling things like that?
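    Spelled out (a sketch only, with the remainder term swept under the rug), writing \ell_n(\theta) = \sum_{i=1}^n \log f(x_i|\theta) for the log-likelihood:

    ```latex
    % The MLE zeroes the score; expand the score about theta_0:
    0 = \ell_n'(\hat{\theta}) \approx \ell_n'(\theta_0) + (\hat{\theta} - \theta_0)\,\ell_n''(\theta_0)
    \quad\Longrightarrow\quad
    \sqrt{n}\,(\hat{\theta} - \theta_0) \approx
      \frac{n^{-1/2}\,\ell_n'(\theta_0)}{-\,n^{-1}\,\ell_n''(\theta_0)}.
    % Numerator: each score term has mean 0 (the hint) and variance I(theta_0),
    % so the CLT gives convergence to N(0, I(theta_0)).
    % Denominator: the WLLN gives -n^{-1} \ell_n''(theta_0) -> I(theta_0) in probability.
    % Slutsky's theorem then yields N(0, I(theta_0)/I(theta_0)^2) = N(0, I^{-1}(theta_0)).
    ```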

    I'm like 90% sure you need a lot more than smoothness to get this result to go through, if you want to do it rigorously. You need a lot of machinery to deal with the remainder term of the Taylor expansion. Even Casella and Berger just hand-wave their way through the remainder term (though they at least mention the need for a lot of regularity conditions).
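    If you want to convince yourself numerically, here's a quick sanity check (not part of the proof; the model, sample size, and seed are my own choices): for X_i \sim N(\theta, 1) the MLE is the sample mean and I(\theta) = 1, so \sqrt{n}\,(\hat{\theta} - \theta_0) should look like N(0, 1).

    ```python
    # Monte Carlo check of MLE asymptotic normality for the N(theta, 1) mean.
    # Fisher information I(theta) = 1 here, so sqrt(n)*(theta_hat - theta_0)
    # should have mean near 0 and variance near 1/I(theta_0) = 1.
    import numpy as np

    rng = np.random.default_rng(0)
    theta0, n, reps = 2.0, 500, 5000

    # One MLE per replication: the sample mean of n draws.
    mles = rng.normal(theta0, 1.0, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (mles - theta0)  # standardized estimation error

    print(round(z.mean(), 3), round(z.var(), 3))  # near 0 and near 1
    ```
    
    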

  4. #4
    Newbie
    Joined
    Jul 2009
    Posts
    3
    Thanks for all the help, guys; you all really pushed me toward thinking in the right direction. It was quite a tricky 12-mark proof, i.e. 12% of the paper.
