
Trying to translate MLE formula into words

  1. #1
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255

    trying to translate MLE formula into words

What is the right narrative (i.e. in words, not in symbols) of this definition of a maximum likelihood estimator (Penzer, LSE):

L_Y(\hat{\theta}; y) = \sup_{\theta\in\Theta} L_Y(\theta; y)

    Let me try.

We have a sample Y = (Y_1, \dots, Y_n). Its distribution is parametrised by a parameter \theta = (\theta_1, \dots, \theta_k), which can take values in a parameter space \Theta.

Then we have the set of all possible likelihood values for that sample, one for each value of \theta in \Theta. The maximum likelihood estimator (MLE) \hat{\theta} is the value of \theta at which the least upper bound of this set is attained.

Is that right? I want to check that I understand the notation fully.

Also, is this the same as another definition of the MLE (Casella and Berger): \hat{\theta} is a parameter value at which the likelihood function attains its maximum as a function of \theta? Why is there a clear-cut 'maximum' here, while the definition above uses a 'supremum'? Does that mean there is a possibility that this value is not part of the set of all likelihood values?
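To make this concrete, here is my own small worked example (not one from Penzer's notes, just an illustration of the definition): suppose Y_1, \dots, Y_n are i.i.d. Exponential(\theta) with \Theta = (0, \infty). Then

    L_Y(\theta; y) = \prod_{i=1}^n \theta e^{-\theta y_i} = \theta^n e^{-\theta \sum y_i},

so \log L_Y(\theta; y) = n\log\theta - \theta\sum y_i, and setting the derivative n/\theta - \sum y_i to zero gives \hat{\theta} = n/\sum y_i = 1/\bar{y} (the second derivative -n/\theta^2 is negative, so this is a maximum). In this case the supremum over \Theta is attained at \hat{\theta}, so L_Y(\hat{\theta}; y) = \sup_{\theta\in\Theta} L_Y(\theta; y) holds with the sup being an honest maximum.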

  2. #2
    Grand Panjandrum
    Joined
    Nov 2005
    From
    someplace
    Posts
    14,972
    Thanks
    4
Quote Originally Posted by Volga
Also, is this the same as another definition of the MLE (Casella and Berger): \hat{\theta} is a parameter value at which the likelihood function attains its maximum as a function of \theta? Why is there a clear-cut 'maximum' here, while the definition above uses a 'supremum'? Does that mean there is a possibility that this value is not part of the set of all likelihood values?
If the likelihood does not attain its maximum on the space of allowable parameter values, there is no maximum likelihood estimator.

    CB
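A standard example of the situation CB describes (not one given in the thread, but consistent with it): take Y_1, \dots, Y_n i.i.d. Bernoulli(p) with the open parameter space \Theta = (0,1), and suppose every observation equals 1. Then L(p; y) = p^n is strictly increasing on (0,1), so \sup_{p\in(0,1)} p^n = 1, but no p in \Theta achieves it; the supremum exists while the maximum, and hence the MLE, does not.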

  3. #3
    Senior Member
    Joined
    Nov 2010
    From
    Hong Kong
    Posts
    255
Makes perfect sense! Thank you, CB!

  4. #4
    Senior Member
    Joined
    Oct 2009
    Posts
    340
Yeah, I would stick with Casella (who, incidentally, I am taking a course from right now) and Berger on that. Existence of MLEs is one reason why I prefer to work with closed sample and parameter spaces.
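To spell out why closed spaces help (continuing the Bernoulli sketch above, which is an illustration rather than anything from the course): with the same all-ones sample but \Theta = [0,1], the likelihood p^n attains its maximum at the boundary point p = 1, so \hat{p} = 1 exists. Closing the parameter space turns the unattained supremum into an attained maximum.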

