
Math Help - Taking the partial derivative of a CDF?

  1. #1
    Newbie
    Joined
    Aug 2010
    Posts
    5

    Taking the partial derivative of a CDF?

    I know the derivative of a CDF is the PDF, but what if you want to take a partial derivative? For example, I have a standard normal difference variable

    Y = (Xi-Xj)/(sqrt(Var(Xi) + Var(Xj)))

    I would like to take the derivative with respect to Xi. I am thinking this just invokes the chain rule and would give:

    PDF(Y) * (1/sqrt(Var(Xi) + Var(Xj)))

    Is that correct?
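
    As a quick sanity check of the chain-rule claim, here is a minimal symbolic sketch (my own illustration, not part of the original question; the names x_i, x_j, v_i, v_j stand in for Xi, Xj and the two variances):

    import sympy as sp

    # Illustrative symbols: x_i, x_j are the quantities we differentiate with
    # respect to; v_i, v_j are the (fixed) variances.
    x_i, x_j = sp.symbols('x_i x_j', real=True)
    v_i, v_j = sp.symbols('v_i v_j', positive=True)

    y = (x_i - x_j) / sp.sqrt(v_i + v_j)
    cdf = (1 + sp.erf(y / sp.sqrt(2))) / 2          # standard normal CDF Phi(y)
    pdf = sp.exp(-y**2 / 2) / sp.sqrt(2 * sp.pi)    # standard normal PDF phi(y)

    # By the chain rule, d/dx_i Phi(y) should equal phi(y) / sqrt(v_i + v_j).
    print(sp.simplify(sp.diff(cdf, x_i) - pdf / sp.sqrt(v_i + v_j)))  # prints 0

    The zero output agrees with the expression above: the partial derivative of the CDF with respect to Xi is PDF(Y) * (1/sqrt(Var(Xi) + Var(Xj))), with Xj and the variances held fixed.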

  2. #2
    Grand Panjandrum
    Joined
    Nov 2005
    From
    someplace
    Posts
    14,972
    Thanks
    4
    Quote Originally Posted by daaaaave
    I know the derivative of a CDF is the PDF, but what if you want to take a partial derivative? For example, I have a standard normal difference variable

    Y = (Xi-Xj)/(sqrt(Var(Xi) + Var(Xj)))

    I would like to take the derivative with respect to Xi. I am thinking this just invokes the chain rule and would give:

    PDF(Y) * (1/sqrt(Var(Xi) + Var(Xj)))

    Is that correct?
    Perhaps if you post the original question, that might help.

    CB

  3. #3
    Newbie
    Joined
    Aug 2010
    Posts
    5
    It's not homework or anything; I'm just deriving something for my own use. I'm unsure whether this is the correct way to take the partial derivative of the CDF of a normal random variable.

    I'm doing maximum likelihood estimation where my parameters are the X's and the variances, if that helps.

  4. #4
    Grand Panjandrum
    Joined
    Nov 2005
    From
    someplace
    Posts
    14,972
    Thanks
    4
    Quote Originally Posted by daaaaave
    It's not homework or anything; I'm just deriving something for my own use. I'm unsure whether this is the correct way to take the partial derivative of the CDF of a normal random variable.

    I'm doing maximum likelihood estimation where my parameters are the X's and the variances, if that helps.
    The likelihood is a function of the data and the parameters of the problem, and maximisation is with respect to the parameters, treating the data as fixed.

    I do not see that formulation here. You are not distinguishing clearly between the parameters and data.

    CB

  5. #5
    Newbie
    Joined
    Aug 2010
    Posts
    5
    Ok, so let me try to explain fully what I'm attempting to do.

    I am assuming there are N normal random variables with parameters X (mean) and Var (variance). Let's say we take one observation from two of these random variables at a time and are told which RV was greater, but not their values. Then, I believe the likelihood function is:

    f(Q | X, Var) = the product over observations of P(A > B), where A is the variable whose observation was the greater of the two

    The data (Q) just tells us if we are using P(A > B) or P(B > A) in the likelihood function for that observation.

    Then, I take the log and try to maximize with respect to the parameters (X, Var) for each random variable (A, B, ...). Since P(A > B) = Phi((X_A - X_B)/sqrt(Var_A + Var_B)), i.e. a standard normal CDF evaluated at the standardized difference of means, I am taking the derivative of that.

    Does that make sense?
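
    For concreteness, here is a minimal sketch of that likelihood in code (my own illustration, not from the thread; the function name log_likelihood and the (i, j) data layout are invented for the example). Each observation is a pair (i, j) meaning "the draw from variable i exceeded the draw from variable j", and it contributes log Phi((X_i - X_j)/sqrt(Var_i + Var_j)) to the log-likelihood:

    import numpy as np
    from scipy.stats import norm

    def log_likelihood(means, variances, comparisons):
        """Log-likelihood of 'which draw was greater' outcomes.

        comparisons is a list of (i, j) pairs meaning the draw from
        variable i exceeded the draw from variable j (hypothetical layout).
        """
        total = 0.0
        for i, j in comparisons:
            z = (means[i] - means[j]) / np.sqrt(variances[i] + variances[j])
            total += norm.logcdf(z)          # log P(X_i > X_j)
        return total

    # Toy usage: three variables and a few observed comparisons.
    means = np.array([0.0, 0.5, 1.0])
    variances = np.array([1.0, 1.0, 1.0])
    comparisons = [(1, 0), (2, 0), (2, 1)]
    print(log_likelihood(means, variances, comparisons))

    Maximising this over the means and variances with a generic optimiser would then be the MLE step; as CB notes below, it has no unique maximiser without fixing a location and scale.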

  6. #6
    Grand Panjandrum
    Joined
    Nov 2005
    From
    someplace
    Posts
    14,972
    Thanks
    4
    Quote Originally Posted by daaaaave
    Ok, so let me try to explain fully what I'm attempting to do.

    I am assuming there are N normal random variables with parameters X (mean) and Var (variance). Let's say we take one observation from two of these random variables at a time and are told which RV was greater, but not their values. Then, I believe the likelihood function is:

    f(Q | X, Var) = the product over observations of P(A > B), where A is the variable whose observation was the greater of the two

    The data (Q) just tells us if we are using P(A > B) or P(B > A) in the likelihood function for that observation.

    Then, I take the log and try to maximize with respect to the parameters (X, Var) for each random variable (A, B, ...). Since P(A > B) = Phi((X_A - X_B)/sqrt(Var_A + Var_B)), i.e. a standard normal CDF evaluated at the standardized difference of means, I am taking the derivative of that.

    Does that make sense?
    No, not like that; it might make sense, but it is still difficult to follow. It would be best to find some way of writing out the likelihood explicitly.

    Also, there will be no unique maximum of the likelihood since, as far as I can see:

    The likelihood for (\bold{X},\bold{V}) is the same as that for (\bold{X}+c,\bold{V}) and for (\sqrt{\lambda}\,\bold{X}, \lambda\bold{V}), where \bold{X} and \bold{V} are the means and variances of the N presumably independent RVs, and c and \lambda > 0 are scalars.
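
    To see the invariance concretely (my own check, writing \mu_i, v_i as illustrative symbols for the individual means and variances, and assuming the pairwise likelihood described above): each factor has the form \Phi\left(\frac{\mu_i - \mu_j}{\sqrt{v_i + v_j}}\right), and

    \frac{(\mu_i + c) - (\mu_j + c)}{\sqrt{v_i + v_j}} = \frac{\mu_i - \mu_j}{\sqrt{v_i + v_j}} = \frac{\sqrt{\lambda}\,\mu_i - \sqrt{\lambda}\,\mu_j}{\sqrt{\lambda v_i + \lambda v_j}},

    so shifting every mean by c, or rescaling the means by \sqrt{\lambda} and the variances by \lambda, leaves every factor, and hence the whole likelihood, unchanged.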
