Implicit vs. partial

  1. #1
    Mathstud28 (MHF Contributor)

    Implicit vs. partial

    Ok, I know the answer is probably no, and I feel stupid asking it, but say we have an equation (I can't say function for reasons that will become obvious), let's say

    f(x,y)=g(x,y)

    Now let's say I differentiate it implicitly and solve for y'; that gives the derivative at a point.

    Now bear with me if this is really bad, but could we do this:

    Let z=f(x,y)-g(x,y)

    Then will differentiating implicitly give the same answer as...

    \frac{\frac{\partial z}{\partial x}}{\frac{\partial z}{\partial y}}?

    Maybe I am just really lucky, but after pondering this and thinking about it analytically I tried a couple of problems and it seems to pan out. Still, I do not think it is true; otherwise my book would have it, or I would have heard of it.


    Can someone please clarify?

    Mathstud.

    EDIT:

    The more I think about it, the more I don't think so.

    The reason is the difference in the chain rule

    \frac{\frac{dy}{dt}}{\frac{dx}{dt}}=\frac{dy}{dx}

    But I am pretty sure

    \frac{\frac{\partial z}{\partial x}}{\frac{\partial z}{\partial y}}\ne\frac{\partial y}{\partial x}

    Is that right?

    EDIT EDIT:

    I have done some more thinking and a little experimentation and I think that I was right

    but I think that it is actually

    y'=-\frac{\frac{\partial z}{\partial x}}{\frac{\partial z}{\partial y}}
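
    A quick way to sanity-check this conjecture is symbolically. The following is only an added illustration (not part of the original post), assuming SymPy is available; the test equation xy+\sin y=x^2 and every name in it are arbitrary choices:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')(x)

    f = x*y + sp.sin(y)     # left-hand side  f(x, y)
    g = x**2                # right-hand side g(x, y)

    # Route 1: implicit differentiation of f - g = 0, solved for y'.
    yprime_implicit = sp.solve(sp.Eq(sp.diff(f - g, x), 0), sp.diff(y, x))[0]

    # Route 2: z = f - g with x, y treated as independent symbols,
    # then y' = -z_x / z_y, evaluated back at y = y(x).
    X, Y = sp.symbols('X Y')
    z = X*Y + sp.sin(Y) - X**2
    yprime_partials = (-sp.diff(z, X) / sp.diff(z, Y)).subs({X: x, Y: y})

    print(sp.simplify(yprime_implicit - yprime_partials))   # prints 0

    Both routes give y' = (2x - y)/(x + \cos y) for this test equation, consistent with the formula in the second edit above.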

  2. #2
    Aryth (Super Member)
    According to the Implicit Function Theorem:

    \frac{dy}{dx} = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = -\frac{F_x}{F_y}

    So, technically:

    -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = -\frac{F_x}{F_y}

    And therefore:

    \frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = \frac{F_x}{F_y}

    Finally:

    \frac{\frac{\partial F}{\partial y}}{\frac{\partial F}{\partial x}} = \frac{F_y}{F_x}

    So you were right in saying that:

    \frac{\frac{\partial z}{\partial x}}{\frac{\partial z}{\partial y}}\ne\frac{\partial y}{\partial x}

    As a matter of fact, the best way to do this would be to use the chain rule.

  3. #3
    ThePerfectHacker (Global Moderator)
    I am not exactly sure what you are asking, but as Aryth said it seems to use the implicit function theorem. In its most basic form it says the following. Suppose F:\mathbb{R}^2\to \mathbb{R} (this notation means that F is a function of two variables) is a \mathcal{C}^1 function (ignore what this means - it just says the function is well-enough behaved, e.g. differentiable). Say that F(a,b) = 0. The implicit function theorem says that if \partial_y F(a,b) \not = 0 then the equation F(x,y) = 0 can be solved uniquely (for y) in a "neighborhood" of (a,b). The term "in a neighborhood" means that in some small disk around (a,b) we can solve this equation uniquely for y (in terms of x).

    For example, consider the circle x^2+y^2 - 1= 0. Where can we solve for y uniquely (in a neighborhood)? Define the function F(x,y) = x^2+y^2 - 1. The point (1,0) lies on this circle because F(1,0) = 1^2 + 0^2 - 1 = 0. If we try to solve the equation within the small disk \sqrt{(x-1)^2+y^2} < \epsilon we see that it has two solutions, one in the upper half and one in the lower half (see the red circle) - in fact those solutions are y = \pm \sqrt{ 1-x^2}. And this happens no matter how small we make the disk. Therefore, the equation F(x,y) = 0 cannot be solved uniquely for y near (1,0). Let us see what happens when we use the implicit function theorem. The theorem says that if \partial_y F(1,0) \not = 0 then we can solve the equation uniquely. Since we cannot solve this equation uniquely, it must mean that \partial_y F(1,0) = 0. Check this: \partial_y F(x,y) = 2y and so \partial_y F(1,0) = 0. However, if F(a,b) = 0 and b\not = 0, i.e. we stay away from the points where the graph crosses the x-axis, then \partial_y F(a,b) = 2b \not = 0 and therefore we can solve this equation uniquely. As in the case of the blue circle, the solution is y=\sqrt{1-x^2} (or y=-\sqrt{1-x^2} if b<0).

    Let us say that F(x,y) is a function which satisfies the conditions of the implicit function theorem as in the first paragraph. Then in a neighborhood of (a,b) (where (a,b) is a solution to F(x,y)=0) we can solve for y uniquely in terms of x. This means we can define a new function y = g(x) in this neighborhood. Remember that a function is a set of pairs such that for each first coordinate there is a unique matching second coordinate. Therefore, if we let x be the first coordinate, then by the theorem there is a unique matching second coordinate y so that F(x,y) = 0. And this defines a function y=g(x).

    But the implicit function theorem does not stop here; it says more: it says that g(x) is itself \mathcal{C}^1 (differentiable, with some more properties). Now, since y=g(x) solves F(x,y) = 0, we have F(x,g(x)) = 0. The functions F and g are differentiable, and so by the chain rule for multivariable functions (differentiating both sides of F(x,g(x)) = 0) we get \partial_x F + \partial_y F \cdot g'(x) = 0, which means g'(x) = - \frac{\partial_x F}{\partial_y F} (the partials evaluated at (x,y)). And that gives you a formula.

    If you got that then great. There is just one more point to be addressed; if it confuses you, just ignore it. The questionable step is dividing by \partial_y F. How do we know it is non-zero? This is where we use two facts. The first is that \partial_y F(a,b) \not = 0. The second is that F(x,y) is \mathcal{C}^1. The first fact alone is not enough to say that \partial_y F\not =0 for all points around (a,b), because it might be non-zero at (a,b) and yet be zero at some points around (a,b). This is where we need the second fact. The meaning of \mathcal{C}^1 is that the function is differentiable and the derivative is continuous. Therefore \partial_y F is a continuous function. Since \partial_y F(a,b)\not = 0, there is a small enough neighborhood of (a,b) such that \partial_y F(x,y)\not = 0 for all points (x,y) in that neighborhood, by the definition of continuity.
    [Attached figure: Implicit vs. partial-conformal.jpg - the red and blue circles referred to above.]
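
    A numerical sanity check of the circle example (an added illustration, not from the post itself; the point (0.6, 0.8) is just an arbitrary choice on the upper half of the circle) confirms that -F_x/F_y matches the slope of the explicit branch, and that F_y vanishes at (1,0) where unique solvability fails:

    import math

    def F_x(x, y): return 2*x      # partial of F(x, y) = x**2 + y**2 - 1
    def F_y(x, y): return 2*y

    a = 0.6
    b = math.sqrt(1 - a**2)        # (a, b) on the circle with b != 0

    # Implicit function theorem formula: g'(a) = -F_x/F_y evaluated at (a, b).
    slope_formula = -F_x(a, b) / F_y(a, b)

    # Direct numerical derivative of the explicit branch g(x) = sqrt(1 - x**2).
    h = 1e-6
    slope_numeric = (math.sqrt(1 - (a + h)**2) - math.sqrt(1 - (a - h)**2)) / (2*h)

    print(slope_formula, slope_numeric)   # both approximately -0.75

    # At (1, 0) the hypothesis fails: F_y(1, 0) = 0, matching the failure of
    # unique solvability for y in any small disk around (1, 0).
    print(F_y(1, 0))                      # 0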

  4. #4
    galactus (Eater of Worlds)
    I think I know what you're saying, and yes, you can differentiate implicitly that way. I have used it before. As a matter of fact, I wrote a program for implicit diff using that method.

    An easy-schmeasy example to be sure.

    Say you want to implicitly differentiate x^{3}+y^{3}=1

    Writing F(x,y)=x^{3}+y^{3}-1, the partial derivative with respect to x is \frac{\partial F}{\partial x}=3x^{2},

    and with respect to y it is \frac{\partial F}{\partial y}=3y^{2}.

    So, dividing and putting in the minus sign, we get y'=\frac{-x^{2}}{y^{2}}

    Is that what you mean? I think it is. It always made sense to me to do it that way.
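
    A minimal sketch of the kind of routine described above (only an assumed reconstruction using SymPy, not galactus's actual program) could be:

    import sympy as sp

    def implicit_derivative(F, x, y):
        """dy/dx along the curve F(x, y) = 0, valid where diff(F, y) != 0."""
        return sp.simplify(-sp.diff(F, x) / sp.diff(F, y))

    x, y = sp.symbols('x y')

    # The example from this post: x**3 + y**3 = 1, i.e. F = x**3 + y**3 - 1.
    print(implicit_derivative(x**3 + y**3 - 1, x, y))   # -x**2/y**2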

  5. #5
    Mathstud28 (MHF Contributor)
    Quote Originally Posted by ThePerfectHacker: [the explanation of the implicit function theorem in post #3, quoted in full]
    That made perfect sense actually! Thank you very much.

    So, for the last paragraph: since

    F(x,y) is \mathcal{C}^{1} (continuously differentiable), F_y is continuous, which implies that

    \lim_{(x,y)\to(a,b)}F_y(x,y)=F_y(a,b) along all paths, so there exists a region containing (a,b) in which F_y\ne{0}.

    We know this because

    \lim_{(x,y)\to(a,b)}F_y(x,y) describes the behaviour of F_y in the area immediately surrounding the point (a,b), or in other words in the aforementioned region.

    And since as was said earlier

    \lim_{(x,y)\to(a,b)}F_y(x,y)=F_y(a,b) (due to its continuity)

    and

    F_y(a,b)\ne{0}

    This implies that

    \lim_{(x,y)\to(a,b)}F_y(x,y)\overbrace{\ne}^{\text{must}}0


    Which gives us the guarantee that dividing by F_y will not produce a division by zero.

    Is that right?

  6. #6
    Mathstud28 (MHF Contributor)
    Quote Originally Posted by galactus: [the x^{3}+y^{3}=1 example in post #4, quoted in full]
    I was asking about both, actually: the analytic reason behind it and the actual application.

    Thanks Galactus!

  7. #7
    galactus (Eater of Worlds)
    I think PH gave you the in-depth explanation of why it works.

  8. #8
    ThePerfectHacker (Global Moderator)
    Quote Originally Posted by Mathstud28: Is that right?
    Here is what I was thinking when I wrote it. Since \partial_y F is continuous at (a,b), \lim_{(x,y)\to (a,b)}\partial_y F(x,y) = \partial_y F(a,b). This means that for any \epsilon > 0 there exists \delta > 0 such that \sqrt{(x-a)^2+(y-b)^2} <\delta \implies |\partial_y F(x,y) - \partial_y F(a,b) | < \epsilon. Rewrite |\partial_y F(x,y) - \partial_y F(a,b) | < \epsilon as  - \epsilon < \partial_y F(x,y) - \partial_y F(a,b) < \epsilon, and so \partial_y F(a,b) - \epsilon < \partial_y F(x,y) < \partial_y F(a,b) + \epsilon for all \sqrt{(x-a)^2+(y-b)^2} < \delta.

    Now \partial_y F(a,b) \not = 0, so either \partial_y F(a,b) > 0 or \partial_y F(a,b) < 0. If \partial_y F(a,b) > 0, choose \epsilon small enough that \partial_y F(a,b) - \epsilon > 0. Then \partial_y F(x,y) > 0 (because \partial_y F(a,b) - \epsilon < \partial_y F(x,y)) for \sqrt{(x-a)^2+(y-b)^2}< \delta (and this is a neighborhood around (a,b)), thus \partial_y F(x,y) \not = 0. If instead \partial_y F(a,b) < 0, choose \epsilon > 0 small enough that \partial_y F(a,b) + \epsilon < 0. It then follows from \partial_y F(x,y) < \partial_y F(a,b) + \epsilon that \partial_y F(x,y) < 0 in this neighborhood, and so \partial_y F(x,y) \not = 0.
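
    To make the epsilon-delta argument concrete (this example is an added illustration, not part of the post): for F(x,y)=x^2+y^2-1 at (a,b)=(0.6,0.8) we have \partial_y F(a,b)=1.6>0; taking \epsilon=0.8, any \delta<0.4 works, since |2y-1.6|=2|y-0.8|<2\delta<\epsilon forces \partial_y F(x,y)>0.8 throughout the \delta-disk. A brute-force check in Python:

    import math, random

    a, b = 0.6, 0.8
    delta = 0.3                       # any delta < epsilon/2 = 0.4 works here

    def F_y(x, y):
        return 2*y                    # partial_y of F(x, y) = x**2 + y**2 - 1

    # Sample points in the open disk of radius delta around (a, b) and verify
    # that F_y stays strictly positive there, so dividing by it is safe.
    for _ in range(10_000):
        r = delta * math.sqrt(random.random())
        t = random.uniform(0.0, 2.0 * math.pi)
        assert F_y(a + r * math.cos(t), b + r * math.sin(t)) > 0

    print("partial_y F is nonzero throughout the delta-disk around (a, b)")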

  9. #9
    Mathstud28 (MHF Contributor)
    Quote Originally Posted by ThePerfectHacker: [the epsilon-delta argument in post #8, quoted in full]
    Ok got it! Thanks.
