
Math Help - Proving a theorem: Changing the Order of Differentiation (large question)

  1. #1 Runty (Member)

    This question is right out of Taylor & Mann Advanced Calculus, Third Edition, and is a REALLY tricky one. Due to how context-heavy this is, I can't write it word-for-word.

    We're to prove the following theorem:
    Let f(x,y) and its first partial derivatives f_1, f_2 be defined in a neighborhood of the point (a,b), and suppose that f_1 and f_2 are differentiable at that point. Then f_{12}(a,b)=f_{21}(a,b).

    The proof of this theorem uses part of another theorem's proof to start things off, before getting to the parts that I can't figure out.

    Let h be a number different from zero such that the point (a+h,b+h) is inside a square having its center at (a,b). We then consider the following expression:
    D=f(a+h,b+h)-f(a+h,b)-f(a,b+h)+f(a,b)
    If we introduce the function
    \phi (x)=f(x,b+h)-f(x,b),
    we can express D in the form
    D=\phi (a+h)-\phi (a) (*)
    Now \phi has the derivative
    \phi '(x)=f_1(x,b+h)-f_1(x,b)
    Hence \phi is continuous, and we may apply the mean-value theorem for derivatives to (*), obtaining the following:
    D=h\phi '(a+\theta_1 h)=h(f_1(a+\theta_1 h,b+h)-f_1(a+\theta_1 h,b)), where 0<\theta_1 <1
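    (Spelling out the mean-value step, in case it helps: \phi is differentiable on the closed interval with endpoints a and a+h, so
    \phi (a+h)-\phi (a)=h\phi '(\xi ) for some \xi strictly between a and a+h,
    and writing \xi =a+\theta_1 h with 0<\theta_1 <1 gives exactly the displayed expression for D.)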

    That ends the part borrowed from the other proof; now for the parts I'm in the dark about (these are from the actual question).

    From the fact that f_1 is differentiable at (a,b), one can write
    f_1(a+\theta_1 h,b+h)=f_1(a,b)+f_{11}(a,b)\theta_1 h+f_{12}(a,b)h+\epsilon_1 |h|, where \epsilon_1\rightarrow 0 as h\rightarrow 0.
    Explain why this is so.

    Next, go on to explain how to obtain the following expression:
    D=h^2 f_{12}(a,b)+\epsilon |h|h where \epsilon\rightarrow 0 as h\rightarrow 0.

    Explain the derivation of the similar expression
    D=h^2f_{21}(a,b)+\epsilon '|h|h where \epsilon '\rightarrow 0 as h\rightarrow 0,
    using the fact that f_2 is differentiable at (a,b).

    With all this, complete the proof of the theorem.
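    (An aside from me, not part of the book's question: a quick numerical sanity check of what D=h^2 f_{12}(a,b)+\epsilon |h|h claims. The test function exp(xy) and the point (0.5,0.3) are arbitrary choices of mine; the ratio D/h^2 should settle toward the common value of the mixed partials as h shrinks.)

    import math

    def f(x, y):
        return math.exp(x * y)                 # arbitrary smooth test function

    def f_12(x, y):
        return (1 + x * y) * math.exp(x * y)   # exact mixed partial of exp(x*y)

    a, b = 0.5, 0.3
    for h in (1e-1, 1e-2, 1e-3, 1e-4):
        # D is the second-difference expression from the proof
        D = f(a + h, b + h) - f(a + h, b) - f(a, b + h) + f(a, b)
        print(f"h = {h:g}   D/h^2 = {D / h**2:.6f}   f_12(a,b) = {f_12(a, b):.6f}")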

    ----------

    This is a lot of information to work with, I know, but I can't summarize it any better because the question depends so heavily on context.

  2. #2 CaptainBlack (Grand Panjandrum)
    Quote Originally Posted by Runty:
    From the fact that f_1 is differentiable at (a,b), one can write
    f_1(a+\theta_1 h,b+h)=f_1(a,b)+f_{11}(a,b)\theta_1 h+f_{12}(a,b)h+\epsilon_1 |h|, where \epsilon_1\rightarrow 0 as h\rightarrow 0.
    Explain why this is so.
    Repeated application of the 1 variable Taylor theorem:

    If f(x) is differentiable at a then:

    f(x)=f(a)+(x-a)f'(a)+g(x-a)

    where \lim_{h\to 0} \frac{g(h)}{h}=0
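    (For a concrete instance: with f(x)=x^2 we have
    x^2=a^2+2a(x-a)+(x-a)^2,
    so here g(h)=h^2 and g(h)/h=h\to 0 as h\to 0.)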

    CB

  3. #3 Runty (Member)

    Quote Originally Posted by CaptainBlack:
    Repeated application of the 1 variable Taylor theorem:

    If f(x) is differentiable at a then:

    f(x)=f(a)+(x-a)f'(a)+g(x-a)

    where \lim_{h\to 0} \frac{g(h)}{h}=0

    CB
    Are you sure you mean Taylor's theorem? Checking Wikipedia, Taylor's theorem doesn't seem to relate to this.

    I'm thinking we're meant to use something more like this:
    u(h,k)=\frac{f(x+h,y+k)-f(x,y)-f_x(x,y)h-f_y(x,y)k}{\sqrt{h^2+k^2}}, where u(h,k)\sqrt{h^2+k^2} would play the role of \epsilon_1 |h|, and so forth for the other parts.
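    (If that definition is applied to f_1 rather than f, which I'm guessing is the idea, it rearranges to
    f_1(x+h,y+k)=f_1(x,y)+f_{11}(x,y)h+f_{12}(x,y)k+u(h,k)\sqrt{h^2+k^2}, where u(h,k)\rightarrow 0 as (h,k)\rightarrow (0,0),
    which is the increment form the question seems to call for.)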

  4. #4 CaptainBlack (Grand Panjandrum)

    Quote Originally Posted by Runty:
    Are you sure you mean Taylor's theorem? Checking Wikipedia, Taylor's theorem doesn't seem to relate to this.
    It is exactly Taylor's theorem for a once-differentiable function, which informally says that for small increments about a the function is approximately linear, and the error decreases "faster" than the increment as the increment goes to zero.

    But please ignore these posts if you feel you have a better idea.

    CB

  5. #5 Runty (Member)
    I think we're meant to use the formulations of differentiability to prove that first part. It looks to be in the same vein as this:

    f(a+h,b+k)-f(a,b)=f_1(a,b)h+f_2(a,b)k+\epsilon (|h|+|k|),
    where \epsilon\rightarrow 0 as (h,k)\rightarrow (0,0).
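    (For what it's worth, applying that same formulation to f_1 instead of f, with (h,k) replaced by (\theta_1 h,h), would give
    f_1(a+\theta_1 h,b+h)=f_1(a,b)+f_{11}(a,b)\theta_1 h+f_{12}(a,b)h+\epsilon (\theta_1 |h|+|h|),
    so one could take \epsilon_1 =(1+\theta_1 )\epsilon, which goes to 0 as h\rightarrow 0 since 0<\theta_1 <1. That's only a sketch of how the pieces might fit together, not necessarily the intended route.)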

    We do get to Taylor's theorem, but this question comes earlier in the textbook. As such, I think it's implied that we're not meant to use Taylor's theorem, even if it is a valid choice.

  6. #6 Runty (Member)
    Okay, I got through the first hurdle; I had to show that it followed from a theorem in a prior chapter (and I'm hoping that's a valid answer), but now comes a more difficult part.

    I'm not sure how I'm supposed to get from here
    f_1(a+\theta_1 h,b+h)=f_1(a,b)+f_{11}(a,b)\theta_1 h+f_{12}(a,b)h+\epsilon_1 |h|, \epsilon_1\rightarrow 0 as h\rightarrow 0
    to here
    D=h^2 f_{12}(a,b)+\epsilon |h|h, \epsilon\rightarrow 0 as h\rightarrow 0

    The big problem is that epsilon statement at the end of it. I don't know how it gets there.

    EDIT: I should mention this; this question is more along the lines of what is found in the following link. Click here

    Unfortunately, the proof of what's in that link is NOT the correct answer for what I'm supposed to answer. It's very close, but not quite there (check the wording CAREFULLY on the theorem).

    Whereas the theorem in the link assumes the second partial derivatives are defined and continuous at the point, the theorem I'm supposed to prove does not make that assumption. The theorem I'm to prove says that f_1 and f_2 are differentiable at (a,b), but does not say that f_{12} and f_{21} are defined and continuous there (even if their existence at (a,b) is implied).

  7. #7 Runty (Member)
    Sorry for triple-post, but I'm still stuck on the part I listed earlier. I dunno if this will help, but I'm out of ideas. (note that there's a lot to read)

    This is a theorem we're meant to use to try and answer this.

    Let the function f(x,y) be defined in some neighborhood of the point (a,b). Let the partial derivatives f_1,f_2,f_{12},f_{21} also be defined in this neighborhood, and suppose that f_{12} and f_{21} are continuous at (a,b). Then f_{12}(a,b)=f_{21}(a,b).

    Note the differences between this theorem and the theorem I'm meant to prove (in the first post). The theorem above has f_{12} and f_{21} defined and continuous at (a,b), but the theorem I'm meant to prove only says that f_1 and f_2 are differentiable at (a,b). This might sound redundant, but you can clearly see there's a difference.

    (much of the following is excerpted from my textbook; this proof DOES NOT answer the question)
    To prove the theorem above, I work entirely inside a square having its center at (a,b), and lying inside the neighborhood described earlier. Let h be a number different from zero such that (a+h,b+h) is inside the square. Then consider the expression
    D=f(a+h,b+h)-f(a+h,b)-f(a,b+h)+f(a,b)
    If we introduce the function
    \phi (x)=f(x,b+h)-f(x,b),
    we can express D in the form
    D=\phi (a+h)-\phi (a). (*)
    Now \phi has the derivative
    \phi '(x)=f_1(x,b+h)-f_1(x,b).
    Hence \phi is continuous, and we may apply the mean value theorem to (*), with the result
    D=h\phi '(a+\theta_1 h)=h(f_1(a+\theta_1 h,b+h)-f_1(a+\theta_1 h,b)), where 0<\theta_1<1. (**)
    Next, let
    g(y)=f_1(a+\theta_1h,y).
    The function g has the derivative
    g'(y)=f_{12}(a+\theta_1h,y).
    Now we can write (**) in the form
    D=h(g(b+h)-g(b))
    and apply the mean value theorem. The result is
    D=h^2 g'(b+\theta_2h)=h^2f_{12}(a+\theta_1h,b+\theta_2h), where 0<\theta_2<1.
    Alternatively, we could have started by expressing D in the form
    D=\psi (b+h)-\psi (b), where
    \psi (y)=f(a+h,y)-f(a,y).
    This procedure would have led to the following
    D=h^2f_{21}(a+\theta_4h,b+\theta_3h), where 0<\theta_3<1, 0<\theta_4<1.
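    (The omitted intermediate steps would presumably run parallel to the earlier ones: \psi '(y)=f_2(a+h,y)-f_2(a,y), so the mean value theorem gives
    D=h\psi '(b+\theta_3 h)=h(f_2(a+h,b+\theta_3 h)-f_2(a,b+\theta_3 h)), where 0<\theta_3<1;
    then, writing q(x)=f_2(x,b+\theta_3 h) by analogy with g (my notation), so that q'(x)=f_{21}(x,b+\theta_3 h), a second application of the mean value theorem gives
    D=h^2 q'(a+\theta_4 h)=h^2 f_{21}(a+\theta_4 h,b+\theta_3 h), where 0<\theta_4<1.)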
    On comparing the two expressions for D, we see that
    f_{12}(a+\theta_1h,b+\theta_2h)=f_{21}(a+\theta_4h,b+\theta_3h). (***)
    If we now make h\rightarrow 0, the points at which the derivatives in (***) are evaluated both approach (a,b). Hence, by the assumed continuity of f_{12} and f_{21}, we conclude that f_{12}(a,b)=f_{21}(a,b).
    (Remember, this is NOT the correct proof of the theorem in the starting post.)

    This is the theorem and proof that correspond most closely to what I'm meant to solve, but unfortunately using this proof as written would be an incorrect answer. Though the proofs start off the same way, they must deviate at (**).

    As to this question, this is the part I'm stuck at.
    We have
    f_1(a+\theta_1 h,b+h)=f_1(a,b)+f_{11}(a,b)\theta_1 h+f_{12}(a,b)h+\epsilon_1 |h|, where \epsilon_1\rightarrow 0 as h\rightarrow 0.
    I now have to explain how to obtain the following expression:
    D=h^2 f_{12} (a,b)+\epsilon |h|h, where \epsilon\rightarrow 0 as h\rightarrow 0.

    If I could just get that, I could do the same things for f_{21}, and hopefully complete the proof afterwards. As such, I'd greatly appreciate any help that could be provided.

  8. #8 Runty (Member)
    Again, misreading the question has cost me the chance to answer it correctly. I now have my prof's answer, and I doubt I'll get a good mark on this. Although in all honesty, this question was ****ing ridiculous.

    Here is my Prof.'s answer:

    Since f_x is differentiable, we have:
    (*) f_x(a+h,b+k)=f_x(a,b)+hf_{xx}(a,b)+kf_{xy}(a,b)+u(h,k)\sqrt{h^2+k^2}
    where u\rightarrow 0 as (h,k)\rightarrow (0,0).
    If we replace h by \theta_1 h and k by h, we get
    f_x(a+\theta_1 h,b+h)=f_x(a,b)+\theta_1 hf_{xx}(a,b)+hf_{xy}(a,b)+u(\theta_1 h,h)\sqrt{\theta_1^2 h^2+h^2}
    so \epsilon_1 |h|=\sqrt{\theta_1^2 h^2+h^2}u\Rightarrow \epsilon_1=\sqrt{\theta_1^2+1}u and u\rightarrow 0 as h\rightarrow 0 (as (\theta_1 h,h)\rightarrow (0,0)) so \epsilon_1\rightarrow 0.
    Alternatively, replace h by \theta_1 h and take k=0 in (*) to get
    f_x(a+\theta_1 h,b)=f_x(a,b)+\theta_1 hf_{xx}(a,b)+0\cdot f_{xy}(a,b)+u(\theta_1 h,0)\sqrt{\theta_1^2 h^2}
    so \epsilon_2=\theta_1 u\rightarrow 0 as h\rightarrow 0, as before.
    Substitute into
    D=h\phi '(a+\theta_1 h)=h(f_x(a+\theta_1 h,b+h)-f_x(a+\theta_1 h,b))
    to get
    D=h(hf_{xy}(a,b)+\epsilon |h|)=h^2 f_{xy} (a,b)+\epsilon |h|h, where \epsilon=\epsilon_1-\epsilon_2 and so \epsilon\rightarrow 0 as h\rightarrow 0.
    Everything is symmetric in x and y, so reversing their roles in the above gives
    D=h(hf_{yx}(a,b)+\epsilon '|h|)=h^2 f_{yx} (a,b)+\epsilon '|h|h, where \epsilon '\rightarrow 0 as h\rightarrow 0.
    Then f_{xy}(a,b)=f_{yx}(a,b)+(\epsilon '-\epsilon)|h|/h
    and |h|/h=\pm 1 while \epsilon '-\epsilon\rightarrow 0 as h\rightarrow 0; since the left-hand side does not depend on h, letting h\rightarrow 0 gives f_{xy}(a,b)=f_{yx}(a,b) as required.
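    (A quick numerical illustration of the \epsilon\rightarrow 0 claim, using an arbitrary smooth test function of my own choosing rather than anything from the course: the quantity (D-h^2 f_{xy}(a,b))/(h|h|) should shrink as h does.)

    import math

    def f(x, y):
        return math.exp(x * y)                 # arbitrary smooth test function

    def f_xy(x, y):
        return (1 + x * y) * math.exp(x * y)   # exact mixed partial of exp(x*y)

    a, b = 0.5, 0.3
    for h in (1e-1, 1e-2, 1e-3, 1e-4):
        # D is the same second-difference expression as in the proof
        D = f(a + h, b + h) - f(a + h, b) - f(a, b + h) + f(a, b)
        eps = (D - h**2 * f_xy(a, b)) / (h * abs(h))
        print(f"h = {h:g}   epsilon = {eps:.3e}")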

    This question seriously was a pain in the neck, and I'd rather my Prof. give us a little more to work with next time, or at least be HELPFUL when I ask it of him.
