Well, there are continuous functions that are nowhere differentiable.
Differentiability is a powerful concept. It is a smoothness notion.
For the mean value theorem we need a function that is both continuous and differentiable.
Just for fun, I decided I'd try to prove that if a function is continuous, then lim_{x→α} f(x) = f(α). I haven't cheated by looking for a proof on the web.
Anyway, my idea is to make use of the Mean Value Theorem, namely (f(x) - f(α))/(x - α) = f'(c) for some c ∈ (α, x). It therefore follows that f(x) - f(α) = f'(c)(x - α).
Now, in order to prove lim_{x→α} f(x) = f(α), we need to show that for every ε > 0 there is a δ > 0 such that |x - α| < δ implies |f(x) - f(α)| < ε.
Now by the Mean Value Theorem we have |f(x) - f(α)| = |f'(c)||x - α| < |f'(c)|δ.
So we can let δ = ε/|f'(c)|.
Proof: Let ε > 0 and δ = ε/|f'(c)|, where c ∈ (α, x). Then |f(x) - f(α)| = |f'(c)||x - α| < |f'(c)| · ε/|f'(c)| = ε.
Q. E. D.
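As a numeric sanity check (not a proof), here is a sketch of the bound in Python. Since c depends on x, I replace |f'(c)| with a fixed bound M on |f'| near α; the function f(x) = x², the point α = 1, and the bound M = 4 are all made-up example values.

```python
# Numeric sanity check of the MVT-style bound |f(x) - f(a)| <= M * |x - a|,
# where M bounds |f'| near a.  Hypothetical example: f(x) = x**2 near a = 1.
def f(x):
    return x * x

a = 1.0
M = 4.0          # |f'(x)| = |2x| <= 4 on [-1, 3], a crude bound near a = 1
eps = 1e-3
delta = eps / M  # the choice delta = eps / M from the MVT-style argument

# sample a few points within delta of a and check |f(x) - f(a)| < eps
ok = all(abs(f(a + t) - f(a)) < eps
         for t in [d * delta for d in (-0.99, -0.5, 0.5, 0.99)])
print(ok)  # True
```

Using a uniform bound M instead of |f'(c)| matters: c varies with x, so a δ defined via f'(c) would itself depend on x.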
My only worry is the application of the Mean Value Theorem assumes evaluating derivatives, which assumes evaluating limits. Would we have already needed to prove the result that if a function is continuous at a point then the limit of that function is equal to the function value at that point in order to make use of the Mean Value Theorem?
the mean value theorem assumes a STRONGER condition on f than just continuity, namely that of differentiability. you also need continuity on the "larger" interval [x,α], or the mean value theorem may be FALSE, for example:
let f(x) = x on [0,1)
f(1) = -1.
then (f(1) - f(0))/(1-0) = -1, but even though f(x) is differentiable on (0,1), there is no c in (0,1) where f'(c) = -1.
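a quick numeric sketch of this counterexample (the sample points 0.25, 0.5, 0.75 and the step h are arbitrary):

```python
# the counterexample: f(x) = x on [0, 1), but f(1) = -1.
def f(x):
    return x if x < 1 else -1.0

secant = (f(1) - f(0)) / (1 - 0)   # the secant slope is -1

# f is differentiable on (0, 1) with f'(c) = 1 there, so no c gives f'(c) = -1
h = 1e-6
derivs = [(f(c + h) - f(c)) / h for c in (0.25, 0.5, 0.75)]

print(secant)                                    # -1.0
print(all(abs(d - 1.0) < 1e-4 for d in derivs))  # True
```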
in fact, one can prove that differentiability implies continuity, which puts a high degree of circularity in your proof.
my question is: what do you mean by "continuous" if not: lim_{x→a} f(x) = f(a)?
And yes I know that the Mean Value Theorem involves derivatives and it may seem circular to use it in a proof if done before derivatives. It was the only way I could think of :P
You also forgot that to be continuous, not only does lim_{x→a} f(x) need to exist, but the function must also be DEFINED at that point, and that limit must happen to be f(a).
the way i like to think of it is:
"f is continuous at a if: x near a, implies f(x) near f(a)".
the whole machinery of epsilon-delta is, in its essence, just a fancy way of specifying what we mean by "near". the general abstract characterization of continuity is:
"f is continuous from X to Y if and only if for every open set U of Y, the pre-image f⁻¹(U) is open in X"
this lets us talk about neighborhoods, instead of "measuring distances", which is a more flexible formulation. the trouble then becomes: "what is an "open" set?"
in the real numbers, an INTERVAL (a,b) is a very special kind of thing. topologically, there is no difference between such an interval, and all of the real numbers (the real line is just "an infinitely stretched open interval"). so one of the reasons why "continuity" is so meaningful on the real numbers, is that the real numbers are a continuum.
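one concrete way to see this (a sketch, not part of the original argument): the map h(x) = tan(π(x - 1/2)) is a continuous, strictly increasing bijection from (0,1) onto all of the real line. the sample points below are arbitrary:

```python
import math

# h(x) = tan(pi * (x - 1/2)) stretches the open interval (0, 1) over all of R:
# it is continuous, strictly increasing, and unbounded near both endpoints.
def h(x):
    return math.tan(math.pi * (x - 0.5))

xs = [0.001, 0.25, 0.5, 0.75, 0.999]
ys = [h(x) for x in xs]

print(all(a < b for a, b in zip(ys, ys[1:])))  # strictly increasing: True
print(h(0.001) < -100 and h(0.999) > 100)      # blows up near the endpoints: True
```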
so, in a sense, the most logical way to build up the notion of continuity for functions is to start with sequences (just as we build irrationals from sequences of rationals to make "continual numbers" (real numbers), we build "continuous functions" from sequences):
f is continuous on X to Y if:
whenever (x_n) converges to x, (f(x_n)) converges to f(x).
(NOTE: this only holds in some spaces...but metric spaces (such as euclidean n-space, with the usual metric induced by the "dot product") are among them).
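a minimal numeric sketch of the sequential definition, using the made-up example f = exp and x_n = 1 + 1/n → 1:

```python
import math

# sequential continuity sketch: x_n -> x should force f(x_n) -> f(x).
# hypothetical example: f = exp, x_n = 1 + 1/n converging to x = 1.
f = math.exp
x = 1.0
gaps = [abs(f(x + 1.0 / n) - f(x)) for n in (10, 100, 1000, 10000)]

print(all(a > b for a, b in zip(gaps, gaps[1:])))  # gaps shrink: True
print(gaps[-1] < 1e-3)                             # True
```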
of course, here we are back to another rub: what is "convergence"?
my point is: sooner or later, one way or another...you're in for some "hard stuff". why? because the real numbers are *complicated*. and that's the part that is usually "glossed over" in a first run at calculus: just what ARE these real number things. take for example, the statement:
pi is a real number. what does that mean? often, some "hand-waving" about how the real numbers form a LINE, is presented. and the complexity of what it means "to be a point on a line" is neatly side-stepped.
"infinite decimals" are a somewhat more satisfactory attempt at explanation. but "infinity" is fraught with peril... not all "infinities" are the "same", and explaining WHY this is so, is perhaps even more difficult than "epsilon-delta". the trick here is to characterize "infinity" without even mentioning it (which is what sequences actually DO for us).
however, the traditional approach is to use a "soft" definition of limit, and then "tighten it up" later. i feel this is bad mathematics. it is far better, in my opinion, to teach people about what something like:
|x-a| < δ means.
in this case, we are using a small number (δ) to define "nearness" of x to a.
one might ask: why do we focus on "epsilon" first? well, functions need not be 1-1 (injective). so x-values "far apart" might yield f(x)-values "close together". constant functions represent a sort of "worst-case" scenario: they are obviously continuous, but no choice of delta tells us anything. so we look at a "target range" for f(x) first, to establish a proper "domain range" for x.
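here is a rough sketch of the "epsilon first" idea in code. the helper find_delta is hypothetical and only samples finitely many points, so it illustrates the search for δ; it does not prove continuity:

```python
# "epsilon first": given a target eps for |f(x) - f(a)|, search for a delta
# that works by halving it until all sampled points within delta of a land
# inside the eps-band around f(a).
def find_delta(f, a, eps, start=1.0, samples=1000):
    delta = start
    while delta > 1e-12:
        pts = [a + delta * (2 * k / samples - 1) for k in range(samples + 1)]
        if all(abs(f(x) - f(a)) < eps for x in pts if x != a):
            return delta
        delta /= 2
    return None

# f(x) = x**2 at a = 1: some positive delta is eventually found
print(find_delta(lambda x: x * x, a=1.0, eps=0.1) > 0)    # True
# a constant function: the very first delta already works, no matter how small eps is
print(find_delta(lambda x: 7.0, a=0.0, eps=1e-9) == 1.0)  # True
```

the constant case shows why delta alone "tells us nothing": every delta succeeds, so only the eps-band carries information.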
the strange thing about this is that vertical lines aren't "continuous" (because they're not FUNCTIONS). there are ways around this, but that's way beyond first-year calculus.
What has been posted has been very informative and thought provoking. While I don't believe there is anything wrong with my algebra, the fault is that I have used the Mean Value Theorem without mentioning differentiability.
What I should have said is "Assume that a function f is differentiable at all points in [α, x]; then by the Mean Value Theorem (proof omitted) this result holds: f(x) - f(α) = f'(c)(x - α), where c ∈ (α, x)."
I think what I actually did was to show that differentiability implies continuity... However, I have said nothing about what makes a function differentiable (yet).
Is there a way to define differentiability without relying on continuity arguments?
The main point is not to complicate things. The relationship between limits and continuity is trivial and holds by definition. Differentiability is a completely different story.
suppose we know that: lim_{x→a} (f(x) - f(a))/(x - a) = L
note i have said NOTHING about whether or not f is continuous at a (although the assumption is tacit that f(a) exists).
this is the same as saying: for every ε > 0 there is δ > 0 such that: 0 < |x - a| < δ implies |(f(x) - f(a))/(x - a) - L| < ε.
note that this, in turn, means (for such x within δ of a): |f(x) - f(a)| < (|L| + ε)|x - a|
(consider L ≥ 0, and L < 0 separately).
now consider δ' = min(δ, ε/(|L| + ε)). we have, for |x - a| < δ': |f(x) - f(a)| < (|L| + ε)|x - a| < (|L| + ε)·ε/(|L| + ε) = ε
which shows lim_{x→a} f(x) = f(a).
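a numeric spot-check of that bound, under the made-up choices f = sin, a = 0 (so L = cos 0 = 1), ε = 0.01, δ = 0.1:

```python
import math

# spot-check of the bound from the proof: near a, |f(x) - f(a)| <= (|L| + eps)|x - a|,
# with L = f'(a).  hypothetical example: f = sin, a = 0, L = 1.
f, a, L = math.sin, 0.0, 1.0
eps = 0.01
delta = 0.1  # assume: on 0 < |x - a| < 0.1 the difference quotient is within eps of L

checks = []
for t in (-0.09, -0.01, 0.01, 0.09):
    x = a + t
    checks.append(abs(f(x) - f(a)) <= (abs(L) + eps) * abs(x - a))
print(all(checks))  # True
```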
assuming one has already proved that the limit of a product is the product of the limits and that the limit of a sum is the sum of the limits, there is an even easier proof: lim_{x→a} f(x) = lim_{x→a} [f(a) + (x - a)·(f(x) - f(a))/(x - a)] = f(a) + 0·L = f(a).