
- Jun 20th 2014, 07:11 AM #1
## What is this problem about?

- Jun 20th 2014, 07:44 AM #2

- Joined
- Feb 2014
- From
- United States
- Posts
- 1,571
- Thanks
- 733

- Jun 20th 2014, 07:46 AM #3

- Joined
- Mar 2011
- From
- Tejas
- Posts
- 3,546
- Thanks
- 842

- Jun 20th 2014, 07:56 AM #4

## Re: What is this problem about?

This problem is about the basic idea behind differential calculus.

I shall not try to explain standard analysis, but $\dfrac{f(x + h) - f(x)}{h}$ is called a difference quotient; for a fixed $f$, it is a function of both $x$ and $h$.

Let's (temporarily) call $\dfrac{f(x + h) - f(x)}{h} = g(x,\ h).$ In terms of its strict definition, can I EVER evaluate $g(x,\ h)$ if $h = 0$?

Let's take two examples.

Example 1. $f(x) = 3x - 5.$

$\therefore f(x + h) = 3(x + h) - 5 = 3x + 3h - 5.$

$\therefore f(x + h) - f(x) = 3x + 3h - 5 - (3x - 5) = 3x + 3h - 5 - 3x + 5 = 3h.$

$\therefore g(x,\ h) = \dfrac{f(x + h) - f(x)}{h} = \dfrac{3h}{h} = 3.$ Does it look as though I can plug h = 0 into the simplified version of g(x, h)?
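(A quick numeric sanity check, not from the original post; `f` and `g` below are just the names used above written out in Python.)

```python
# Example 1 in code: for f(x) = 3x - 5 the difference quotient
# g(x, h) = (f(x + h) - f(x)) / h simplifies to 3 for every h != 0.
def f(x):
    return 3 * x - 5

def g(x, h):
    return (f(x + h) - f(x)) / h

for h in (1.0, 0.1, 0.001):
    print(g(2.0, h))      # ~3.0 each time, up to floating-point rounding

# g(2.0, 0) would raise ZeroDivisionError: the unsimplified quotient
# really is undefined at h = 0, even though the simplified form (3) is not.
```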

Example 2 (a bit harder): $f(x) = \dfrac{1}{x}.$

$\therefore f(x + h) = \dfrac{1}{x + h}.$

$\therefore f(x + h) - f(x) = \dfrac{1}{x + h} - \dfrac{1}{x} = \dfrac{x}{x(x + h)} - \dfrac{x + h}{x(x + h)} = \dfrac{x - (x + h)}{x(x + h)} = \dfrac{x - x - h}{x^2 + hx} = -\ \dfrac{h}{x^2 + hx}.$

$\therefore g(x,\ h) = \dfrac{f(x + h) - f(x)}{h} = \dfrac{-\ \dfrac{h}{x^2 + hx}}{h} = \left(-\ \dfrac{h}{x^2 + hx}\right) \cdot \dfrac{1}{h} = -\ \dfrac{1}{x^2 + hx}.$

Does it look as though I can plug h = 0 into the simplified version of g(x, h)?
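(Again a numeric sketch, not part of the post; it compares the raw and simplified forms of $g$ from Example 2 in Python.)

```python
# Example 2 in code: for f(x) = 1/x the raw difference quotient blows up
# at h = 0, but the simplified form -1/(x^2 + hx) does not.
def g_raw(x, h):
    return (1.0 / (x + h) - 1.0 / x) / h    # undefined at h = 0

def g_simplified(x, h):
    return -1.0 / (x**2 + h * x)            # fine at h = 0

x = 2.0
for h in (0.1, 0.001):
    print(g_raw(x, h), g_simplified(x, h))  # the two forms agree for h != 0

print(g_simplified(x, 0.0))                 # -0.25, i.e. -1/x^2 at x = 2
```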

- Jun 20th 2014, 08:40 AM #5
## Re: What is this problem about?

**JeffM**, hey, yes, I think I've come across something similar while skimming through the idea of a limit, where sometimes the expression has to be rewritten so that the limit is not undefined. Although the idea that two equally legitimate expressions can give different answers is a bit mind-boggling.

But yes, I think this should have something to do with the present problem in one way or another.

Here's that limit thing:

Thanks a lot, though. I'm guessing I won't need it for now and will only understand its use once I get to calculus?

**Deveno**, but I want to understand what it's all about, not just solve the expressions by working from a reference; that wouldn't add to my understanding of the subject.

- Jun 20th 2014, 09:40 AM #6

## Re: What is this problem about?

Well, I was trying to avoid the subtleties of analysis because, frankly, I am very dubious that starting with something that perplexed the world's best mathematicians for about 150 years is really the best way to teach calculus to beginners. (My opinion is, apparently, pedagogical heresy and shared by virtually no one except first semester calculus students.)

Let's take your example above. And here is my very informal and very non-rigorous explanation.

$x \ne 0 \implies f(x) = \dfrac{x^3 + 2x^2}{x^2}.$ I have to define that function with the limitation that x is not zero.

$h(x) = x + 2.$

Now I can't strictly say $f(x) = h(x)$ because f(x) is not defined at x = 0. I can only say (being very strict) $x \ne 0 \implies f(x) = h(x).$

So f(x) and h(x) are not logically equivalent strictly speaking, and so it is perfectly understandable that they give different results at x = 0.

What I can also say is that $f(x) \approx 2\ \text{if}\ x \approx 0\ \text{but}\ x \ne 0.$

That is, for most practical purposes, I can say f(x) = h(x), but it is never correct in the strictest sense because I defined f(x) and h(x) differently. In the 17th and 18th century, the mathematicians in essence accepted calculus because it gives the right answers to problems in physics to the limits of our ability to measure. So they ignored the logical difficulty.
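(A short Python sketch of the point above, not from the original post; `f` and `h` are the two functions just defined.)

```python
# f and h agree at every x except 0, where f is simply undefined.
def f(x):
    return (x**3 + 2 * x**2) / x**2

def h(x):
    return x + 2

for x in (0.1, 0.01, 0.001):
    print(f(x), h(x))   # the pairs agree (to rounding) and approach 2

print(h(0.0))           # 2.0
# f(0.0) raises ZeroDivisionError: 0/0 is not a number.
```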

Analysis is the 19th and 20th centuries' justification of the 17th and 18th centuries' practice of calculus. And the 19th century's first contribution to that justification was the idea of limit.

The basic idea in differential calculus is the derivative. Here is its standard modern definition.

$\displaystyle \lim_{h \rightarrow 0}\dfrac{f(x + h) - f(x)}{h} = f'(x).$ The derivative is itself a function related to the parent function.

Why do we care what the derivative of a function is? If it exists, meaning the function is differentiable, the derivative gives the correct formula for the slope of that function. In my first example in my previous post, notice that we got the correct slope of what is a linear function without any limit being involved.

Why do we care about slopes? Local minima and local maxima of differentiable functions have slopes equal to zero. The main practical application of differential calculus is to find out at what values of x a function reaches its maximum or minimum values, which in turn lets us calculate what those maxima and minima are.
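(An illustrative sketch, not from the post: the function $f(x) = x^2 - 4x$ and its numerically estimated derivative are my own example.)

```python
# A function has slope zero at a local minimum.  For f(x) = x^2 - 4x the
# derivative (from the limit definition) is 2x - 4, which vanishes at x = 2.
def f(x):
    return x**2 - 4 * x

def fprime(x, h=1e-6):
    # numeric stand-in for the h -> 0 limit of the difference quotient
    return (f(x + h) - f(x)) / h

print(fprime(2.0))                 # very close to 0: a flat spot
print(f(2.0))                      # -4.0, the minimum value of f
print(min(f(k / 100) for k in range(-500, 501)))  # also -4.0
```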

- Jun 20th 2014, 09:41 AM #7

## Re: What is this problem about?

Ok, let's say we have this function, $f$. As $x$ changes, so does $f(x)$. If we want to know the AVERAGE rate of change between $x$ and "a little more (or less)" we add $h$ to $x$, that gives us two points on the graph of $f$:

$(x,f(x))$ and $(x+h,f(x+h))$, and the average rate of change is the slope of the line between those two points, which is:

$\dfrac{f(x+h) - f(x)}{(x+h) - x} = \dfrac{f(x+h) - f(x)}{h}$.

This gives us the best "two-point linear fit" (approximation by a straight line) over the interval $(x,x+h)$ (or $(x+h,x)$ if $h$ is negative, which it could be).

However, we might want to know the "instantaneous rate of change" and now we have a problem: if $h = 0$, we get:

$\dfrac{f(x+0) - f(x)}{0} = \dfrac{0}{0}$, and we have no idea what that even means.

So the basic idea is: we let $h$ get "really, really small", and see if the difference quotient (the slope of the line mentioned earlier) tends to "settle down" to a fixed number, called a LIMIT.
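(A numeric illustration of the "settling down," not from the post; $f(x) = x^2$ at $x = 3$ is my own example.)

```python
# Watch the difference quotient "settle down" as h shrinks.
# For f(x) = x^2 at x = 3 the quotient works out to exactly 6 + h,
# so the slopes home in on the limit 6.
def f(x):
    return x**2

x = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, (f(x + h) - f(x)) / h)
```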

Now, this might seem like cheating, but consider something like:

$\dfrac{x}{x}$.

It's pretty clear that everywhere BUT $x = 0$, this ratio is equal to 1. So if we leave a hole at $0$, and define:

$f(x) = \dfrac{x}{x}, x \neq 0$

$f(0) = 1$

hey! Problem solved!
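(The same patch, written as Python rather than as cases; not from the original post.)

```python
# Plug the hole at 0 by hand: x/x wherever that makes sense, 1 at x = 0.
def f(x):
    return x / x if x != 0 else 1.0

print(f(5.0), f(-0.001), f(0))   # 1.0 everywhere: the hole is filled
```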

Now the exact definition of a limit is kind of ugly, with deltas and epsilons, and lots of quantifiers and such, but the basic idea is this:

We say $\displaystyle \lim_{x \to a} f(x) = L$ if when $x$ is "near" $a$ (but not equal to it, because maybe we can't go there!), $f(x)$ is "near" $L$. The math-ese in the formal definition is to make precise: "what qualifies as 'near'?".

In particular, we say that if:

$\dfrac{f(x+h) - f(x)}{h}$ tends to SOME specific number as $h$ approaches 0, then that number is the DERIVATIVE of $f$ at $x$, written $f'(x)$. We also call it the SLOPE of $f$ at $x$, because it represents the slope of a tangent (from the Latin for "touching") line to $f$ at the point $x$.

Sometimes $h$ is written as $\Delta x$ (change in $x$), and the quantity $f(x+h) - f(x)$ is written as $\Delta y$ (change in $y$), and one may see the notation:

$\displaystyle \dfrac{dy}{dx} = \lim_{\Delta x \to 0} \dfrac{\Delta y}{\Delta x}$

****************************

tl;dr version: when you study derivatives, you have to find limits of expressions of the form:

$\dfrac{f(x+h) - f(x)}{h}$

and these limits are easier to find, AFTER you've simplified this (ESPECIALLY if the "$h$'s cancel"). So you're being given the chance to practice this ahead of time.

- Jun 20th 2014, 12:32 PM #8
## Re: What is this problem about?

> My opinion is, apparently, pedagogical heresy and shared by virtually no one except first semester calculus students.

But yes, great, great intro to calculus, guys. Thank you. Very tangible, and I think I've been able to form a basic understanding (although that is sometimes a deceptive thought, but we'll see).

So, in other words, we take an infinitesimally short straight line which is tangent to the graph of some non-linear function and measure its slope; we can then use it for problem solving. And although the slope isn't exactly defined, for practical purposes it doesn't really matter, just as it doesn't matter that we can't use the exact value of pi. Am I correct?

- Jun 20th 2014, 01:05 PM #9

## Re: What is this problem about?

That's the intuition of it.

I think, however, that most mathematicians (especially Platonists) would be very upset with the intuition's implication that calculus is merely close to correct. What persuades me that they are right is this: if, no matter how well I measure, I can't find the error, what justification do I have for even saying there is an error? Calculus works. How you justify that logically is where different people take different approaches.

- Jun 20th 2014, 01:14 PM #10

- Jun 20th 2014, 08:17 PM #11

## Re: What is this problem about?

Here is how I think of it: numbers were made to measure things. There is a problem of "accuracy of measurement" which is not "perfectly solvable" (at some perhaps microscopic level, there is possibility of error).

Limits, and the number systems they reside in, represent a kind of "idealization" of what perfect measurements would be IF we could make them.

It turns out that the uncertainty can usually be (for simple enough relationships) "managed": made so small we are "close enough". In this sense, the real numbers replace the "exactness" and precision of the fractions (essentially based on comparison of COUNTED quantities) with a kind of "liquidity" or "fuzziness" of an approximation. For example, we don't know "exactly" what $\pi$ is, but we know it's less than 1/1000 away from 3.141; that is, it lies between:

3141/1000 and 3142/1000

(so if our "ruler" had 1/1000-th inch markings, a reading of 3141 would be "as close as we could get" to measuring $\pi$).
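(A one-line verification of the ruler claim, not from the post.)

```python
import math

# pi really does sit between the two adjacent 1/1000 "ruler markings"
assert 3141 / 1000 < math.pi < 3142 / 1000
print(math.pi)   # 3.141592653589793
```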

The underlying idea here is that functions are "idealized relationships" between quantities, relationships we suppose are actually TRUE (this is our "working theory"); if our measurements of data support this, we feel justified in using our theory as a predictive model.

Now, mathematics itself requires no "real-world" justification; it only has to be LOGICALLY consistent. Thus we are free to invent whatever "number systems" we like to try to aid our understanding of this curious world we live in. Remarkably, this often helps.

- Jul 28th 2014, 09:25 PM #12
## Re: What is this problem about?

Okay, I had to think one more time about what this all means.

So, we are allowed to do all these operations because we have found this bypass for not being able to divide by zero? That is, it even looks from the graph that we still approach pretty much the same number (the value of the slope) even if we have 0 in the denominator, right? So it's almost as if this do-not-divide-by-zero rule is wrong in this case, and we have to introduce an exception to the rule. If not, why are we allowed to use this fix?

- Jul 29th 2014, 11:07 AM #13

## Re: What is this problem about?

You are taking an intuitive approach. Personally, I would prefer to say that the limit approach gives a logically consistent justification for the informal intuition that 0 / 0 is not always undefined. A major value of the limit approach is that it gives a method for determining when that intuition is correct.

I am not a mathematician so others may have wildly different views. I suggest you reread deveno's posts.

- Jul 29th 2014, 08:30 PM #14
## Re: What is this problem about?

> Now, this might seem like cheating, but consider something like:
>
> $\dfrac{x}{x}$.
>
> It's pretty clear that everywhere BUT $x = 0$, this ratio is equal to 1. So if we leave a hole at $0$, and define:
>
> $f(x) = \dfrac{x}{x}, x \neq 0$
>
> $f(0) = 1$
>
> hey! Problem solved!

Or maybe not an exception, just this new concept of limit, as you said, by which we see that if we were able to divide by zero we would expect to see the same answer here, as when we divide by something merely close to zero.

I think I understood your point.

- Jul 30th 2014, 05:39 AM #15

## Re: What is this problem about?

Max

You are thinking sensibly, but you are missing something. Yes, in many of the cases you are seeing, you could eliminate the need for limits simply by redefining the function at a particular point or points so that 0 / 0 does not occur. But in calculus the computation behind the derivative is 0 / 0 everywhere, so redefinition at a finite number of points won't work. If limits are taught before calculus, they seem unnecessary. That is not how it proceeded historically: calculus came first, and limits came in to provide a more secure logical basis for calculus.

Morris Kline used to argue that math should be taught in the same order as it developed historically, because the historical development indicates the most intuitive approach. That argument has always seemed very strong to me.