# solving for delta, formal limit definition

• Jan 28th 2009, 10:34 AM
edudlive
solving for delta, formal limit definition
We started on this in class today, and my professor gave little more than a quick glance at how to solve these problems and at the questions that might come up in the homework.

When graphed and explained, the proof made perfect sense. It is when I try to complete the homework that I run into problems. The instructions for the problem read:

Quote:

Numerically and graphically determine a delta corresponding to (a) epsilon = 0.1 and (b) epsilon = 0.05. Graph the function in the epsilon - delta window [xrange is (a-delta, a+delta) and yrange is (L-epsilon, L+epsilon)] to verify that your choice works.
1) lim of (x^2 + 1) as x approaches 0 (the value of L is given in these problems, and is of course 1 in this case)

The only way in which the professor showed us how to solve the problems is such:

the proof states that |f(x) - L| < epsilon

so my problem is such:

|[x^2 + 1] - 1| < 0.1

We break the absolute value down into:

-0.1 < (x^2 + 1) - 1 < 0.1

which ends up looking like this:

-0.1 < x^2 < 0.1

From there he jumped straight to solving x^2 = 0.1 to get x = (0.1)^(1/2), which is the correct answer. For all problems in which one side of the inequality lies outside the range of the function (e.g. -1.1 as a value for cos x, or -0.1 as a value for x^2), this gives me the correct answer...however, it is obvious to me that I'm missing a piece of the puzzle.
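A candidate delta like this can also be sanity-checked numerically. Here is a minimal Python sketch (my own illustration, not from the book; `delta_works` is just a made-up helper name) that samples points strictly inside the candidate interval and confirms |f(x) - L| stays below epsilon:

```python
import math

def delta_works(f, a, L, eps, delta, samples=10001):
    """Check |f(x) - L| < eps for x sampled strictly inside (a - delta, a + delta),
    skipping x = a itself (the limit definition never looks at x = a)."""
    for i in range(samples):
        x = a - delta + 2 * delta * (i + 0.5) / samples
        if x != a and abs(f(x) - L) >= eps:
            return False
    return True

# Problem 1: lim of x^2 + 1 as x -> 0, with L = 1 and eps = 0.1.
# The algebra above gives the candidate delta = sqrt(0.1).
print(delta_works(lambda x: x ** 2 + 1, a=0.0, L=1.0, eps=0.1, delta=math.sqrt(0.1)))  # True
print(delta_works(lambda x: x ** 2 + 1, a=0.0, L=1.0, eps=0.1, delta=0.5))             # False: too wide
```

This only samples finitely many points, so it is a check rather than a proof, but it catches a delta that is too generous.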

The second problem on the homework is this:

lim of (x^2 + 1) as x approaches 2, L=5 epsilon=0.1

I start with |(x^2 + 1) - 5| < 0.1, which breaks down into

3.9 < x^2 < 4.1. Now where do I go from here to find a correct delta?

I've tried reading the textbook (which is useless), my companion book ("The Calculus Lifesaver"), and the internet, and none of these resources explains how to solve the problem...at least not in a fashion I can grasp.
• Jan 28th 2009, 10:59 AM
Jester
Quote:

Originally Posted by edudlive
...
3.9 < x^2 < 4.1, now where do I go from here to find a correct delta?

Take the square root across the inequality (x is near 2, so x > 0 and the order is preserved):

$1.9748 < x < 2.0248$

so

$-0.0252 < x - 2 < 0.0248$, and we need $| x - 2 | < \delta$,

so $\delta = \min\{0.0252,\, 0.0248\} = 0.0248$
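The same computation can be mirrored in a few lines of Python (an illustrative sketch, not part of the original reply):

```python
import math

# Problem 2: f(x) = x^2 + 1, a = 2, L = 5, eps = 0.1.
# From 3.9 < x^2 < 4.1 (x near 2, so x > 0) we get sqrt(3.9) < x < sqrt(4.1);
# delta is the smaller of the two distances from a = 2 to those endpoints.
a, L, eps = 2.0, 5.0, 0.1
lo, hi = math.sqrt(3.9), math.sqrt(4.1)
delta = min(a - lo, hi - a)
print(round(delta, 4))  # 0.0248, the safe (smaller) choice

# Spot-check: points within delta of 2 keep f(x) within eps of 5.
f = lambda x: x ** 2 + 1
ok = all(abs(f(a + t * 0.999 * delta) - L) < eps for t in (-1.0, -0.5, 0.5, 1.0))
print(ok)  # True
```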
• Jan 28th 2009, 11:26 AM
edudlive
Why does the 2 come back into the equation? Is it because the inequality originally gives the points around the value x is approaching (2 in this case), and we need the distance from a rather than the actual x-values? So I end up with a range of correct deltas (0.0248 to 0.0252) in this case?

I'll post another problem and maybe I'm doing it correctly.

limit of (x+3)^(1/2) as x approaches 1, epsilon = 0.1

|(x+3)^(1/2) - 2| < 0.1

-0.1 < (x+3)^(1/2) - 2 < 0.1

1.9 < (x+3)^(1/2) < 2.1

(1.9)^2 < (x+3) < (2.1)^2

(1.9)^2 - 3 < x < (2.1)^2 - 3

0.61 < x < 1.41

-0.39 < x - 1 < 0.41

|0.61 - 1| = 0.39 and |1.41-1| = 0.41

The book only gives 0.39 as the answer, but if I understand correctly both are correct and the book just chooses the smaller delta of the two?
• Jan 28th 2009, 11:39 AM
Jester
Quote:

Originally Posted by edudlive
...
|0.61 - 1| = 0.39 and |1.41 - 1| = 0.41

The book only gives 0.39 as the answer, but if I understand correctly both are correct and the book just chooses the smaller delta of the two?

Actually, they don't both work; only the smaller of the two does. Let's consider both cases.

Case 1: $\delta = 0.39$

Here the interval is $[0.61, 1.39]$. Substituting the endpoints into the function gives

$\sqrt{0.61 + 3} = 1.9000,\;\;\;\sqrt{1.39 + 3} = 2.0952,$

and both values lie within $\epsilon = 0.1$ of $L = 2$.

Case 2: $\delta = 0.41$

Here the interval is $[0.59, 1.41]$. Substituting the endpoints into the function gives

$\sqrt{0.59 + 3} = 1.8947,\;\;\;\sqrt{1.41 + 3} = 2.1,$

and these are not both within $\epsilon = 0.1$ of $L = 2$: the first value, 1.8947, falls more than 0.1 below 2.

See the difference?
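The two cases can be replayed numerically as well (an illustrative Python sketch; `delta_works` is a hypothetical helper name, not from the thread):

```python
import math

def delta_works(f, a, L, eps, delta, samples=2001):
    """Check |f(x) - L| < eps for x sampled strictly inside (a - delta, a + delta),
    excluding x = a itself."""
    xs = [a - delta + 2 * delta * (i + 0.5) / samples for i in range(samples)]
    return all(abs(f(x) - L) < eps for x in xs if x != a)

f = lambda x: math.sqrt(x + 3)
print(delta_works(f, 1.0, 2.0, 0.1, 0.39))  # True: the smaller delta works
print(delta_works(f, 1.0, 2.0, 0.1, 0.41))  # False: x near 0.59 gives values below 1.9
```

Only the strict interior of the interval is sampled, which is why delta = 0.39 passes even though sqrt(0.61 + 3) equals 1.9 exactly at the excluded endpoint.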
• Jan 28th 2009, 11:45 AM
edudlive
I do, makes much more sense now. Thank you!
• Jan 28th 2009, 12:28 PM
edudlive
One final question. In the event that I have an inequality such as
0.9 < cos x < 1.1

or

-0.1 < x^2 < 0.1

Is there any special way to show why I disregard part of the inequality (the square root of a negative number, or a value outside the range of cosine), or can it generally just be assumed that the reader will understand why? This seems like an instructor-specific question, but I prefer to know all of the steps so that I can show my work whenever possible...and because I'll be the one explaining the homework to the class before the professor shows up in the morning (Happy)
• Jan 28th 2009, 01:28 PM
Jester
Quote:

Originally Posted by edudlive
...
Is there any special way to show why I disregard part of the inequality (the square root of a negative number, or a value outside the range of cosine), or can it generally just be assumed the reader will understand why?

You might have trouble with those endpoints because there are no x values for which

$\cos x = 1.1$ or $x^2 = -0.1$.

Those sides of the inequalities hold automatically ($\cos x \le 1 < 1.1$ and $x^2 \ge 0 > -0.1$ for every real $x$), so they place no restriction on x; solve the attainable side for your delta.
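A quick numeric illustration (my own Python sketch, not from the thread) of why the unattainable side never binds:

```python
import math

# One side of each inequality is vacuous: x^2 >= 0 > -0.1 for every real x,
# and cos(x) <= 1 < 1.1 for every real x.  So only x^2 < 0.1 (respectively
# cos(x) > 0.9) actually restricts x, and that is the side to solve for delta.
left_side_vacuous = all((k / 100.0) ** 2 > -0.1 for k in range(-500, 501))
print(left_side_vacuous)  # True

right_side_vacuous = all(math.cos(k / 100.0) < 1.1 for k in range(-500, 501))
print(right_side_vacuous)  # True
```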