# Thread: And still yet another epsilon-delta proof?

1. ## And still yet another epsilon-delta proof?

I know I've posted quite a few questions about epsilon-delta proofs in the past few days, but I really do appreciate all the help Math Help Forum has given me so far; I think I'm getting better at it. Now, the following problem I believe I've come close to solving. But, as always, I really need the forum to help verify the validity of the proof. Here's the problem exactly:

Prove that if:

$\lim_{x \to a} f(x) = L$

and

$\lim_{x \to a} g(x) = M$

then

$\lim_{x \to a} \max(\; f(x), \; g(x) \;) = \max(\; L, \; M \;)$

Now, I will post the work I've come up with so far, and hopefully any mistakes or major errors will be found by the ever diligent members of Math Help Forum. But my supposed proof uses a property of limits I proved earlier in this textbook, so I'll place it immediately below and call it "property [A]":

PROPERTY [A]

If, for all $x$, the following holds:

$f(x) \leq g(x)$

then it follows that:

$\lim_{x \to a} f(x) \leq \lim_{x \to a} g(x)$

given that these limits exist.
Now, back to the main problem; here's the work I've come up with so far. Let's look at the separate cases that need to be addressed:

CASE 1:

Suppose that:

$f(x) < g(x)$

for all $x$. Then, it follows from [A] that:

$L < M$

and from the above we get:

$\lim_{x \to a} \max(\; f(x), \; g(x) \;) = \lim_{x \to a} g(x) = M = \max(\; L, \; M \;)$

CASE 2:

Suppose that:

$g(x) < f(x)$

for all $x$. Then, by [A] it follows similarly (as in Case 1) that:

$M < L$

and from the above we get:

$\lim_{x \to a} \max(\; f(x), \; g(x) \;) = \lim_{x \to a} f(x) = L = \max(\; L, \; M \;)$
Now, I understand that there is a difference between $a < b$ and $a \leq b$, so I am not sure if the little change I made invalidates the use of property [A]. I figured it only did not use it in full strength, since the way I stated it did not allow for the possibility $f(x) = g(x)$, so that must be a third case, correct? I'm not sure how to prove Case 3. This is where I need some help. It's mostly due to the fact that I am confused as to what the following expression would mean, intuitively and in terms of the proof:

$\max(\; a, \; b \;)$

if

$a=b$

Also, I don't know if the above methodology towards a proof is valid either. To be completely rigorous, is it necessary to deconstruct this problem into the language of the definition of a limit (i.e. epsilon and delta inequalities)?

Thanks in advance for any help.

2. The third case in your problem is trivial. Do you not feel like you have made a massive assumption somewhere ?

3. Originally Posted by bobak
The third case in your problem is trivial. Do you not feel like you have made a massive assumption somewhere ?
I get the feeling of an error caused by assumption yes. But I have a hunch that it has to do with the difference between

$a < b$

and

$a \leq b$

and the implications that that has for "Property [A]" and its use in my proof. Maybe I'm wrong. Maybe this isn't the massive assumption you're talking about. In any case, can you please point out the massive assumption I've made, so that I can understand my mistake and better understand the correct proof?

4. You have assumed that you have a dichotomy between your functions f and g. For example, take f(x) = x and g(x) = x^2: for certain values of x we have f > g, and for others g > f. So you see that you cannot assume, for any two functions, that one is always greater than the other.

Can you quickly tell me the formal definition of $\lim_{x \to a} \max(\; f(x), \; g(x) \;) = \max(\; L, \; M \;)$ ?

5. Originally Posted by bobak
You have assumed that you have a dichotomy between your functions f and g. For example, take f(x) = x and g(x) = x^2: for certain values of x we have f > g, and for others g > f. So you see that you cannot assume, for any two functions, that one is always greater than the other.

Can you quickly tell me the formal definition of $\lim_{x \to a} \max(\; f(x), \; g(x) \;) = \max(\; L, \; M \;)$ ?
You're completely correct about the function thing! Man, what an oversight on my part. I apologize; I'm new to the real detailed stuff in terms of functions. I just got this introduction to analysis textbook a month ago, and don't have any classes to help out till fall. My ignorance is obvious in the incorrect assumption I made. Again, I'm sorry. That assumption seems obvious now.

I'm not exactly sure what it is you're looking for. Is this it:

$\max(\; L, \; M \;) = \frac{L + M + |L-M|}{2}$

???

Or do you mean something like this:

"The following line,

$\lim_{x \to a} \max(\; f(x), \; g(x) \;) = \max(\; L, \; M \;)$

means that:

$(\forall \delta > 0) \; (\exists \epsilon > 0) : (\forall x)$

$0 < |x-a| < \delta \Rightarrow |\frac{f(x)+g(x)+|f(x)-g(x)|}{2} - \frac{L + M + |L - M|}{2} | < \epsilon$

"

I don't know which one you mean.
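(As a quick aside, the closed form for the max is easy to sanity-check numerically. This is my own illustration, not part of the original question:)

```python
# Sanity check (my own sketch, not from the textbook): for any reals L, M,
# (L + M + |L - M|) / 2 should agree with the built-in max.
def max_via_abs(L, M):
    return (L + M + abs(L - M)) / 2

for L, M in [(3, 7), (7, 3), (5, 5), (-2, 1.5)]:
    assert max_via_abs(L, M) == max(L, M)  # holds in every case, including L = M
```

In particular, when $L = M$ the formula just returns that common value, which is one intuitive answer to the "Case 3" question above.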

6. I was talking about the latter, and you have got the definition wrong. Listen carefully:

Informally, when we say a function tends to a finite limit at a point, what we mean is that we can get that function as close as we want to the limit in some region around that point.

Formally, we show a limit exists as follows: I give you any positive number $\epsilon$, and you have to tell me how close I need to be to the point so that the function is within $\epsilon$ of the limit.

So I give you epsilon and you find delta. Notice that delta is a function of epsilon.

Make sure you understand this definition. I've looked at your other forum posts, and this question is not much different from some of the others you have posted; try to solve it by simply using the definition.

7. Originally Posted by bobak
I was talking about the latter, and you have got the definition wrong. Listen carefully:

Informally, when we say a function tends to a finite limit at a point, what we mean is that we can get that function as close as we want to the limit in some region around that point.

Formally, we show a limit exists as follows: I give you any positive number $\epsilon$, and you have to tell me how close I need to be to the point so that the function is within $\epsilon$ of the limit.

So I give you epsilon and you find delta. Notice that delta is a function of epsilon.

Make sure you understand this definition. I've looked at your other forum posts, and this question is not much different from some of the others you have posted; try to solve it by simply using the definition.
I'll be much more careful in the future in terms of dealing with the exact definition of a limit. But I have a question: where did the error occur in my definition of the limit for this problem? I do not hold a single doubt as to the fact that there indeed exists an error (since I'm new to epsilon-delta proofs) in my definition; I trust your mathematical judgement much more than mine. It's not a question of "How does that definition contain any errors to you?"; it's more "Can you point out the errors in my definition of a limit for the problem, as you asked me to provide, so that I won't make the same mistake in the future?"

Thanks.

8. Originally Posted by mfetch22
I'll be much more careful in the future in terms of dealing with the exact definition of a limit. But I have a question: where did the error occur in my definition of the limit for this problem? I do not hold a single doubt as to the fact that there indeed exists an error (since I'm new to epsilon-delta proofs) in my definition; I trust your mathematical judgement much more than mine. It's not a question of "How does that definition contain any errors to you?"; it's more "Can you point out the errors in my definition of a limit for the problem, as you asked me to provide, so that I won't make the same mistake in the future?"

Thanks.
you wrote

$\forall \delta > 0 \ \ \ \exists \epsilon > 0 \ \ s.t \ \ |x-a| < \delta \ \Rightarrow |f(x) - L| < \epsilon$
So I shall use this definition to "prove" that the function $f(x) = 0$ tends to a limit of 4 as x tends to 1.

So for all $\delta$ I shall pick $\epsilon = 5$; now it is definitely true that

$|x-1| < \delta \Rightarrow |f(x) - 4| < 5$

since $f(x) = 0$ and it is always true that $|0 - 4| = 4 < 5$.

what you meant to write is:

$\forall \epsilon> 0 \ \ \ \exists \delta > 0 \ \ s.t \ \ |x-a| < \delta \ \Rightarrow |f(x) - L| < \epsilon$
with this definition I cannot prove nonsense, because we quickly find a "bad epsilon". For the example above, we pick $\epsilon = 2$: then can we find a $\delta$ such that $|x-1| < \delta \ \Rightarrow \ |0 - 4| < 2$? Of course we cannot, because the conclusion is always false (and does not depend on x).

let me know if there is anything you don't understand.

9. Originally Posted by bobak
you wrote

So I shall use this definition to "prove" that the function $f(x) = 0$ tends to a limit of 4 as x tends to 1.
So for all $\delta$ I shall pick $\epsilon = 5$; now it is definitely true that
$|x-1| < \delta \Rightarrow |f(x) - 4| < 5$
since $f(x) = 0$ and it is always true that $|0 - 4| = 4 < 5$.

what you meant to write is:

with this definition I cannot prove nonsense, because we quickly find a "bad epsilon". For the example above, we pick $\epsilon = 2$: then can we find a $\delta$ such that $|x-1| < \delta \ \Rightarrow \ |0 - 4| < 2$? Of course we cannot, because the conclusion is always false (and does not depend on x).

let me know if there is anything you don't understand.
Thanks, I see it now. I'm still a little confused about the exact definition of a limit. It just seems very vague in its description and usage, specifically in the freedoms allowed in the proofs involving the definition. But I'm 100% certain that, to an educated mathematician, the definition of a limit is rigorously and sufficiently precise. Still, it's confusing me. Here are the two things that are mainly causing my confusion:

[1] In many proofs I've seen using the epsilon-delta definition of a limit, I often run into seemingly random stipulations. Further into the proof, it becomes obvious why these stipulations are useful (mainly, it seems, they are used to determine upper bounds of expressions), but it's not their use that confuses me. It's the ability to make the stipulations at all. Consider this example (don't expect me to be thoroughly rigorous below, I'm simply giving an example): say I want to prove that:

$\lim_{x \to 3} [x^2 - 9 ] = 0$

So I start by looking at

$0<|x-3| < \delta$

since that is the expression I get to work from. Then we want to show:

$|(x^2 - 9)-0| = |x^2-9| = |x-3| \cdot |x+3| < \epsilon$

so, I want to find an upper bound on the expression

$|x+3|$

Here's the part that I find to be very uncomfortable. Now, apparently, I'm allowed to just magically make a stipulation about the value of $|x- 3|$. To me, this is very uncomfortable territory. I mean, $x$ is supposed to stand for all the $x$ values of the function, so how in the world can somebody (in terms of "mathematically allowed" operations) just decide they desire that a certain expression be less than some convenient value? This just seems contrary to the "laws" of algebra. I'm sure that I'm completely wrong. But don't you agree that this ability to simply decide any particular bound on an expression seems very peculiar? Although, I did notice that in the proofs in my textbook, picking some desired stipulation on an expression was only ever done to expressions of the form $|x-a|$, referring to the expression in the inequality of the form:

$0<|x-a|< \delta$

Maybe this is just a coincidence of the author's choice of which expressions he was "allowed" to make any desired stipulation about, or maybe there's a mathematical reason behind his choice of which expression he was "allowed" to place an upper bound on, but who knows? Do you know whether this is a coincidence or not? Anyway, I think I've made the source of my confusion clear. Back to the problem:

Say I make this stipulation below:

$|x-3| < 1$

then it follows that:

$-1 < x - 3 < 1$

and

$2 < x < 4$

and finally

$5 < x + 3 < 7$

Therefore we know that

$|x + 3| = x+3 < 7$

and we can assert that:

$|x^2-9|< 7 \cdot |x-3|$

So if we set $\delta = \min(1, \; \frac{\epsilon}{7})$ (keeping the stipulation $|x-3| < 1$ in force), we can go from:

$0 < |x-3| < \delta$

and arrive at:

$7 \cdot |x-3| < 7 \delta$

or

$7 \cdot |x-3| < 7(\epsilon / 7)$

which becomes

$7 \cdot |x-3| < \epsilon$

but we know that:

$|x^2 - 9 | < 7 \cdot |x-3|$

therefore we arrive at:

$|x^2 - 9| < 7 \cdot |x-3| < \epsilon$

so

$|x^2-9| < \epsilon$

So, obviously, making the stipulation that $|x-3| < 1$ does become handy, maybe even invaluable, in the process of proving that:

$0 < |x-3| < \delta \;\;\; \Rightarrow \;\;\; |(x^2-9)-0|< \epsilon$

It's still uncomfortable; the seemingly lackadaisical approach to the allowance of such apparently arbitrary stipulations feels a lot like a hole in the logic. Granted, I know this is not so, and it is rather a misunderstanding within my own confused mind. Regardless, I have one other source of confusion about the epsilon-delta limit definition and proofs, and I would like some assistance.
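For what it's worth, the argument above is easy to check numerically. Here is a minimal sketch (my own illustration, with the stipulation folded in as $\delta = \min(1, \epsilon/7)$):

```python
import random

# Sketch (my own check, assuming delta = min(1, eps/7) as in the argument above):
# every x with 0 < |x - 3| < delta should satisfy |x^2 - 9| < eps.
def delta_for(eps):
    return min(1.0, eps / 7)

for eps in [10.0, 1.0, 0.01]:
    d = delta_for(eps)
    for _ in range(1000):
        x = 3 + random.uniform(-d, d)   # sample points inside the delta-window
        if x != 3:
            assert abs(x**2 - 9) < eps  # the implication holds at every sampled x
```

Such a check cannot prove the implication, of course, but it is a quick way to catch a wrongly chosen delta.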

[2] Secondly, since we can make these random stipulations however we may feel, and since we are only required to find some $\delta > 0$, who's to say we can't find some numbers $M$ and $L$ such that, for some supposedly clever choice of $\delta$, these turn out to be two separate numbers for the same limit? I mean, this whole ability to decide "what is less than what", in whatever manner may be the most convenient at the time, seems to make no sense. How do we know for sure that, using this seemingly vague definition of a limit, we can't derive some strange results? What is stopping us from stumbling upon some numbers $\delta_1>0, \; \delta_2>0, \; \epsilon > 0, \; a, \; M, \; \mathrm{and} \; L; \; \; \mathrm{and \; a \; function \;} f(x)$, all such that the two following inequalities:

$0 < |x-a| < \delta_1$

and

$0 < |x-a| < \delta_2$

such that these inequalities manage to produce/imply both of the following lines:

$|f(x) - L|< \epsilon$

AND

$|f(x) - M| < \epsilon$

???????????????????????

What I mean is: what is it (exactly) about the epsilon-delta definition of a limit that ensures we cannot use it to prove something obviously false, namely that for some function $f$ we could have:

$\lim_{x \to a} f(x) = L$

and

$\lim_{x \to a} f(x) = M$

and if we consider both of the above limits to be from the same direction, what ensures that it must always be true that, in the above case, $L$ would equal $M$? "A function can approach one limit and one limit only from the same direction, at some point $a$." What, in the epsilon-delta definition, guarantees that the preceding sentence is true?

Any light that you can shed on these questions would be greatly appreciated.

10. Originally Posted by mfetch22
Thanks, I see it now. I'm still a little confused about the exact definition of a limit. It just seems very vague in its description and usage, specifically in the freedoms allowed in the proofs involving the definition. But I'm 100% certain that, to an educated mathematician, the definition of a limit is rigorously and sufficiently precise. Still, it's confusing me. Here are the two things that are mainly causing my confusion:

[1] In many proofs I've seen using the epsilon-delta definition of a limit, I often run into seemingly random stipulations. Further into the proof, it becomes obvious why these stipulations are useful (mainly, it seems, they are used to determine upper bounds of expressions), but it's not their use that confuses me. It's the ability to make the stipulations at all. Consider this example (don't expect me to be thoroughly rigorous below, I'm simply giving an example): say I want to prove that:

$\lim_{x \to 3} [x^2 - 9 ] = 0$

.....
About the definition of the limit: it is a very, very natural definition. As I have already said, a limit exists if you can show me that you can get as close as you want to it. Read what I wrote in my earlier posts and compare it to the definition (maybe it might help for you to draw a diagram of the regions represented to make it more clear).

Now, for this example given here, what you are proving is slightly silly, because the function under discussion is continuous (maybe you have not got onto this chapter yet, but by definition the limit of a continuous function at a point and its value are the same). This is how you should present the argument for this question.

We are given $\epsilon$ and we want to find a $\delta$ such that when we are in the region $|x-3| < \delta$ we have $|x^2 - 9| < \epsilon$.

What we notice is that when we have x such that $|x-3| < \delta$, then $6 - \delta < |x+3| < 6 + \delta$ (this should be obvious; if it is not, draw out the regions or do the algebra),

so we get $|x-3| < \delta \ \Rightarrow \ |x-3||x+3| < \delta ( 6 + \delta)$

Happy so far? Good.

now we want $|x-3| < \delta \ \Rightarrow \ |x-3||x+3| < \epsilon$

Remember we are working backwards here: we want to show that for any epsilon we can find delta, so we need delta in terms of epsilon. So at this point you have two choices: you can either solve the quadratic equation $\epsilon = \delta ( 6 + \delta)$, or do what the author of your book did and treat the cases $\epsilon < 7$ and $\epsilon \geq 7$ separately in order to avoid solving the equation.

The only problem I see is that in your text the author neglected the case $\epsilon \geq 7$. This case is somewhat trivial, though.
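If one does take the quadratic route, $\epsilon = \delta(6+\delta)$ solves to $\delta = \sqrt{9+\epsilon} - 3$ (the positive root). A quick numerical check of this (my own sketch, not from the post):

```python
import math
import random

# Sketch of the "solve the quadratic" route mentioned above (my own check):
# delta * (6 + delta) = eps  =>  delta = sqrt(9 + eps) - 3 (positive root).
def delta_from_quadratic(eps):
    return math.sqrt(9 + eps) - 3

for eps in [7.5, 1.0, 0.001]:
    d = delta_from_quadratic(eps)
    assert abs(d * (6 + d) - eps) < 1e-9       # it really solves the quadratic
    for _ in range(500):
        x = 3 + random.uniform(-d, d)
        # |x - 3| < d implies |x - 3| * |x + 3| < d * (6 + d) = eps
        assert abs(x**2 - 9) <= eps + 1e-9
```

This route avoids the case split entirely, at the cost of solving a quadratic.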

Originally Posted by mfetch22
[2] Secondly, since we can make these random stipulations however we may feel, and since we are only required to find some $\delta > 0$, who's to say we can't find some numbers $M$ and $L$ such that, for some supposedly clever choice of $\delta$, these turn out to be two separate numbers for the same limit? I mean, this whole ability to decide "what is less than what", in whatever manner may be the most convenient at the time, seems to make no sense. How do we know for sure that, using this seemingly vague definition of a limit, we can't derive some strange results? What is stopping us from stumbling upon some numbers $\delta_1>0, \; \delta_2>0, \; \epsilon > 0, \; a, \; M, \; \mathrm{and} \; L; \; \; \mathrm{and \; a \; function \;} f(x)$, all such that the two following inequalities:

$0 < |x-a| < \delta_1$

and

$0 < |x-a| < \delta_2$

such that these inequalities manage to produce/imply both of the following lines:

$|f(x) - L|< \epsilon$

AND

$|f(x) - M| < \epsilon$

???????????????????????

What I mean is: what is it (exactly) about the epsilon-delta definition of a limit that ensures we cannot use it to prove something obviously false, namely that for some function $f$ we could have:

$\lim_{x \to a} f(x) = L$

and

$\lim_{x \to a} f(x) = M$

and if we consider both of the above limits to be from the same direction, what ensures that it must always be true that, in the above case, $L$ would equal $M$? "A function can approach one limit and one limit only from the same direction, at some point $a$." What, in the epsilon-delta definition, guarantees that the preceding sentence is true?

Any light that you can shed on these questions would be greatly appreciated.

I am surprised you are asking this question, because any decent text on analysis will discuss this at some point. I'll briefly tell you how to deal with it.

$|L - M| = | L - M + f(x) - f(x) | = |(L - f(x)) + (f(x) - M)|$ now we apply the triangle inequality

$|L - M| \leq |f(x) - L| + |f(x) - M|$

For any epsilon, I want to have $|f(x) - L| < \frac{\epsilon}{2}$ and $|f(x) - M| < \frac{\epsilon}{2}$. That is fine, because we know those limits exist, so we get $\delta_1$ and $\delta_2$ such that:

$|x -a| < \delta_1 \ \Rightarrow \ |f(x) - L| < \frac{\epsilon}{2}$

$|x -a| < \delta_2 \ \Rightarrow \ |f(x) - M| < \frac{\epsilon}{2}$

and now we pick $\delta = \min( \delta_1 , \delta_2)$

giving $|x-a| < \delta \ \Rightarrow \ |L - M| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$

so we can make the difference between L and M as small as we like.
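A concrete numerical illustration of why a second candidate limit fails (my own hypothetical example, not from the thread: $f(x) = x$, $a = 2$, true limit $L = 2$, and a wrong candidate $M = 2.5$):

```python
# Hypothetical illustration (my own): f(x) = x near a = 2 has limit L = 2.
# For the wrong candidate M = 2.5, the "bad epsilon" eps = |L - M| / 2
# defeats every delta: there is always a point inside the delta-window
# whose value stays at least eps away from M.
def f(x):
    return x

L, M = 2.0, 2.5
eps = abs(L - M) / 2  # 0.25, a "bad epsilon" for the candidate M

for delta in [1.0, 0.1, 0.001]:
    x = 2 + min(delta, eps) / 2   # a point with 0 < |x - 2| < delta, close to a
    assert abs(x - 2) < delta
    assert abs(f(x) - M) >= eps   # so this delta fails for the candidate M
```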

11. Originally Posted by bobak
About the definition of the limit: it is a very, very natural definition. As I have already said, a limit exists if you can show me that you can get as close as you want to it. Read what I wrote in my earlier posts and compare it to the definition (maybe it might help for you to draw a diagram of the regions represented to make it more clear).

Now, for this example given here, what you are proving is slightly silly, because the function under discussion is continuous (maybe you have not got onto this chapter yet, but by definition the limit of a continuous function at a point and its value are the same). This is how you should present the argument for this question.

We are given $\epsilon$ and we want to find a $\delta$ such that when we are in the region $|x-3| < \delta$ we have $|x^2 - 9| < \epsilon$.

What we notice is that when we have x such that $|x-3| < \delta$, then $6 - \delta < |x+3| < 6 + \delta$ (this should be obvious; if it is not, draw out the regions or do the algebra),

so we get $|x-3| < \delta \ \Rightarrow \ |x-3||x+3| < \delta ( 6 + \delta)$

Happy so far? Good.

now we want $|x-3| < \delta \ \Rightarrow \ |x-3||x+3| < \epsilon$

Remember we are working backwards here: we want to show that for any epsilon we can find delta, so we need delta in terms of epsilon. So at this point you have two choices: you can either solve the quadratic equation $\epsilon = \delta ( 6 + \delta)$, or do what the author of your book did and treat the cases $\epsilon < 7$ and $\epsilon \geq 7$ separately in order to avoid solving the equation.

The only problem I see is that in your text the author neglected the case $\epsilon \geq 7$. This case is somewhat trivial, though.

I am surprised you are asking this question, because any decent text on analysis will discuss this at some point. I'll briefly tell you how to deal with it.

$|L - M| = | L - M + f(x) - f(x) | = |(L - f(x)) + (f(x) - M)|$ now we apply the triangle inequality

$|L - M| \leq |f(x) - L| + |f(x) - M|$

For any epsilon, I want to have $|f(x) - L| < \frac{\epsilon}{2}$ and $|f(x) - M| < \frac{\epsilon}{2}$. That is fine, because we know those limits exist, so we get $\delta_1$ and $\delta_2$ such that:

$|x -a| < \delta_1 \ \Rightarrow \ |f(x) - L| < \frac{\epsilon}{2}$

$|x -a| < \delta_2 \ \Rightarrow \ |f(x) - M| < \frac{\epsilon}{2}$

and now we pick $\delta = \min( \delta_1 , \delta_2)$

giving $|x-a| < \delta \ \Rightarrow \ |L - M| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$

so we can make the difference between L and M as small as we like.
Actually, my textbook did cover this, and the proof it used was exactly the same as the proof you have supplied here. I was simply hoping that somebody would know of a different proof. I didn't understand this proof well; I can certainly follow the logic, but it wasn't "ringing clear" to me. I was trying to find a proof that would do that, so that I could ensure my understanding.

As for the first part of your post, THANK YOU! I don't know why I didn't consider that one could simply manipulate the delta itself, instead of plugging in some value and going from there. By keeping the variable $\delta$ within the steps you've used, everything seems to make so much more sense.

I appreciate you taking the time to help me through this epsilon-delta stuff; I'm sure it's annoying to have to explain things that seem blatantly obvious. But like I said, I'm new to this stuff, and the advice you've given has helped a lot. The definition of a limit seems to be incredibly precise, now that I better understand it. Thanks again.

12. I do not think this question needs an epsilon-delta proof.
There is no need to reinvent the wheel.

Given $\displaystyle \lim _{x \to a} f(x) = M\;\& \,\lim _{x \to a} g(x) = N$.

From that we know at once that $\displaystyle \lim _{x \to a} \left[ f(x) + g(x) \right] = M + N\;\& \,\lim _{x \to a} \left| {f(x) - g(x)} \right| = \left| {M - N} \right|$.

Why is that sufficient for this problem?
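The idea can be illustrated numerically (my own example functions, not from the post: $f(x) = x$, $g(x) = x^2$ near $a = 0.5$, so $M = 0.5$, $N = 0.25$):

```python
# Illustration (my own example): with f(x) = x and g(x) = x**2 near a = 0.5,
# the limits are M = 0.5 and N = 0.25, and the combination
# max(f, g) = (f + g + |f - g|) / 2 approaches max(M, N) = 0.5.
def max_fg(x):
    f, g = x, x**2
    return (f + g + abs(f - g)) / 2

a, M, N = 0.5, 0.5, 0.25
for h in [0.1, 0.01, 0.001]:
    assert abs(max_fg(a + h) - max(M, N)) < 2 * h   # error shrinks with h
    assert abs(max_fg(a - h) - max(M, N)) < 2 * h
```

The point of the hint is that once the sum and absolute-difference limits are known, the max limit follows from the identity with no further epsilon-delta work.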

13. Originally Posted by Plato
I do not think this question needs an epsilon-delta proof.
There is no need to reinvent the wheel.

Given $\displaystyle \lim _{x \to a} f(x) = M\;\& \,\lim _{x \to a} g(x) = N$.

From that we know at once that $\displaystyle \lim _{x \to a} \left[ f(x) + g(x) \right] = M + N\;\& \,\lim _{x \to a} \left| {f(x) - g(x)} \right| = \left| {M - N} \right|$.

Why is that sufficient for this problem?
There is no difference at all in the proofs; you have just used the result about limits of finite sums, which requires a very similar epsilon-delta argument to prove.