Would this work?
You need to prove that the function is continuous in x = 1, that means you have to prove:
∀ε > 0, ∃δ > 0: |x - 1| < δ ⟹ |f(x) - 1| < ε, where f(x) = 1/x.
Choose ε > 0 arbitrary, we obtain:
|f(x) - 1| = |1/x - 1| = |x - 1|/|x|.
You know that x lies in a neighbourhood of 1, thus choose an upper bound for 1/|x| so that you can get rid of the |x| in the denominator.
to continue, suppose that we require that, no matter what, δ ≤ 1/2. this means that x is between 1/2 and 3/2, hence |x| = x, and we have:
|x| ≥ 1/2, so that 1/|x| ≤ 2. if we also require that δ ≤ ε/2, we have:
|f(x) - 1| = |x-1|/|x| ≤ 2|x-1| < 2δ ≤ 2(ε/2) = ε. therefore, one possibility is: δ = min(1/2,ε/2).
(intuitively, you can see we want δ < 1, for if we let x get near 0, f(x) behaves very badly, and it will be hard to "make sure it's changing less than ε").
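a quick numeric spot check of the choice δ = min(1/2, ε/2) above -- this just samples points in (1 - δ, 1 + δ) and confirms |1/x - 1| < ε, so it's a sanity check of the algebra, not a proof (the helper names here are mine):

```python
def f(x):
    # the function from the proof above: f(x) = 1/x
    return 1.0 / x

def delta_for(eps):
    # the choice derived above: δ = min(1/2, ε/2)
    return min(0.5, eps / 2.0)

def check(eps, samples=10000):
    # sample x strictly inside (1 - δ, 1 + δ) and confirm |f(x) - 1| < ε
    delta = delta_for(eps)
    for i in range(1, samples):
        x = 1.0 - delta + (2.0 * delta) * i / samples
        if abs(f(x) - 1.0) >= eps:
            return False
    return True

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, check(eps))  # prints True for each ε
```

notice the check passes even for the "big" ε = 1: as the proof says, a δ that works for a small ε automatically works for larger ones.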
@Plato
Thanks for helping me see the problem a little better.
@Siron
I apologize, your statements were correct. I forgot about , and . I guess I had run into the wall too many times and got discouraged. Lol! Also, your word choice caught me off guard; though, one day I will be able to correctly interpret statements in this manner by self-studying. I'm currently studying Anatomy of Mathematics by R. B. Kershner for fun, and hopefully I can get my hands on many more post-modern books of mathematics. A lot of 21st-century books are not written with the same understanding!
i think it can be hard to see how "epsilon-delta" arguments capture the essence of continuity.
intuitively, we think of continuous functions as ones for which, if we only move "over" (left-or-right) a "little bit", we only move "up-or-down" a "little bit". perhaps a little more clearly, we mean if x is near the number a, then f(x) should be near the number f(a).
so one of the first things we do is quantify what we mean by "near". the distance between two numbers a and b (how far apart they are) can be expressed by |a-b|. so to say that x is near a is to say:
|x - a| < δ, where δ = "a small positive number".
we want to have this imply f(x) is near f(a), so:
|f(x) - f(a)| < ε, where ε = "another small number".
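to see the two "near" conditions interact, here is a little python sketch (names mine, and sampling-based, so merely suggestive): given f, a, and ε, it halves a candidate δ until every sampled x with |x - a| < δ satisfies |f(x) - f(a)| < ε:

```python
def find_delta(f, a, eps, start=1.0, samples=1000, max_halvings=50):
    # halve a candidate δ until all sampled x with |x - a| < δ
    # satisfy |f(x) - f(a)| < ε; sampling can miss bad points,
    # so this only suggests a δ, it does not prove one works
    delta = start
    for _ in range(max_halvings):
        if all(
            abs(f(a - delta + 2 * delta * i / samples) - f(a)) < eps
            for i in range(1, samples)
        ):
            return delta
        delta /= 2.0
    return None  # gave up -- hints at a discontinuity at a

# f(x) = x^2 is continuous at a = 2, so some δ turns up:
print(find_delta(lambda x: x * x, 2.0, 0.1))
```

the point is the order of the search: ε is handed to us first, and only then do we go hunting for a δ -- which is exactly the structure of the definition.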
so why do we start with ε first, and THEN find δ? this is hard to explain. but the basic idea is: discontinuities can be very slight: the graph can be broken, but you might only see it under a magnifying glass, or a microscope. so we don't just want the difference between f(x) and f(a) to be "small", we want it to be arbitrarily small (constant functions are nice, they don't vary a bit, so f(x) - f(a) = 0, no matter what. but other functions usually vary a lot more than that. they might even go up and down very rapidly even as x travels a short distance, but we want to consider these as continuous, too).
so if we want the difference between f(x) and f(a) to be "arbitrarily small", we have to find a delta for EACH ε > 0 (especially the very tiny ones). note that the closer f(x) is to "flat", the bigger a delta we can use, since f(x) doesn't change very much even if x changes a LOT. so to ensure that "ε" is "arbitrarily small", we might not even need a "small" δ (but if a "big one" works, a "smaller" one will, too).
another way to look at this is: "how large a deviation in our input" (think of δ = "deviation") can we tolerate while still keeping the "error of our results" small (think of ε = "error")? for continuous functions, a small error in input, should mean a small error in output. functions that are discontinuous, like:
f(x) = -1, x < 0
f(x) = 1, x ≥ 0
fail this in a BIG way: we could move a TINY little bit left of 0 (like -0.000001), or the same tiny bit right of 0 (0.000001), and yet the difference of values is HUGE compared to the error of input (it is 2, which is 2,000,000 times the "delta", and making delta smaller doesn't help). you can see that if we pick 0 < ε < 2, we're not going to find ANY δ that works for a = 0.
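the same kind of numeric check makes the failure concrete (python, names mine): for any δ, the point x = -δ/2 is within δ of 0, yet |f(x) - f(0)| = 2, so no δ can keep the error under any ε ≤ 2:

```python
def step(x):
    # the discontinuous function above: -1 for x < 0, 1 for x >= 0
    return -1.0 if x < 0 else 1.0

def delta_works(delta, eps=1.0):
    # test one point within δ of 0 but left of it;
    # the jump there is 2, which is never < ε when ε <= 2
    x = -delta / 2.0
    return abs(step(x) - step(0.0)) < eps

for d in (0.1, 0.001, 1e-9):
    print(d, delta_works(d))  # always False: shrinking δ never helps
```

contrast this with the f(x) = 1/x check earlier in the thread, where shrinking δ always eventually worked.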