Please explain why limits exist or not-DO NOT use ANY adv method to solve it

Hi, I've just started Pre-calculus this semester and wonder: can anyone explain to me why the following limits exist or not? Please __DO NOT use ANY advanced mathematical methods to solve it__.

No.1

lim_{n→∞} (-1)^{n}

The answer is "Does not exist". Please explain why.

No.2

lim_{n→∞} (-2)^{n}/n

The answer given by the tutor is "it has NO limit since |-2| > 1". Why "|-2| > 1"? Why NOT "-2 < 1"? I don't understand the whole statement, especially "|-2| > 1".

Re: Please explain why limits exist or not-DO NOT use ANY adv method to solve it

In the first, each subsequent term changes sign as you multiply by -1, so the terms never stop switching between 1 and -1.

In the second question, the lines outside the -2 mean that you take the absolute value. This means that you take the magnitude or just ignore the sign and make it positive. This is why it is greater than 1 and also why the terms in the sequence will just get bigger and bigger, hence no limit.
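If it helps, here is a quick numerical sketch in Python (the formulas (-1)^n and (-2)^n/n are the ones from the two problems; `Fraction` is just to keep the terms exact):

```python
from fractions import Fraction

# No.1: (-1)^n just keeps switching between -1 and +1.
seq1 = [(-1) ** n for n in range(1, 9)]
print(seq1)  # [-1, 1, -1, 1, -1, 1, -1, 1]

# No.2: (-2)^n / n -- the sign alternates AND the size keeps growing.
seq2 = [Fraction((-2) ** n, n) for n in range(1, 9)]
print([str(t) for t in seq2])  # ['-2', '2', '-8/3', '4', '-32/5', '32/3', '-128/7', '32']
```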

Re: Please explain why limits exist or not-DO NOT use ANY adv method to solve it

Hi Thanks for your reply.

Let me recap:

__For No.1,__

(-1)^{n} will __alternate__ between (+1) and (-1) when "n" is an "__even__" or an "__odd__" number respectively. However, as "n" approaches a BIGGER number, the same alternating sequence will continue [i.e. (+1) and (-1)] __REGARDLESS__ of how BIG "n" is. That's why NO limit exists for (-1)^{n}.

Am I correct?

__For No. 2,__

I don't quite understand when you mentioned " the lines outside the -2 mean that you take the absolute value. This means that you take the magnitude or just ignore the sign and make it positive. This is why it is greater than 1".

Please correct my understanding/explanation:

- if we use the same mathematical deduction as in No. 1 for (-2)^{n}: (-2)^{n} will also alternate between "__positive__" and "__negative__" signs when "n" is an "__even__" or an "__odd__" number respectively. And this alternating sequence of sign changes will continue [i.e. positive and negative] __REGARDLESS__ of how BIG "n" is. From this, I can say that no limit exists for (-2)^{n}.

However, when (-2)^{n} is DIVIDED by n, i.e. (-2)^{n}/n:

When "n" approaches a BIGGER number, (-2)^{n}/n will also approach either a (+) or (-) BIGGER number, since the numerator (-2)^{n} grows __FASTER__ than its denominator "n".

So, from the above, I can say that when "n" approaches a BIGGER number, (-2)^{n}/n will also approach either a (+) or (-) BIGGER number, and as such there is No Limit for (-2)^{n}/n.

I still DO NOT understand the answer given by the tutor - "it has NO limit since |-2| > 1".

What does "|-2| > 1" have to do with the Limit for the question (-2)^{n}/n?

Does this also mean that, if the absolute value were less than 1 instead, the sequence __DOES HAVE A LIMIT__?

Can someone enlighten me on this question?

Re: Please explain why limits exist or not-DO NOT use ANY adv method to solve it

What the tutor is saying is that if the absolute value of the -2 were less than 1, the numerator would not grow faster than the denominator; instead the terms would get smaller and smaller, and you would approach 0.
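You can see the contrast numerically. A small sketch comparing the given ratio -2 with an illustrative ratio of magnitude less than 1 (the -1/2 is my choice, not from the problem):

```python
# r^n / n for |r| > 1 grows without bound; for |r| < 1 it shrinks to 0.
big = [(-2.0) ** n / n for n in range(1, 21)]
small = [(-0.5) ** n / n for n in range(1, 21)]

print(big[-1])    # about 52428.8  -- magnitude keeps growing
print(small[-1])  # about 4.8e-08  -- magnitude keeps shrinking
```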

Re: Please explain why limits exist or not-DO NOT use ANY adv method to solve it

The two problems you have given are examples of sequences. Sequences are just functions with the natural numbers as domain, for example, your first one is the function:

f(n) = (-1)^{n}. It is customary to write f(n) as a_{n}, called "the n-th TERM" of the sequence. Sequences are also written like this:

a_{1},a_{2},a_{3},.... (where the dots mean we continue indefinitely).

We say the limit of a sequence exists if, given N large enough (it may have to be very large, like say, a million), a_{N} is very close to "some number" (the number we are calling "the limit", or "limiting value"). The way we actually PROVE this, is to pick some very SMALL positive number (traditionally called "epsilon"), and show that given that small number, we can pick N so that for every term past a_{N}, a_{n} is "within epsilon" of our limit (in other words, we have to have some idea what the limit is, first...and it can be challenging to find it).

A good example is the sequence:

1/2,3/4,7/8,15/16,.....

also written as:

a_{n} = (2^{n}-1)/2^{n}.

It would appear the terms of this sequence are getting "closer and closer" to 1, so we might suspect 1 is actually the limit.

SO we pick our "small number", ε. Now we need to find a "suitably large" integer N. Well, if ε is small, then 1/ε will be pretty big. Let's see what happens if we pick N > 1/ε.

Then:

1 - [(2^{N}-1)/2^{N}] = 2^{N}/2^{N} - [(2^{N}-1)/2^{N}]

= 1/2^{N}, and since 2^{N} > N,

1/2^{N} < 1/N < ε (since N > 1/ε).

So that N will do just fine. If we pick ε "very tiny", we can pick N "very huge", and get "as close as we like" to 1. That is what we MEAN when we say:

lim_{n→∞} a_{n} = lim_{n→∞} (2^{n}-1)/2^{n} = 1.

I know this sounds kind of technical, but we need some precise way of justifying the statement that the sequence:

1/2,3/4,7/8,15/16,....

"approaches 1", which is otherwise rather vague.
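The ε–N argument above is easy to check by machine. A sketch, using exact fractions so 2^n stays exact, with ε = 1/1000 and N = 1001 (any N > 1/ε works, just as in the proof):

```python
from fractions import Fraction

eps = Fraction(1, 1000)
N = 1001  # any integer N > 1/eps = 1000

def a(n):
    # a_n = (2^n - 1) / 2^n, so 1 - a_n = 1/2^n
    return Fraction(2 ** n - 1, 2 ** n)

# Every term past a_N is within eps of the limit 1 (sample a few).
ok = all(1 - a(n) < eps for n in range(N + 1, N + 6))
print(ok)  # True
```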

Before I continue, let's look at a sequence that HAS no limit:

1,2,3,....

or:

a_{n} = n.

Let's suppose that we had a limit, call it b, and see what goes wrong. If we did, given our small number ε, we can pick N so that for ANY a_{n}, with n > N,

|a_{n} - b| < ε.

(We use the absolute value signs because we're only interested in the SIZE of the difference, not whether it's "slightly more" or "slightly less".)

Well, even though it's not very small, we might pick ε = 1/2. Now if we had such a huge number N, we would have:

|a_{n} - b| < 1/2, for all n > N.

In particular, we would have:

|a_{N+1} - b| < 1/2, and |a_{N+2} - b| < 1/2.

And THIS would mean:

|a_{N+2} - a_{N+1}| = |a_{N+2} - b - (a_{N+1} - b)| ≤ |a_{N+2} - b| + |a_{N+1} - b| < 1/2 + 1/2 = 1.

(since |X + Y| ≤ |X| + |Y|, this is called the triangle inequality, and you'll need to get familiar with it later).

But...

|a_{N+2} - a_{N+1}| = |(N+2) - (N+1)| = |N + 2 - N - 1| = |2 - 1| = |1| = 1.

So the difference of two successive terms is both < 1 and = 1. That doesn't make sense, so we must not have a limit after all (the above is called a "reductio ad absurdum", or "argument by contradiction").

Now, let's look at YOUR two sequences:

The first is:

-1,1,-1,1,-1,1,......

We're not getting "close" to anything...in fact, each successive term is 2 away from the previous one. Any ε with 0 < ε < 1 isn't going to work at ALL: if we pick our limit close to 1, the next term is "too far away"; if we pick our limit close to -1, we have the same problem; and if we "split the difference" and pick a limit near 0, ALL the terms are too far away.
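Here is that failure as a quick sketch: whichever candidate limit b you try (1, -1, and 0 are the candidates mentioned above), some term of (-1)^n sits at least 1 away from it:

```python
terms = [(-1) ** n for n in range(1, 11)]

for b in (1, -1, 0):
    worst = max(abs(t - b) for t in terms)
    print(b, worst)  # the worst distance is at least 1 for every candidate b
```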

The next one is a bit more subtle:

Let's look at the first few terms:

-2, 2, -8/3, 4, -32/5, 32/3, -128/7, 32, ....

Admittedly, we get some nasty-looking fractions, but let's just look at what happens when n is a power of 2, say n = 2^{k}. We get the sub-sequence:

b_{k} = (-2)^{2^k}/2^{k}

= 2^{2^k}/2^{k} (since 2^{k} is going to be an even number)

= 2^{2^k - k}.

Now 2^{k} is going to get LOTS bigger than k, so 2^{k} - k is going to be some positive number that keeps getting bigger, and raising 2 to this power is going to be bigger still.

So at every n = 2^{k}, we are going to have an integer in our sequence which is getting quite big. In fact, if we continue our sequence long enough, we can make it bigger than any number we like, which is the "exact opposite" of "getting close to something" (the technical math-y way of saying this is: the sequence is "unbounded").
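Python's exact integers make this easy to watch; a sketch of the sub-sequence b_k = (-2)^(2^k)/2^k along n = 2^k for the first few k:

```python
# Along n = 2^k, (-2)^n / n is an integer that explodes in size.
vals = []
for k in range(1, 6):
    n = 2 ** k                   # n = 2, 4, 8, 16, 32
    vals.append((-2) ** n // n)  # exact: (-2)^n is positive since n is even
print(vals)  # [2, 4, 32, 4096, 134217728]
```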

Now, this may seem like a lot of stuff to throw at you, but I assure you, when it comes time for ACTUAL calculus, you will do stuff like this AND MORE.

For example, consider the question:

"What is a decimal approximation of a fraction?"

Let's take a nice simple fraction, like 1/3.

We get the successive approximations:

0.3

0.33

0.333

0.3333 etc.

As you can see, this is a sequence, and as you may suspect, its LIMIT is 1/3. That's right: decimal approximations (of things like pi, or e, or √2) actually involve LIMITS of rational sequences. And THAT, my friend, is what makes "real numbers" so useful, and hair-pulling, all at the same time. They are limits. We can't actually even begin to write down most real numbers; what we CAN write down is "approximations" of them, and to even talk about what that even MEANS, we need all this machinery of "small numbers", "large numbers", and "close enough" (the founders of calculus used terms like "sufficiently small" and "infinitesimally", which made people suspect they were just "making it all up").
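As a closing sketch, the truncation sequence for 1/3 can be generated and its gap to 1/3 computed exactly:

```python
from fractions import Fraction

# The n-digit truncations 0.3, 0.33, 0.333, ... of 1/3.
approx = [Fraction(int("3" * n), 10 ** n) for n in range(1, 6)]

# The gap 1/3 - a_n is exactly 1/(3 * 10^n), shrinking toward 0 --
# which is precisely what "the limit of the truncations is 1/3" means.
gaps = [Fraction(1, 3) - a for a in approx]
print([str(g) for g in gaps])  # ['1/30', '1/300', '1/3000', '1/30000', '1/300000']
```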