Does pointwise convergence of continuous functions on a compact set to a continuous limit imply uniform convergence on that set?
I think yes.
Proof.
Suppose that $(M, d)$ is a metric space, let $K$ be a compact subset of $M$, and let $f_n : K \to \mathbb{R}$ be continuous functions such that there exists a continuous function $f : K \to \mathbb{R}$ with $f_n \to f$ pointwise.
By the definition of pointwise convergence, for each $x \in K$ and each $\varepsilon > 0$ there exists $N = N(x, \varepsilon)$ such that $n \ge N$ implies $|f_n(x) - f(x)| < \varepsilon$.
Now, what I want to show is that for each $\varepsilon > 0$ there exists $N = N(\varepsilon)$ such that $n \ge N$ implies $|f_n(x) - f(x)| < \varepsilon$ for every $x \in K$, so an $N$ that doesn't depend on $x$.
But I find it difficult to work through this; perhaps my understanding of uniform convergence is still poor. Am I on the right track though?
Thanks!
Okay, here is my formal proof.
Define $f_n : [0, 1] \to \mathbb{R}$ by $f_n(x) = n x^n (1 - x)$.
Claim: This sequence of functions converges pointwise to the zero function $f \equiv 0$.
Proof.
For $x = 0$ and $x = 1$, we would have $f_n(x) = 0$ for every $n$, so the convergence is trivial at the endpoints.
Now fix $0 < x < 1$ and let $\varepsilon > 0$ be given.
I want to pick $N$ so that for $n \ge N$, we would have $|f_n(x) - 0| = n x^n (1 - x) \le n x^n < \varepsilon$. I'm stuck here: I know that $n x^n$ would converge to $0$, but how would I prove that? What $N$ should I pick to ensure this distance is less than $\varepsilon$?
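A quick numerical sanity check (not a proof) of the pointwise claim: for any fixed $x \in (0,1)$, the values $f_n(x) = n x^n (1-x)$ shrink toward $0$ as $n$ grows. The choice $x = 0.9$ below is just an illustration.

```python
def f(n, x):
    """The sequence of functions f_n(x) = n * x^n * (1 - x)."""
    return n * x**n * (1 - x)

x = 0.9  # any fixed point of (0, 1) behaves the same way
for n in (10, 100, 1000):
    print(n, f(n, x))
# The printed values decrease toward 0, consistent with
# pointwise convergence of f_n to the zero function.
```

Note that the $n$ needed to make $f_n(x)$ small grows as $x$ approaches $1$, which is a hint that the convergence might not be uniform.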
Claim: This sequence of functions does not converge uniformly to $0$.
Proof.
Pick some fixed $x \in (0, 1)$; then for each index $n$, we will have $f_n(x) = n x^n (1 - x)$. But doesn't this still converge to $0$? Did I pick the wrong $x$?
Thanks!!!
Both parts of this are actually quite tricky.
Good start. So we now want to show that if $0 < x < 1$ then $n x^n \to 0$ as $n \to \infty$. As you can see, this is not obvious. One sneaky way to prove it is to notice that the series $\sum_{n=1}^{\infty} n x^n$ converges if $|x| < 1$ (easily shown by using the ratio test), and therefore the $n$th term $n x^n$ of the series must tend to $0$.
To say that a sequence of functions $\{f_n(x)\}$ converges uniformly to zero is the same as saying that the maximum value of $|f_n(x)|$ tends to $0$ as $n \to \infty$. In this case, the functions are all non-negative, so we can drop the absolute value signs and ask whether the maximum of $f_n(x)$ goes to $0$. By basic calculus, the maximum value of $f_n(x) = n x^n (1 - x)$ in the unit interval occurs when the derivative is $0$. This occurs when $x = \frac{n}{n+1}$, at which point the value of the function is $\left(\frac{n}{n+1}\right)^{n+1}$, which converges to $1/e$ as $n \to \infty$. Since this is greater than $0$, the sequence of functions does not converge uniformly.
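The maximum-value argument above can be checked numerically (a sanity check, not part of the proof): evaluating $f_n$ at its critical point $x = n/(n+1)$ gives values that settle near $1/e \approx 0.3679$ rather than $0$.

```python
import math

def max_value(n):
    """Maximum of f_n(x) = n * x^n * (1 - x) on [0, 1],
    attained at the critical point x = n / (n + 1)."""
    x_star = n / (n + 1)
    return n * x_star**n * (1 - x_star)   # equals (n/(n+1))**(n+1)

for n in (1, 10, 100, 1000):
    print(n, max_value(n))
print("1/e =", 1 / math.e)
# max_value(n) stays bounded away from 0 (it tends to 1/e),
# so sup |f_n| does not tend to 0: convergence is not uniform.
```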
As I said, it's a tricky question.
Sorry, that's my mistake. In one of the comments above, I wrote $x = 1 - \frac{n}{n+1}$ when it should have been $x = 1 - \frac{1}{n+1}$. (But you should have spotted that for yourself, because I explained that this is the value of $x$ where the function has its maximum value. If you take the trouble to differentiate, you soon see that this happens when $x = \frac{n}{n+1}$, which can equivalently be written $x = 1 - \frac{1}{n+1}$. What I did was to write one of those expressions and then edit it to write the other one, but forgot to alter the $n$ in the numerator.)
Wouldn't it be easier to apply L'Hôpital's rule to show that $n x^n$ converges to $0$ as $n$ approaches infinity? For $0 < x < 1$, we can always write $x = \frac{1}{1+a}$ for some positive number $a$. So the limit of $n x^n$ as $n$ goes to infinity equals the limit of $\frac{n}{(1+a)^n}$, which is of the $\frac{\infty}{\infty}$ form. So we can use L'Hôpital's rule in this case, which gives us the limit of $\frac{1}{n(1+a)^{n-1}}$, by differentiating both the numerator and the denominator. Thus, as $n$ approaches infinity, this will approach $0$. So we get that $n x^n$ converges to $0$. Is this correct?
There is certainly more than one way to show that $n x^n \to 0$ when $|x| < 1$. L'Hôpital's rule is one way to approach it, but you need to be more careful in applying it. When you write $\lim_{n\to\infty} \frac{n}{(1+a)^n} = \lim_{n\to\infty} \frac{1}{n(1+a)^{n-1}}$, you are differentiating the numerator as a function of $n$, and the denominator as a function of $a$. If you are using $n$ as the variable, then $\frac{d}{dn}(1+a)^n = (1+a)^n \ln(1+a)$. So the calculation should go like this: $\lim_{n\to\infty} \frac{n}{(1+a)^n} = \lim_{n\to\infty} \frac{1}{(1+a)^n \ln(1+a)} = 0$. That looks strange, because it's unusual to see $n$ used as the name for a real variable rather than an integer. But the method is correct.
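To illustrate the limit being computed here (again just a numeric check, with an arbitrary sample value $a = 0.25$, i.e. $x = 0.8$): $\frac{n}{(1+a)^n}$ collapses to $0$ very quickly once the exponential in the denominator takes over.

```python
a = 0.25  # sample value; then x = 1/(1+a) = 0.8

def seq(n):
    """The quantity n / (1+a)^n = n * x^n whose limit is being computed."""
    return n / (1 + a)**n

for n in (10, 100, 1000):
    print(n, seq(n))
# The values decrease rapidly toward 0, matching the
# L'Hopital computation lim n/(1+a)^n = lim 1/((1+a)^n ln(1+a)) = 0.
```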