1. Integration of a summation

Does $\displaystyle \int_{0}^{\infty} \frac{te^{-t}}{1+e^{-t}} \ dt = \int_{0}^{\infty} t \sum_{k=1}^{\infty} (-1)^{k-1} e^{-kt} \ dt$ just because $\displaystyle e^{-t} <1$ for $\displaystyle 0 <t<\infty$?

Or is it also because the geometric series converges uniformly?

And does it matter that $\displaystyle t=0$ is not in the interval?

What about $\displaystyle \int^{\infty}_{0} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k} e^{-kt} \ dt = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k} \int_{0}^{\infty} e^{-kt} \ dt$?

Does it suffice to show that $\displaystyle \sum^{\infty}_{k=1} \frac{(-1)^{k-1}}{k} e^{-kt}$ converges uniformly on $\displaystyle 0<t<\infty$? Is there a simpler justification?

I got stuck using the Weierstrass M-test.

Let $\displaystyle f_{k}(t) = \frac{(-1)^{k-1}}{k} \ e^{-kt}$

then $\displaystyle |f_{k}(t)| = \frac{e^{-kt}}{k} < \frac{1}{k}$ for $\displaystyle 0<t<\infty$

But $\displaystyle \sum \frac{1}{k}$ diverges. So that didn't work.
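As a quick numerical sanity check of the identity in the first question (not a proof, and assuming the standard term-by-term integral $\displaystyle \int_{0}^{\infty} t e^{-kt} \ dt = \frac{1}{k^{2}}$), one can compare both sides with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Left-hand side: the integral from the first question.
lhs, _ = quad(lambda t: t * np.exp(-t) / (1.0 + np.exp(-t)), 0, np.inf)

# Right-hand side: integrating term by term gives
#   sum_{k>=1} (-1)^(k-1) * integral_0^inf t e^{-kt} dt = sum_{k>=1} (-1)^(k-1) / k^2,
# which is the Dirichlet eta function at 2, i.e. pi^2 / 12.
rhs = sum((-1) ** (k - 1) / k**2 for k in range(1, 100001))

print(lhs, rhs, np.pi**2 / 12)  # all three agree to many decimal places
```

Both sides come out to $\pi^{2}/12 \approx 0.8224670$, consistent with the interchange being valid.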

2. For your first question, you do not need uniform convergence, because it is just a rewriting of the integrand. However, you do need to be careful about $t=0$, since the summation diverges there; this can be fixed by using a limit.

For the next question: the Weierstrass M-test fails because you used the weak bound $\displaystyle \frac{e^{-kt}}{k} < \frac{1}{k}$.
Why not try the bound $\displaystyle \frac{e^{-kt}}{k} \leq e^{-kt}$?

3. So I should write it as $\displaystyle \lim_{b \to 0^{+}} \int_{b}^{\infty} t \sum_{k=1}^{\infty} (-1)^{k-1} e^{-kt} \ dt$ ?

For the second one, $\displaystyle e^{-kt} < e^{-k(0)} = 1$ ? What am I not understanding?

4. Originally Posted by Random Variable
So I should write it as $\displaystyle \lim_{b \to 0^{+}} \int_{b}^{\infty} t \sum_{k=1}^{\infty} (-1)^{k-1} e^{-kt} \ dt$ ?
Yes, you can do this, but then you have to prove that the two integrals are equal. Alternatively, you can fill in the limit directly and define the integrand to be $0$ at that point.

For the second one, $\displaystyle e^{-kt} < e^{-k(0)} = 1$ ? What am I not understanding?
Yes, you are correct, but every time you apply an inequality you loosen the bound.
For example, $\displaystyle e^{-kt} < 1 < t + 100$ as well, but if you use that bound, the comparison series clearly diverges.
If you bound more conservatively, the test will pass.
My hint was to use this tighter bound: $\displaystyle \frac{e^{-kt}}{k} \leq e^{-kt}$

5. I can't say $\displaystyle e^{-kt} < e^{-k}$ because that's not true for $\displaystyle 0<t<1$. I'm lost.

6. $\displaystyle \frac{1}{k} \leq 1$ for $\displaystyle k \geq 1$ right?

Multiply both sides by $\displaystyle e^{-kt}$ which should be greater than zero. You get

$\displaystyle \frac{e^{-kt}}{k} \leq e^{-kt}$

Right?

7. I understand that $\displaystyle \frac{e^{-kt}}{k} \le e^{-kt}$. But for $\displaystyle 0<t<\infty$, $\displaystyle e^{-kt}$ is largest as $\displaystyle t \to 0^{+}$. That's why I said $\displaystyle e^{-kt} \le e^{-k(0)} = 1$. But then the test fails again.

8. [Ignore this, this is incorrect]

For $\displaystyle 0<t<\infty$, $\displaystyle e^{-kt} = (e^{-t})^k$
And $\displaystyle e^{-t} < 1$

Notice the strict less than. If it is strictly less than, then the summation is just a geometric series.

9. Actually Random Variable, I was wrong.
You are correct, the tightest bound would be 1, and this would cause the series to diverge.

The conclusion is that you cannot use the Weierstrass M-test in this case. However, the M-test is only a sufficient condition: it can show uniform convergence, but failing it does not disprove uniform convergence. This means you need to prove uniform convergence some other way. Since the series is alternating, this is easier (what is the maximum error after $n$ terms of a convergent alternating series?). You can apply the Cauchy criterion.
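To make the alternating-series hint concrete, here is a sketch using only the series defined above (the standard alternating series estimate, not something stated in the thread):

```latex
% For each fixed t > 0, the terms a_k(t) = e^{-kt}/k are positive and
% strictly decreasing in k, so the alternating series estimate applies:
% the tail is bounded by its first omitted term,
\left| \sum_{k=n+1}^{\infty} \frac{(-1)^{k-1}}{k} \, e^{-kt} \right|
   \;\le\; \frac{e^{-(n+1)t}}{n+1} \;\le\; \frac{1}{n+1}.
% The final bound is independent of t, which is exactly the kind of
% uniform estimate the M-test could not deliver here.
```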

10. I was looking to find an upper bound on the sequence of functions that was independent of $\displaystyle t$. But that's not necessary?

11. How about the following using the formal definition of uniform convergence?

Given any $\displaystyle \epsilon > 0$ and any $\displaystyle t \in (0,\infty)$

Let $\displaystyle N = \frac{1}{\epsilon}$

Then for $\displaystyle n > N, |f_{n}(x)-f(x)| = |(-1)^{n} \ \frac{e^{-nt}}{n} - 0| = \frac{e^{-nt}}{n}$ $\displaystyle < \frac{1}{n} < \frac{1}{N} = \epsilon$

12. Where does x come from? And what is your f(x)?
Careful: you are trying to show that the summation of the f_n converges uniformly to an f.
Try to use the Cauchy criterion.

13. Sorry. It should be $\displaystyle f_{n}(t)$ and $\displaystyle f(t) = \lim_{ n \to \infty} f_{n}(t) = 0$

14. f(t) should be the limit of a *series* (should have a summation somewhere) not the limit of a sequence.

15. I was using the definition for the uniform convergence of a sequence. The formal definition of the uniform convergence of a series won't be helpful because I don't know to what function the series is converging. And isn't Cauchy's criterion useful for showing that a series doesn't converge uniformly, not for proving that it does?

Page 1 of 2