1. ## [SOLVED] Advanced help on Laplace transform

Dear friends,

I have seen some papers that refer to the following lemma.

Lemma. Let $\displaystyle f$ be a nonnegative continuous function on the half-line $\displaystyle [0,\infty)$, and assume that the abscissa of convergence $\displaystyle \sigma_{f}$ of the Laplace transform $\displaystyle F(s)$ of $\displaystyle f(t)$ is finite.
Then, $\displaystyle F(s)$ has a singularity at the point $\displaystyle s=\sigma_{f}$. More precisely, $\displaystyle {\color{red}{\lim\nolimits_{s\to\sigma_{f}}|F(s)|= \infty}}$.

It is said that the lemma can be found in the following reference, but I could not find it there.

• D. V. Widder. An Introduction to Transform Theory. Academic Press, New York, 1971.

I really need the proof of this lemma and do not know where to find it.
I would greatly appreciate any help in this direction.

Appendix.
Note that the statement colored in red is wrong.
See the proofs in the last post below.

2. Originally Posted by bkarpuz
...
Hi,

What do you think of $\displaystyle f(t)=\frac{1}{1+t^2}$?

This function is positive, continuous on $\displaystyle [0,+\infty)$, its abscissa of convergence is 0, and yet its Laplace transform at 0 is finite. Thus you don't have $\displaystyle \lim_{s\to\sigma}|F(s)|=\infty$.
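To see this boundedness numerically, here is a minimal Python sketch (my own illustration, not part of the thread; the name `X`, the substitution $t=\tan\theta$, and the midpoint rule are arbitrary choices) that evaluates $\displaystyle X(s)=\int_0^\infty \frac{e^{-st}}{1+t^2}dt$ for several $\displaystyle s\geq 0$:

```python
import math

def X(s, n=200_000):
    """Laplace transform of 1/(1+t^2) at s >= 0.

    The substitution t = tan(theta) turns the integral into
    int_0^{pi/2} exp(-s*tan(theta)) dtheta, whose integrand is
    bounded by 1; it is evaluated with the midpoint rule.
    """
    h = (math.pi / 2) / n
    return h * sum(math.exp(-s * math.tan((k + 0.5) * h)) for k in range(n))

# X increases as s decreases to 0, yet stays bounded by X(0) = pi/2.
values = [X(s) for s in (1.0, 0.1, 0.01, 0.0)]
```

The computed values increase as $\displaystyle s$ decreases but never exceed $\displaystyle X(0)=\pi/2$, in line with the counterexample.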

However, the first part is correct: 0 is a singularity for $\displaystyle F$, which means that $\displaystyle F$ cannot be extended analytically to a neighbourhood of 0.

This is very much reminiscent of the following result (sometimes attributed to Pringsheim). Consider the power series $\displaystyle F(z)=\sum_{n=0}^\infty a_n z^n$ with radius of convergence $\displaystyle 0<R<\infty$, and assume $\displaystyle a_n\geq 0$ for all $\displaystyle n$. Then $\displaystyle R$ is a singularity of $\displaystyle F$.

Actually, I noticed that the result about Laplace transforms can even be reduced to this one. Here is how. For simplicity (and w.l.o.g.), assume $\displaystyle \sigma=-1$. Then the function $\displaystyle G(s)=\int_0^\infty f(t)e^{+st}dt$ (mind the + sign) is analytic on the half-plane $\displaystyle \{z\in\mathbb{C}\mid\Re(z)<1\}$ by the definition of $\displaystyle \sigma$ (and because of the change of sign), and the radius of convergence of its power series at 0 is exactly 1. Let us write $\displaystyle G(s)=\sum_{n=0}^\infty a_n s^n$ inside the unit disk. Differentiating under the integral sign gives $\displaystyle a_n = \frac{G^{(n)}(0)}{n!}=\int_0^\infty \frac{t^n}{n!}f(t)dt$, thus $\displaystyle a_n>0$. We deduce (by the above-mentioned result) that $\displaystyle 1$ is a singularity for $\displaystyle G$, hence $\displaystyle -1\,(=\sigma)$ is a singularity for $\displaystyle F$.

As a conclusion, let me say a word about the (nice) proof of the result on power series. Take $\displaystyle R=1$ for simplicity. We proceed by contradiction. Suppose that $\displaystyle F$ extends analytically to a disk centered at 1. It is geometrically clear that the new domain (the union of the unit disk and this small disk around 1) contains a closed disk centered at $\displaystyle 1/2$ of radius $\displaystyle \rho>1/2$. Thus $\displaystyle F$ is expandable in a power series in this disk, hence if we choose $\displaystyle 1/2<r<\rho$, the series $\displaystyle \sum_{n=0}^\infty \frac{F^{(n)}(1/2)}{n!}r^n$ converges toward $\displaystyle \widetilde{F}\left(\frac{1}{2}+r\right)$ (where $\displaystyle \widetilde{F}$ denotes the extension of $\displaystyle F$). Now comes the nice part. The above series is $\displaystyle \sum_{n=0}^\infty \frac{1}{n!}r^n\sum_{k=n}^\infty a_k k(k-1)\cdots (k-n+1)\left(\frac{1}{2}\right)^{k-n}$ and, because the coefficients are nonnegative, we are allowed to change the order of summation, which gives $\displaystyle \sum_{k=0}^\infty a_k \sum_{n=0}^k \frac{k(k-1)\cdots (k-n+1)}{n!} r^n \left(\frac{1}{2}\right)^{k-n} = \sum_{k=0}^\infty a_k \left(r+\frac{1}{2}\right)^k$ (by Newton's binomial formula). Thus we have proved that the series $\displaystyle \sum_{k=0}^\infty a_k \left(r+\frac{1}{2}\right)^k$ converges (toward the value $\displaystyle \widetilde{F}\left(\frac{1}{2}+r\right)$). Since $\displaystyle r>1/2$ gives $\displaystyle r+\frac{1}{2}>1$, this contradicts the fact that the radius of convergence is 1.
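As a numerical sanity check of that interchange (my own sketch; the concrete choice $\displaystyle a_k=1$, i.e. $\displaystyle F(z)=1/(1-z)$ with $\displaystyle R=1$, and the values $r=0.3$, $K=300$ are arbitrary), both orders of summation of the truncated double sum can be compared with the closed form $\displaystyle \sum_k a_k\left(r+\tfrac12\right)^k=\frac{1}{1/2-r}$:

```python
import math

r = 0.3               # any 0 < r < 1/2 keeps everything convergent
K = 300               # truncation order; terms decay like (r + 1/2)^k
a = [1.0] * (K + 1)   # coefficients of 1/(1-z), all nonnegative

def term(k, n):
    # generic term a_k * k(k-1)...(k-n+1)/n! * r^n * (1/2)^(k-n),
    # written with the binomial coefficient C(k, n) = k^(n)/n!
    return a[k] * math.comb(k, n) * r**n * 0.5**(k - n)

# n-first (the Taylor-series order) vs k-first (after the interchange)
n_first = sum(sum(term(k, n) for k in range(n, K + 1)) for n in range(K + 1))
k_first = sum(sum(term(k, n) for n in range(k + 1)) for k in range(K + 1))

closed_form = 1.0 / (0.5 - r)  # sum of a_k (r + 1/2)^k for a_k = 1
```

Both orders agree with each other and with the closed form, as Tonelli's theorem guarantees for nonnegative terms.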

3. Originally Posted by Laurent
...
Laurent, thank you again for your replies.
I give a reference below in which the above-mentioned lemma is used:
Gyori, Ladas and Pakula, Oscillation theorems for delay differential equations via Laplace transforms, Canad. Math. Bull., vol. 33, no. 3, pp. 323–326, 1990.
However, they do not discuss the $\displaystyle \lim$ situation there, but in their book they do.

Maybe they meant $\displaystyle \lim\nolimits_{\substack{s\to\sigma_{f}\\ \Re(s)<\sigma_{f}}}|F(s)|=\infty$.

Thanks.

4. By the way, when we define
$\displaystyle F(s):=\int_{0}^{\infty}f(t)\mathrm{e}^{-st}\mathrm{d}t,$
since we will approach the abscissa of convergence $\displaystyle s=\sigma_{f}$ from the left, the power series will have the form
$\displaystyle F(s)=\sum_{k\in\mathbb{N}_{0}}\bigg(\frac{(-1)^{k}}{k!}\int_{0}^{\infty}f(t)t^{k}\mathrm{e}^{-\sigma_{f}t}\,\mathrm{d}t\bigg)(s-\sigma_{f})^{k}$
$\displaystyle \phantom{F(s)}=\sum_{k\in\mathbb{N}_{0}}\bigg(\frac{1}{k!}\int_{0}^{\infty}f(t)t^{k}\mathrm{e}^{-\sigma_{f}t}\,\mathrm{d}t\bigg)(\sigma_{f}-s)^{k}$
for $\displaystyle s<\sigma_{f}$, where the coefficients are positive.
Then, the rest follows from what you have mentioned for power series, right?

5. Originally Posted by bkarpuz
...
Consider again $\displaystyle X(s)=\int_0^\infty \frac{e^{-st}}{1+t^2}dt$. We have $\displaystyle \sigma_0=0$. And we have a uniform bound: for any $\displaystyle s\in\mathbb{C}$ such that $\displaystyle \Re(s)\geq 0$, we have $\displaystyle \left|\frac{e^{-st}}{1+t^2}\right|=\frac{e^{-\Re(s)t}}{1+t^2}\leq \frac{1}{1+t^2}$. Thus $\displaystyle |X(s)|\leq \int_0^\infty\frac{dt}{1+t^2}<\infty$: $\displaystyle X$ is bounded on this half-plane!

I can't see my mistake; this would mean that this more precise conclusion is again wrong (?)...

If $\displaystyle X(\sigma_0)=+\infty$, however, the limit statement is correct, and simple to prove. It suffices to consider limits along the real axis. Indeed, $\displaystyle s\mapsto X(s)$ is decreasing, so as $\displaystyle s\to\sigma_0^+$ it either tends to $\displaystyle +\infty$ or stays bounded. Assume by contradiction that $\displaystyle X(s)\leq M$ for all $\displaystyle s>\sigma_0$. Then, for all $\displaystyle A>0$ and all $\displaystyle s>\sigma_0$, $\displaystyle \int_0^A f(t)e^{-st}dt\leq M$. For fixed $\displaystyle A$, we can take the limit $\displaystyle s\to\sigma_0$ and get $\displaystyle \int_0^A f(t)e^{-\sigma_0 t}dt\leq M$. Therefore $\displaystyle \int_0^\infty f(t) e^{-\sigma_0 t} dt\leq M <\infty$, in contradiction with $\displaystyle X(\sigma_0)=+\infty$.
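The divergent case can also be illustrated numerically. In this sketch (my own, not part of the thread; the choice $\displaystyle f(t)=1/(1+t)$, for which $\displaystyle \sigma_0=0$ and $\displaystyle X(\sigma_0)=\int_0^\infty\frac{dt}{1+t}=+\infty$, and the substitution $\displaystyle 1+t=e^u$ are arbitrary), the values blow up like $\displaystyle -\ln s-\gamma$ as $\displaystyle s\downarrow 0$:

```python
import math

def X(s, upper=40.0, n=40_000):
    """Laplace transform of 1/(1+t) at s > 0.

    With 1 + t = e^u the integral becomes int_0^inf exp(-s*(e^u - 1)) du;
    the integrand is ~1 up to u ~ ln(1/s) and then drops to 0, so the
    midpoint rule on [0, upper] is adequate for s >= 1e-3.
    """
    h = upper / n
    return h * sum(math.exp(-s * (math.exp((k + 0.5) * h) - 1.0))
                   for k in range(n))

gamma = 0.5772156649015329  # Euler-Mascheroni constant
vals = [X(s) for s in (1e-1, 1e-2, 1e-3)]  # grows roughly like -ln(s) - gamma
```

Here $\displaystyle X(s)=e^{s}E_{1}(s)$ in closed form, so the $\displaystyle -\ln s-\gamma$ growth is the standard small-argument behavior of the exponential integral.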

Originally Posted by bkarpuz
...
This is not correct, since we don't know a priori that we can expand in a power series at $\displaystyle \sigma_f$ (and a posteriori we can't). In order to reduce to what I did for $\displaystyle \sigma=-1$, you can look at $\displaystyle F(\sigma+1+s)$, i.e. change $\displaystyle f$ into $\displaystyle f(t)e^{-(\sigma+1)t}$. Then the abscissa is $\displaystyle -1$.

Or, plainly, it corresponds to expanding in power series at $\displaystyle \sigma+1$, where the radius of convergence is 1. Inside the circle of center $\displaystyle \sigma+1$ and radius 1 (and only there), $\displaystyle F(s)=\sum_{k\in\mathbb{N}_{0}}\bigg(\frac{(-1)^{k}}{k!}\int_{0}^{\infty}f(t)t^{k}\mathrm{e}^{-(\sigma_{f}+1)t}\mathrm{d}t\bigg)(s-(\sigma_{f}+1))^{k}$ and you continue the same: I want the singularity on the right of the circle, hence the change of sign, which amounts here to changing $\displaystyle s-(\sigma_{f}+1)$ into the opposite, as you did.

6. Originally Posted by Laurent
...
Okay, now I get it.
We may expand $\displaystyle F$ in a Taylor series at $\displaystyle \sigma_{f}+\varepsilon$ for some arbitrarily fixed $\displaystyle \varepsilon>0$ and examine $\displaystyle F$ at $\displaystyle \sigma_{f}-\delta$ for any $\displaystyle \delta>0$.
By the way, I did not mean to imply that you did anything wrong; I just wanted to show you the references.
Thanks for the discussion.

7. Originally Posted by Laurent
...
Now comes the nice part. The above series is $\displaystyle \sum_{n=0}^\infty \frac{1}{n!}r^n\sum_{k=n}^\infty a_k k(k-1)\cdots (k-n+1)\left(\frac{1}{2}\right)^{k-n}$ and, because the coefficients are positive, we are allowed to change the order of summation, which gives $\displaystyle \sum_{k=0}^\infty a_k \sum_{n=0}^k \frac{k(k-1)\cdots (k-n+1)}{n!} r^n \left(\frac{1}{2}\right)^{k-n} = \sum_{k=0}^\infty a_k \left(r+\frac{1}{2}\right)^k$ (by Newton's binomial formula).
...
Hi Laurent again,

What is the trick that the positive coefficients perform here?
Do they simply allow reversing the order of summation?

8. Originally Posted by bkarpuz
Hi Laurent again,

What is the trick that the positive coefficients perform here?
Do they simply allow reversing the order of summation?
Yes, that's exactly it. It is sometimes called the "Fubini-Tonelli theorem": in a double series (or integral), the order of summation does not matter when the terms are nonnegative.

Since this is the crucial step, this proof highlights the importance of always checking whether we are allowed to interchange two sums!
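To see why the nonnegativity hypothesis cannot be dropped, here is a minimal Python sketch (my own illustration, not from the thread) of the classical doubly indexed array with $+1$ on the diagonal and $-1$ just below it: summing complete rows first gives $1$, while summing complete columns first gives $0$:

```python
def a(m, n):
    # +1 on the diagonal, -1 just below it, 0 elsewhere;
    # every row and every column is absolutely summable, yet...
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

M = 50  # each inner range is wide enough to exhaust the row/column

# sum over n first (complete rows): row 0 sums to 1, all others to 0
rows_first = sum(sum(a(m, n) for n in range(M + 2)) for m in range(M))

# sum over m first (complete columns): every column sums to 1 - 1 = 0
cols_first = sum(sum(a(m, n) for m in range(M + 2)) for n in range(M))
```

With mixed signs the two orders genuinely disagree; with nonnegative terms, Tonelli's theorem rules this out.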

9. ## Proofs

Singularity (at the right endpoint) of a power series with nonnegative coefficients.

Assume that the power series $\displaystyle \textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k}$ converges for all $\displaystyle x\in(a,b)$ with $\displaystyle b>0>a$, that the interval of convergence cannot be extended to $\displaystyle (a,c)$ for any $\displaystyle c\in(b,\infty)$, and that $\displaystyle \{a_{k}\}_{k\in\mathbb{N}_{0}}$ is a sequence of nonnegative reals. We shall prove that $\displaystyle f(x):=\textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k}$ is not analytic at $\displaystyle x=b$. To reach a contradiction, suppose it is; then $\displaystyle f$ is analytic on some $\displaystyle 2\delta$-neighborhood of $\displaystyle b$ (with $\displaystyle \delta>0$ and $\displaystyle b-2\delta>0$), so the Taylor series of $\displaystyle f$ at $\displaystyle b-\delta$ converges to $\displaystyle f$ at $\displaystyle b+\delta$. Hence, we have
$\displaystyle f(b+\delta)=\sum_{l\in\mathbb{N}_{0}}\frac{f^{(l)}(b-\delta)}{l!}\big((b+\delta)-(b-\delta)\big)^{l}=\sum_{l\in\mathbb{N}_{0}}\frac{1}{l!}\bigg(\sum_{k=l}^{\infty}a_{k}k^{(l)}(b-\delta)^{k-l}\bigg)(2\delta)^{l}$
$\displaystyle \phantom{f(b+\delta)}=\sum_{k\in\mathbb{N}_{0}}a_{k}\bigg(\sum_{l=0}^{k}\frac{1}{l!}k^{(l)}(b-\delta)^{k-l}(2\delta)^{l}\bigg)=\sum_{k\in\mathbb{N}_{0}}a_{k}\bigg(\sum_{l=0}^{k}\binom{k}{l}(b-\delta)^{k-l}(2\delta)^{l}\bigg)$
$\displaystyle \phantom{f(b+\delta)}=\sum_{k\in\mathbb{N}_{0}}a_{k}(b+\delta)^{k},$
where the falling factorial $\displaystyle k^{(l)}:=k(k-1)\cdots(k-l+1)$ is used in the first line. This implies that the series defining $\displaystyle f$ converges at $\displaystyle b+\delta>b$, contradicting the definition of $\displaystyle b$ as the right endpoint of the interval of convergence; thus $\displaystyle f$ must be singular at $\displaystyle b$. Since the terms in the sums above are nonnegative, the reversal of the order of summation is justified by the Fubini-Tonelli theorem (see the Fubini's theorem article on Wikipedia).

Example. Consider
$\displaystyle \sum_{k\in\mathbb{N}_{0}}x^{2k}=\frac{1}{1-x^{2}}$ for $\displaystyle x\in(-1,1)$.
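A quick numerical check (my own sketch; the center $\displaystyle c=1/2$ and the order $200$ are arbitrary choices) that the singularity at the right endpoint is genuine: the Taylor coefficients of $\displaystyle 1/(1-x^{2})$ at $\displaystyle c\in(0,1)$, obtained from the partial fractions $\displaystyle \frac{1}{1-x^{2}}=\frac{1}{2}\Big(\frac{1}{1-x}+\frac{1}{1+x}\Big)$, satisfy $\displaystyle c_{n}^{1/n}\to 1/(1-c)$, so the radius of convergence at $\displaystyle c$ is exactly the distance $\displaystyle 1-c$ to the singularity at $\displaystyle x=1$:

```python
c = 0.5  # expansion center in (0, 1)

def coeff(n):
    # n-th Taylor coefficient of 1/(1 - x^2) at x = c, via partial
    # fractions: c_n = (1/2) * (1/(1-c)^(n+1) + (-1)^n / (1+c)^(n+1))
    return 0.5 * (1.0 / (1.0 - c) ** (n + 1)
                  + (-1.0) ** n / (1.0 + c) ** (n + 1))

# sanity check: c_0 must equal the value 1/(1 - c^2) of the function at c
value_at_c = coeff(0)

# root test: coeff(n)^(1/n) -> 1/(1 - c) = 2, i.e. radius of convergence 1 - c
root = coeff(200) ** (1.0 / 200)
```

No matter where the series is re-expanded inside $(0,1)$, the disk of convergence stops at $x=1$, which is the singular point.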

Singularity (at the left endpoint) of a power series with sign-alternating coefficients.

Assume that the power series $\displaystyle \textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k}$ converges for all $\displaystyle x\in(a,b)$ with $\displaystyle a<0<b$, that $\displaystyle a$ cannot be replaced by any smaller number, and that the sequence $\displaystyle \{a_{k}\}_{k\in\mathbb{N}_{0}}$ satisfies $\displaystyle (-1)^{k}a_{k}\geq0$ for all $\displaystyle k\in\mathbb{N}_{0}$. We shall prove that $\displaystyle f(x):=\textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k}$ is not analytic at $\displaystyle x=a$. To reach a contradiction, suppose it is; then $\displaystyle f$ is analytic on some $\displaystyle 2\delta$-neighborhood of $\displaystyle a$ (with $\displaystyle \delta>0$ and $\displaystyle a+2\delta<0$), so the Taylor series of $\displaystyle f$ at $\displaystyle a+\delta$ converges to $\displaystyle f$ at $\displaystyle a-\delta$. Hence, we have
$\displaystyle f(a-\delta)=\sum_{l\in\mathbb{N}_{0}}\frac{f^{(l)}(a+\delta)}{l!}\big((a-\delta)-(a+\delta)\big)^{l}=\sum_{l\in\mathbb{N}_{0}}\frac{1}{l!}\bigg(\sum_{k=l}^{\infty}a_{k}k^{(l)}(a+\delta)^{k-l}\bigg)(-2\delta)^{l}$
$\displaystyle \phantom{f(a-\delta)}=\sum_{l\in\mathbb{N}_{0}}\frac{1}{l!}\bigg(\sum_{k=l}^{\infty}(-1)^{k}a_{k}k^{(l)}\big(-(a+\delta)\big)^{k-l}\bigg)(2\delta)^{l}=\sum_{k\in\mathbb{N}_{0}}(-1)^{k}a_{k}\bigg(\sum_{l=0}^{k}\frac{1}{l!}k^{(l)}\big(-(a+\delta)\big)^{k-l}(2\delta)^{l}\bigg)$
$\displaystyle \phantom{f(a-\delta)}=\sum_{k\in\mathbb{N}_{0}}(-1)^{k}a_{k}\bigg(\sum_{l=0}^{k}\binom{k}{l}\big(-(a+\delta)\big)^{k-l}(2\delta)^{l}\bigg)=\sum_{k\in\mathbb{N}_{0}}(-1)^{k}a_{k}(\delta-a)^{k}$
$\displaystyle \phantom{f(a-\delta)}=\sum_{k\in\mathbb{N}_{0}}a_{k}(a-\delta)^{k},$
which implies that the series defining $\displaystyle f$ converges at $\displaystyle a-\delta<a$; this is again a contradiction. Note also that from the second line on, the terms of the sums are nonnegative, so the interchange of the order of summation is again justified.

Example. Consider
$\displaystyle \sum_{k\in\mathbb{N}_{0}}(-1)^{k}x^{k}=\frac{1}{1+x}$ for $\displaystyle x\in(-1,1)$.
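The collapse of the inner sums above is just the binomial theorem; here is a quick check (my own sketch, with the arbitrary sample values $\displaystyle a=-1$, $\displaystyle \delta=0.2$) of the identity $\displaystyle \sum_{l=0}^{k}\binom{k}{l}\big(-(a+\delta)\big)^{k-l}(2\delta)^{l}=(\delta-a)^{k}$ and of the final sign flip $\displaystyle (-1)^{k}(\delta-a)^{k}=(a-\delta)^{k}$:

```python
import math

a, delta = -1.0, 0.2  # sample values satisfying a + 2*delta < 0

for k in range(12):
    inner = sum(math.comb(k, l) * (-(a + delta)) ** (k - l) * (2 * delta) ** l
                for l in range(k + 1))
    # binomial theorem: the inner sum equals (-(a + delta) + 2*delta)^k
    assert abs(inner - (delta - a) ** k) < 1e-12
    # the sign flip used in the last line of the computation
    assert abs((-1.0) ** k * (delta - a) ** k - (a - delta) ** k) < 1e-12
```

Both identities hold to machine precision for every order tested.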

Singularity (at the abscissa of convergence) of the Laplace transform of a nonnegative function.

We are now back to our original problem. Let $\displaystyle f\in\mathrm{C}(\mathbb{R}_{0}^{+},\mathbb{R}_{0}^{+})$ and let $\displaystyle F$ be the Laplace transform of $\displaystyle f$ with a finite abscissa of convergence $\displaystyle \sigma_{f}$, i.e., $\displaystyle \sigma_{f}:=\inf\{s\in\mathbb{R}:F(s)\ \text{exists}\}$. We shall show that $\displaystyle F$ is not analytic at $\displaystyle \sigma_{f}$. If it were, there would exist $\displaystyle \delta>0$ such that $\displaystyle F$ is analytic in $\displaystyle (\sigma_{f}-4\delta,\sigma_{f}+4\delta)$. Setting $\displaystyle G_{\delta}(s):=F(s+\sigma_{f}+\delta)$ for $\displaystyle s\in\mathbb{R}$, we see that $\displaystyle G_{\delta}$ is analytic in $\displaystyle (-3\delta,3\delta)$ and
$\displaystyle (-1)^{k}G_{\delta}^{(k)}(s)=\int_{0}^{\infty}t^{k}f(t)\mathrm{e}^{-(\sigma_{f}+\delta)t}\mathrm{e}^{-st}\,\mathrm{d}t\geq0$ for all $\displaystyle s>-\delta$ (where the integral representation of $\displaystyle F$ is valid, in particular at $\displaystyle s=0$) and all $\displaystyle k\in\mathbb{N}_{0}.$
Clearly, we also have
$\displaystyle G_{\delta}(s)=\sum_{k\in\mathbb{N}_{0}}\frac{G_{\delta}^{(k)}(0)}{k!}s^{k}$ for all $\displaystyle s\in(-3\delta,3\delta),$
which is a power series with sign-alternating coefficients. Following arguments similar to those of the previous subsection, we obtain that the power series of $\displaystyle G_{\delta}$ converges at $\displaystyle -2\delta$, i.e., $\displaystyle F(\sigma_{f}-\delta)$ exists. We are led to a contradiction again, and the claim is hence proved.
Note.$\displaystyle ^{1}$ We could also set $\displaystyle G(s):=F(\sigma_{f}+\delta-s)$ for $\displaystyle s\in\mathbb{R}$ in order to use the arguments of the first subsection.

Acknowledgements. Thanks to Laurent for his fruitful discussion about the subject.

__________________________________________________
$\displaystyle ^{1}$ This note is due to Laurent.