
Math Help - Advanced help on Laplace transform

  1. #1
    Senior Member bkarpuz

    [SOLVED] Advanced help on Laplace transform

    Dear friends,

    I have seen some papers that refer to the following lemma.

    Lemma. Let f be a nonnegative continuous function on the half-line [0,\infty), and assume that the abscissa of convergence \sigma_{f} of the Laplace transform F(s) of f(t) is finite.
    Then, F(s) has a singularity at the point s=\sigma_{f}. More precisely, {\color{red}{\lim\nolimits_{s\to\sigma_{f}}|F(s)|=  \infty}}.


    It is said that the lemma can be found in the following reference, but I could not find it there.

    • D. V. Widder. An Introduction to Transform Theory. Academic Press, New York, 1971.

    I really need the proof of this lemma and don't know where to find it.
    I would greatly appreciate it if someone could help me in this direction.

    Appendix.
    Note that the statement colored in red is wrong; see the proof below.
    Last edited by bkarpuz; September 12th 2009 at 04:35 AM. Reason: Appendix is added after the problem is solved

  2. #2
    MHF Contributor Laurent
    Quote Originally Posted by bkarpuz View Post
    [...]
    Hi,

    What do you think of f(t)=\frac{1}{1+t^2}?

    This function is positive, continuous on [0,+\infty), its abscissa of convergence is 0, and yet its Laplace transform at 0 is finite. Thus you don't have \lim_{s\to\sigma}|F(s)|=\infty.
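This can be checked numerically (a sketch added here, not part of the original post; `laplace_f` is a hypothetical helper using composite Simpson quadrature with ad hoc truncation parameters):

```python
import math

def laplace_f(s, T=200.0, n=200_000):
    # Composite Simpson approximation of \int_0^T e^{-s t}/(1+t^2) dt.
    # (T and n are ad hoc choices; the tail beyond T is smaller than 1/T.)
    h = T / n
    g = lambda t: math.exp(-s * t) / (1.0 + t * t)
    total = g(0.0) + g(T)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(i * h)
    return total * h / 3.0

# At s = 0 the truncated integral is arctan(T), which tends to pi/2:
# the transform stays finite at the abscissa of convergence.
F0 = laplace_f(0.0)
```

The value comes out just below \pi/2\approx 1.5708, confirming that |F(s)| does not blow up as s\to 0.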

    However, the first part is correct: 0 is a singularity for F, which means that F cannot be extended analytically to a neighbourhood of 0.


    This is very much reminiscent of the following result. Consider the power series F(z)=\sum_{n=0}^\infty a_n z^n, of radius 0<R<\infty. Assume a_n\geq 0 for all n. Then R is a singularity of F.
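This power-series statement is sometimes called Pringsheim's theorem. As a small illustration (an added sketch, not from the thread), take a_n = 1: the radius is R = 1, and near the singular point x = 1 the partial sums track the closed form 1/(1-x); in this particular example the series really does blow up at the singularity, though the Laplace counterexample above shows that blow-up need not happen in general:

```python
# Geometric series: a_n = 1 (nonnegative), radius R = 1, singular at x = 1.
def partial_sum(x, N=100_000):
    return sum(x ** n for n in range(N))

# Near the singularity the partial sums track 1/(1-x), which is unbounded.
checks = {x: partial_sum(x) for x in (0.9, 0.99, 0.999)}
```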


    Actually, I noticed that the result about Laplace transforms can even be reduced to this one. Here is how. For simplicity (and wlog), assume \sigma=-1. Then the function G(s)=\int_0^\infty f(t)e^{+st}dt (mind the + sign) is analytic on the set \{z\in\mathbb{C}:\Re(z)<1\} by definition of \sigma (and because of the change of sign), and the radius of convergence at 0 is exactly 1. Let us write G(s)=\sum_{n=0}^\infty a_n s^n inside the unit disk. Differentiation under the integral sign gives a_n = \frac{G^{(n)}(0)}{n!}=\int_0^\infty \frac{t^n}{n!}f(t)dt, thus a_n>0. We deduce (by the above-mentioned result) that 1 is a singularity for G, hence -1(=\sigma) is a singularity for F.


    As a conclusion, let me say a word about the (nice) proof of the result on power series. Take R=1 for simplicity. We proceed by contradiction. Suppose that F extends to a disk centered at 1. The new domain (composed of the unit disk and this small disk around 1) then contains a closed disk centered at 1/2 and of radius \rho>1/2. Thus F is expandable in a power series in this disk, hence if we choose 1/2<r<\rho, the series \sum_{n=0}^\infty \frac{F^{(n)}(1/2)}{n!}r^n converges to \widetilde{F}\left(\frac{1}{2}+r\right) (where \widetilde{F} denotes the extension of F). Now comes the nice part. The above series is \sum_{n=0}^\infty \frac{1}{n!}r^n\sum_{k=n}^\infty a_k k(k-1)\cdots (k-n+1)\left(\frac{1}{2}\right)^{k-n} and, because the coefficients are positive, we are allowed to change the order of summation, which gives \sum_{k=0}^\infty a_k \sum_{n=0}^k \frac{k(k-1)\cdots (k-n+1)}{n!} r^n \left(\frac{1}{2}\right)^{k-n} = \sum_{k=0}^\infty a_k \left(r+\frac{1}{2}\right)^k (by Newton's binomial formula). Thus, we have proved that the series \sum_{k=0}^\infty a_k \left(r+\frac{1}{2}\right)^k converges (to the value \widetilde{F}\left(\frac{1}{2}+r\right)). This contradicts the fact that the radius of convergence is 1, since we could choose r>1/2, i.e., \frac{1}{2}+r>1.
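The interchange-of-summation step can be spot-checked numerically on a toy series (my own construction, not from the post): take a_k = 3^{-k} (radius 3, all coefficients positive), expand around c = 1/2, and compare the double sum with the rearranged single sum at c + r. Note that \frac{1}{n!}k(k-1)\cdots(k-n+1)=\binom{k}{n}:

```python
from math import comb

# Toy series with positive coefficients: a_k = 3**-k (radius 3), truncated at K.
K = 200
a = [3.0 ** -k for k in range(K)]
r, c = 0.3, 0.5

# Left side: Taylor coefficients at c, F^(n)(c)/n! = sum_k a_k C(k,n) c^(k-n),
# then the Taylor series evaluated with step r.
lhs = sum(
    (r ** n) * sum(a[k] * comb(k, n) * c ** (k - n) for k in range(n, K))
    for n in range(K)
)
# Right side: the series after interchanging the sums, evaluated at c + r.
rhs = sum(a[k] * (c + r) ** k for k in range(K))
```

Both sides agree with the closed form 1/(1-(c+r)/3) = 15/11, as the rearrangement predicts.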

  3. #3
    Senior Member bkarpuz
    Quote Originally Posted by Laurent View Post
    [...]
    Laurent, thank you again for your replies.
    I give a reference below in which the lemma mentioned above is used:
    Gyori, Ladas and Pakula, Oscillation theorems for delay differential equations via Laplace transforms, Canad. Math. Bull., vol. 33, no. 3, pp. 323-326, 1990.
    However, they don't discuss the \lim situation there, but in their book they do.

    Maybe they meant \lim\nolimits_{\substack{s\to\sigma_{f}\\ \Re(s)<\sigma_{f}}}|F(s)|=\infty.

    Thanks.
    Last edited by bkarpuz; September 9th 2009 at 10:26 PM.

  4. #4
    Senior Member bkarpuz

    By the way, when we define
    F(s):=\int_{0}^{\infty}f(t)\mathrm{e}^{-st}\mathrm{d}t,
    since we approach the abscissa of convergence s=\sigma_{f} from the left, the power series will have the form
    F(s)=\sum_{k\in\mathbb{N}_{0}}\bigg(\frac{(-1)^{k}}{k!}\int_{0}^{\infty}f(t)t^{k}\mathrm{e}^{-\sigma_{f}t}\mathrm{d}t\bigg)(s-\sigma_{f})^{k}=\sum_{k\in\mathbb{N}_{0}}\bigg(\frac{1}{k!}\int_{0}^{\infty}f(t)t^{k}\mathrm{e}^{-\sigma_{f}t}\mathrm{d}t\bigg)(\sigma_{f}-s)^{k}
    for s<\sigma_{f}, where the coefficients are positive.
    Then, the rest follows from what you have mentioned for power series, right?

  5. #5
    MHF Contributor Laurent
    Quote Originally Posted by bkarpuz View Post
    Consider again X(s)=\int_0^\infty \frac{e^{-st}}{1+t^2}dt. We have \sigma_0=0. And we have a uniform bound: for any s\in\mathbb{C} such that \Re(s)\geq 0, we have \left|\frac{e^{-st}}{1+t^2}\right|=\frac{e^{-\Re(st)}}{1+t^2}\leq \frac{1}{1+t^2}. Thus |X(s)|\leq \int_0^\infty\frac{dt}{1+t^2}<\infty, so X is bounded on this half-plane!

    I can't see my mistake; this would mean that this more precise conclusion is again wrong (?)...

    If X(\sigma_0)=+\infty, this is however correct, and simple. It suffices to consider limits along the real axis. Indeed, s\mapsto X(s) is decreasing, therefore it either diverges or stays bounded as s\to\sigma_0. Assume by contradiction that X(s)\leq M for all s>\sigma_0. Then, for all A>0 and all s>\sigma_0, \int_0^A f(t)e^{-st}dt\leq M. For fixed A, we can take the limit s\to\sigma_0 and get \int_0^A f(t)e^{-\sigma_0 t}dt\leq M. Therefore, \int_0^\infty f(t) e^{-\sigma_0 t} dt\leq M <\infty, in contradiction with X(\sigma_0)=+\infty.
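A numerical illustration of this divergent case (an added sketch; `laplace_f` is a hypothetical Simpson-rule helper with ad hoc truncation): take f(t)=1/(1+t), for which \sigma_0=0 and X(0)=\int_0^\infty dt/(1+t)=+\infty, so X(s) grows without bound as s decreases to 0:

```python
import math

def laplace_f(s, T, n):
    # Composite Simpson approximation of \int_0^T e^{-s t}/(1+t) dt.
    h = T / n
    g = lambda t: math.exp(-s * t) / (1.0 + t)
    total = g(0.0) + g(T)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(i * h)
    return total * h / 3.0

# f(t) = 1/(1+t): sigma_0 = 0 and X(0) = +infinity, so X(s) grows as s -> 0+.
vals = [laplace_f(s, T=30.0 / s, n=300_000) for s in (1.0, 0.1, 0.01)]
```

The computed values increase monotonically as s shrinks, consistent with X(s)\sim\ln(1/s) near 0.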

    Quote Originally Posted by bkarpuz View Post
    [...]
    This is not correct, since we don't know a priori that we can expand in a power series at \sigma_f (and a posteriori we can't). In order to reduce to what I did for \sigma=-1, you can look at F(\sigma+1+s), i.e. change f into f(t)e^{-(\sigma+1)t}. Then the abscissa is -1.

    Or, plainly, it corresponds to expanding in a power series at \sigma+1, where the radius of convergence is 1. Inside the circle of center \sigma+1 and radius 1 (and only there), F(s)=\sum_{k\in\mathbb{N}_{0}}\bigg(\frac{(-1)^{k}}{k!}\int_{0}^{\infty}f(t)t^{k}\mathrm{e}^{-(\sigma_{f}+1)t}\mathrm{d}t\bigg)(s-(\sigma_{f}+1))^{k} and you continue in the same way: I want the singularity on the right of the circle, hence the change of sign, which amounts here to changing s-(\sigma_{f}+1) into its opposite, as you did.

  6. #6
    Senior Member bkarpuz
    Quote Originally Posted by Laurent View Post
    [...]
    Okay, now I understand.
    We may expand F in a Taylor series at \sigma_{f}+\varepsilon for some arbitrarily fixed \varepsilon>0 and then examine F at \sigma_{f}-\delta for any \delta>0.
    By the way, I did not mean to imply that you did anything wrong; I just wanted to show you the references.
    Thanks for the discussion.

  7. #7
    Senior Member bkarpuz

    Quote Originally Posted by Laurent View Post
    ...
    Now comes the nice part. The above series is \sum_{n=0}^\infty \frac{1}{n!}r^n\sum_{k=n}^\infty a_k k(k-1)\cdots (k-n+1)\left(\frac{1}{2}\right)^{k-n} and, because the coefficients are positive, we are allowed to change the order of summation, which gives \sum_{k=0}^\infty a_k \sum_{n=0}^k \frac{k(k-1)\cdots (k-n+1)}{n!} r^n \left(\frac{1}{2}\right)^{k-n} = \sum_{k=0}^\infty a_k \left(r+\frac{1}{2}\right)^k (by Newton's binomial formula).
    ...
    Hi again Laurent,

    What exactly do the positive coefficients do here?
    Do they just allow the reversal of the order of summation?

  8. #8
    MHF Contributor Laurent
    Quote Originally Posted by bkarpuz View Post
    Hi Laurent again,

    What is the trick here that positive coefficients do?
    Just allows the reversal of the summations?
    Yes, that's exactly it. This is sometimes called the Fubini-Tonelli theorem: in a double series (or integral), the order of summation doesn't matter if the terms are nonnegative.

    Since this is the crucial step, this proof highlights the importance of always checking whether we are allowed to interchange two sums!

  9. #9
    Senior Member bkarpuz

    Proofs

    Singularity (at the right endpoint) of a power series with positive coefficients.

    Assume that the power series \textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k} converges for all x\in(a,b) with b>0>a, that the interval of convergence cannot be extended to (a,c) for any c\in(b,\infty), and that \{a_{k}\}_{k\in\mathbb{N}_{0}} is a nonnegative sequence of reals. We shall prove that f(x):=\textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k} is not analytic at x=b. To achieve a contradiction, suppose it is; then f is analytic in a 2\delta-neighborhood of b (with \delta>0 and b-2\delta>0), i.e., f can be differentiated infinitely many times at b-\delta and expanded there in a power series whose disk of convergence reaches beyond b+\delta. Hence, we have
    f(b+\delta)=\sum_{l\in\mathbb{N}_{0}}\frac{f^{(l)}(b-\delta)}{l!}\big((b+\delta)-(b-\delta)\big)^{l}=\sum_{l\in\mathbb{N}_{0}}\frac{1}{l!}\bigg(\sum_{k=l}^{\infty}a_{k}k^{(l)}(b-\delta)^{k-l}\bigg)(2\delta)^{l}
    =\sum_{k\in\mathbb{N}_{0}}a_{k}\bigg(\sum_{l=0}^{k}\frac{1}{l!}k^{(l)}(b-\delta)^{k-l}(2\delta)^{l}\bigg)=\sum_{k\in\mathbb{N}_{0}}a_{k}\bigg(\sum_{l=0}^{k}\binom{k}{l}(b-\delta)^{k-l}(2\delta)^{l}\bigg)
    =\sum_{k\in\mathbb{N}_{0}}a_{k}(b+\delta)^{k},
    where the falling factorial k^{(l)}:=k(k-1)\cdots(k-l+1) is used in the first line. This shows that the series defining f converges at b+\delta>b, contradicting the definition of b as the right endpoint of the interval of convergence; thus f must be singular at x=b. Since the terms in the sums above are nonnegative, the reversal of the order of summation is justified by the Fubini-Tonelli theorem.
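The binomial step in the display above can be spot-checked numerically (an added illustration with arbitrary sample values b = 1, \delta = 1/4; it assumes nothing beyond the binomial theorem):

```python
from math import comb

# Binomial identity used above: sum_l C(k,l) (b-d)^(k-l) (2d)^l = (b+d)^k,
# since (b - d) + 2d = b + d.
b, d = 1.0, 0.25
for k in range(10):
    lhs = sum(comb(k, l) * (b - d) ** (k - l) * (2 * d) ** l for l in range(k + 1))
    assert abs(lhs - (b + d) ** k) < 1e-12
```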

    Example. Consider
    \sum_{k\in\mathbb{N}_{0}}x^{2k}=\frac{1}{1-x^{2}} for x\in(-1,1).


    Singularity (at the left endpoint) of a power series with sign-alternating coefficients.

    Assume that the power series \textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k} converges for all x\in(a,b) with a<0<b. Also assume that a cannot be replaced by anything smaller, and that the sequence \{a_{k}\}_{k\in\mathbb{N}_{0}} satisfies (-1)^{k}a_{k}\geq0 for all k\in\mathbb{N}_{0}. We shall prove that f(x):=\textstyle\sum\nolimits_{k\in\mathbb{N}_{0}}a_{k}x^{k} is not analytic at x=a. For the sake of contradiction, suppose it is; then f is analytic in a 2\delta-neighborhood of a (with \delta>0 and a+2\delta<0), i.e., f has derivatives of all orders at a+\delta. Hence, we have
    f(a-\delta)=\sum_{l\in\mathbb{N}_{0}}\frac{f^{(l)}(a+\delta)}{l!}\big((a-\delta)-(a+\delta)\big)^{l}=\sum_{l\in\mathbb{N}_{0}}\frac{1}{l!}\bigg(\sum_{k=l}^{\infty}a_{k}k^{(l)}(a+\delta)^{k-l}\bigg)(-2\delta)^{l}
    =\sum_{l\in\mathbb{N}_{0}}\frac{1}{l!}\bigg(\sum_{k=l}^{\infty}(-1)^{k}a_{k}k^{(l)}\big(-(a+\delta)\big)^{k-l}\bigg)(2\delta)^{l}=\sum_{k\in\mathbb{N}_{0}}(-1)^{k}a_{k}\bigg(\sum_{l=0}^{k}\frac{1}{l!}k^{(l)}\big(-(a+\delta)\big)^{k-l}(2\delta)^{l}\bigg)
    =\sum_{k\in\mathbb{N}_{0}}(-1)^{k}a_{k}\bigg(\sum_{l=0}^{k}\binom{k}{l}\big(-(a+\delta)\big)^{k-l}(2\delta)^{l}\bigg)=\sum_{k\in\mathbb{N}_{0}}(-1)^{k}a_{k}(\delta-a)^{k}
    =\sum_{k\in\mathbb{N}_{0}}a_{k}(a-\delta)^{k},
    which shows that the series defining f converges at a-\delta<a. This is again a contradiction. Note that in the second line above the terms of the sums are nonnegative, so the reversal of the order of summation is justified by the Fubini-Tonelli theorem.

    Example. Consider
    \sum_{k\in\mathbb{N}_{0}}(-1)^{k}x^{k}=\frac{1}{1+x} for x\in(-1,1).


    Singularity (at the abscissa of convergence) of the Laplace transform of a nonnegative function.

    We are now back to our original problem. Let f\in\mathrm{C}(\mathbb{R}_{0}^{+},\mathbb{R}_{0}^{+}) and let F be the Laplace transform of f with a finite abscissa of convergence \sigma_{f}, i.e., \sigma_{f}:=\inf\{s\in\mathbb{R}:F(s)\ \text{exists}\}. We shall show that F is not analytic at \sigma_{f}. If it were, there would exist \delta>0 such that F is analytic in (\sigma_{f}-4\delta,\sigma_{f}+4\delta). Setting G_{\delta}(s):=F(s+\sigma_{f}+\delta) for s\in\mathbb{R}, we see that G_{\delta} is analytic in (-3\delta,3\delta) and
    (-1)^{k}G_{\delta}^{(k)}(s)=\int_{0}^{\infty}t^{k}f(t)\mathrm{e}^{-(\sigma_{f}+\delta)t}\mathrm{e}^{-st}\mathrm{d}t\geq0 for all s\in(-3\delta,3\delta) and all k\in\mathbb{N}_{0}.
    Clearly, we also have
    G_{\delta}(s)=\sum_{k\in\mathbb{N}_{0}}\frac{G_{\delta}^{(k)}(0)}{k!}s^{k} for all s\in(-3\delta,3\delta),
    which is a power series with sign-alternating coefficients. Following arguments similar to those of the previous subsection, we obtain that the power series of G_{\delta} converges at -2\delta, i.e., F(\sigma_{f}-\delta) exists. This contradicts the definition of \sigma_{f}, and the claim is proved.
    Note.^{1} We could also set G(s):=F(\sigma_{f}+\delta-s) for s\in\mathbb{R} and use the arguments of the first subsection.
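As a small numerical check of the sign pattern (an added sketch reusing the earlier counterexample f(t)=1/(1+t^2), for which \sigma_{f}=0, with \delta=1/2; `moment` is a hypothetical Simpson-rule helper): the moments \int_0^\infty t^k f(t)\mathrm{e}^{-(\sigma_f+\delta)t}\mathrm{d}t are all positive, so the Taylor coefficients of G_{\delta} indeed alternate in sign:

```python
import math

def moment(k, s, T=200.0, n=200_000):
    # Simpson approximation of \int_0^T t^k e^{-s t}/(1+t^2) dt.
    h = T / n
    g = lambda t: (t ** k) * math.exp(-s * t) / (1.0 + t * t)
    total = g(0.0) + g(T)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(i * h)
    return total * h / 3.0

# With f(t) = 1/(1+t^2) (sigma_f = 0) and delta = 1/2, every moment is
# positive, hence (-1)^k G_delta^(k)(0) > 0 for each k.
moments = [moment(k, s=0.5) for k in range(5)]
```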

    Acknowledgements. Thanks to Laurent for the fruitful discussion of the subject.



    __________________________________________________
    ^{1} This note is due to Laurent.
    Last edited by bkarpuz; September 13th 2009 at 11:06 PM. Reason: To be updated.
