# Contour Integration

• Dec 23rd 2007, 08:36 PM
ThePerfectHacker
Contour Integration
This tutorial will be on the method of contour integration. To really appreciate it, one needs to study complex analysis. Since I wish to make this understandable to as many people as possible I will barely use any complex analysis. However, even though this is kept as simple as possible, the reader needs to have some experience in math. The following are a must:
1. know what complex numbers are, do arithmetic with them, and represent them geometrically in the plane;
2. know basic Calculus, such as differentiation and integration;
3. know the basics of infinite series, such as computing a radius of convergence.

The first things we need to study are sequences and series of complex numbers. It is the exact same concept as with real sequences. A sequence $\{ c_n \}$ of complex numbers $c_1,c_2,c_3,...$ is a function on the positive integers. We say $\lim ~ c_n = c$ when $c_n$ converges to the complex number $c$. Note, any complex number $z$ can be expressed in the form $x+iy$. Thus, we can think of $c_n = a_n + i b_n$ where $a_n$ and $b_n$ are real sequences. This reduces the problem of studying complex sequences to that of real sequences.
In fact, we have a simple theorem.

Theorem 1: Let $c_n = a_n + ib_n$. The sequence converges to $c=a+bi$ if and only if $\lim ~ a_n = a$ and $\lim ~ b_n = b$.

Example 1: Consider $c_n = \frac{i^n}{n}$. Write out the first terms to see what is going on: $\frac{i}{1}, \frac{-1}{2}, \frac{-i}{3}, \frac{1}{4}, ...$. We see the pattern that $a_n = 0$ if $n$ is odd and $a_n = \frac{(-1)^{n/2}}{n}$ if $n$ is even. Thus, the first few terms of $a_n$ are $0, -\frac{1}{2}, 0, \frac{1}{4},...$. Similarly, the first few terms of $b_n$ are $1,0,-\frac{1}{3},0,\frac{1}{5},...$. Note, $\lim ~ a_n = \lim ~ b_n = 0$. Thus, $\lim ~ c_n = 0 + 0i = 0$.
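If you want to check Example 1 numerically, here is a quick Python sketch (not part of the tutorial, just a sanity check) computing the terms of $c_n = i^n/n$ and their real and imaginary parts:

```python
# Sanity check for Example 1: the terms c_n = i^n / n shrink toward 0.
c = [1j**n / n for n in range(1, 10001)]
a_n = [w.real for w in c]  # real parts: 0, -1/2, 0, 1/4, ...
b_n = [w.imag for w in c]  # imaginary parts: 1, 0, -1/3, 0, ...
print(abs(c[-1]))  # about 1e-4, already tiny
```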

Now we get to another important concept, series. It is assumed that the reader knows some series from Calculus, otherwise this will be hard. Let $c_1,c_2,...$ be a complex-valued sequence. Define $s_n = \sum_{k=1}^n c_k$. Thus, $s_1 = c_1, s_2 = c_1+c_2, s_3 = c_1+c_2+c_3, ...$. This is called the sequence of partial sums. We say $\sum_{n=1}^{\infty} c_n = c$ for a complex number $c$ iff $\lim ~ s_n = c$. In that case we say the series $\sum c_n$ is convergent. Otherwise, we say it is divergent.

In what follows the absolute value $|z|$ of a complex number $z=x+iy$ is defined to be $\sqrt{x^2+y^2}$. Thus, the absolute value of a complex number is always a non-negative real number.

Example 2: Just like in Calculus we have an analogue of the geometric series. Let $|z| < 1$. Let us prove that $\sum_{n=0}^{\infty} z^n$ is convergent, and furthermore find its sum. Note that $1+z+z^2+...+z^n = \frac{1-z^{n+1}}{1-z}$. When we take the limit it turns out that $\lim ~ z^n = 0$ (when $|z|<1$). But the problem with proving this is that there is no nice way of writing $(x+iy)^n$ if we want to separate the real and imaginary parts as in Example 1, so we will prove this in a nicer way later on. But if we accept $\lim ~ z^n = 0$, then $\lim ~ \frac{1-z^{n+1}}{1-z} = \frac{1}{1-z}$. Thus, $\sum_{n=0}^{\infty}z^n = \frac{1}{1-z}$.
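A quick numerical check of the geometric series formula in Python (just an illustration; the point $z = 0.3+0.4i$ is an arbitrary choice with $|z| = 0.5 < 1$):

```python
# Partial sums of the complex geometric series approach 1/(1-z) when |z| < 1.
z = 0.3 + 0.4j                      # |z| = 0.5 < 1
partial = sum(z**n for n in range(200))
closed_form = 1 / (1 - z)
print(abs(partial - closed_form))   # essentially 0
```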

In Calculus we study power series. A power series is a function $f(x) = \sum_{n=0}^{\infty} a_n x^n$; the domain of the function is all values of $x$ where the series converges. The question is, of course, how to find where it converges. We usually use the ratio test, by writing $\lim~ \left| \frac{a_{n+1}x^{n+1}}{a_n x^n} \right| = \lim ~ \left| \frac{a_{n+1}}{a_n} \right||x| < 1$. It needs to be less than 1 to have (absolute) convergence. When it is larger than 1 the series diverges. Note, if the limit fails to exist (and is not even infinity), then the test fails. The test also fails when some of the $a_n$ are zero, because then we divide by zero. There is also the root test, taking the limit $\lim ~ |a_nx^n |^{1/n} = \lim ~ |a_n|^{1/n} |x| < 1$; again, if the limit is larger than 1 then the series diverges. When working with the root test, an important limit to know is $\lim~ n^{1/n} = 1$. In complex analysis, the exact same thing happens. In fact, you can write out the summation in terms of real and imaginary parts to prove this.

Example 3: Find the radius of convergence of $\sum_{n=0}^{\infty} z^n$. Let us use the root test: we get $\lim ~ |z^n|^{1/n} = \lim ~ |z| = |z|$. Thus, if $|z| < 1$ we have convergence, and if $|z| > 1$ we have divergence. But just like in Calculus we need to check the endpoints; unlike in Calculus, where there are just two points, here there are infinitely many, all lying on $|z| = 1$, i.e. the circle $x^2 + y^2 = 1$. This problem turns out to be not so simple, so we will simply ignore it, because we will not need it (in case you are interested, apply the divergence test and conclude that the series diverges everywhere on $|z|=1$). In any case, the radius of convergence is $R=1$.

Note, in complex analysis the term radius of convergence makes more sense than in Calculus, because the radius of convergence is the radius of the circle inside which the power series converges. Remember, we say $R=0$ if the power series does not converge anywhere, which is not interesting and we will never have that. And $R=\infty$ if the power series converges everywhere in the complex plane, i.e. a disk of infinite radius.

Example 4: Consider $f(z) = \sum_{n=0}^{\infty}\frac{z^n}{n!}$. Here it is easier to use the ratio test: $\lim ~ \left| \frac{z^{n+1}}{(n+1)!}\cdot \frac{n!}{z^n} \right| = \lim~ \frac{|z|}{n+1} = 0$, because $n+1 \to \infty$ while $|z|$ stays fixed. Thus, for any $z$ in the complex plane the ratio limit is strictly less than 1, so we have convergence. Since this converges everywhere we have $R=\infty$.
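Since $R=\infty$ here, a partial sum of the series should match the exponential at any $z$ we try. A small Python check (the point $z = 3-5i$ and the 80-term cutoff are arbitrary choices):

```python
from math import factorial
import cmath

# Example 4: the power series for e^z converges everywhere; compare a partial
# sum against cmath.exp at a point with fairly large |z|.
z = 3 - 5j
s = sum(z**n / factorial(n) for n in range(80))
print(abs(s - cmath.exp(z)))  # essentially 0
```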

Example 5: Consider $f(z) = \sum_{n=0}^{\infty} \frac{2^nz^{2n}}{n^{100}}$. Here it is easier to use the root test: $\lim ~ \left| \frac{2^nz^{2n}}{n^{100}} \right|^{1/n} = \lim ~ \left| \frac{2z^2}{(n^{1/n})^{100}} \right|= 2|z|^2 < 1$. Note, we do not in general have $|z|^2 = z^2$; that would clearly be wrong, since $z$ is a complex variable and $z^2$ need not even be real. Thus, $|z|^2 < \frac{1}{2}$, and since $|z|\geq 0$ this is equivalent to simply writing $|z| < \frac{1}{\sqrt{2}}$. Thus, the radius of convergence is $R=\frac{1}{\sqrt{2}}$.
• Dec 23rd 2007, 08:49 PM
ThePerfectHacker
We will now explore the notion of differentiation in the complex plane. First, a complex-valued function is just like a real-valued function except the input values and output values can be complex numbers. Thus for example $f(z) = z^2$ is a complex-valued function. Note, if we write $z=x+iy$ then we can think of $f(z)$ as $f(x+iy) = (x+iy)^2 = (x^2-y^2)+2xyi$. Similarly, any complex-valued function can in general be written as $u(x,y)+iv(x,y)$ for some real-valued functions $u(x,y),v(x,y)$. We refer to $u(x,y)$ as the real part of $f(z)$, sometimes written $\Re f(z)$, and to $v(x,y)$ as the imaginary part of $f(z)$, sometimes written $\Im f(z)$. However, there is a minor problem. Unlike single- or two-variable functions, which we can graph in 2 or 3 dimensions, we cannot graph complex-valued functions: a function of the form $u(x,y)+iv(x,y)$ takes two input variables and produces two output variables (the real and imaginary parts). But that is not a serious problem; we can graph the real and imaginary parts independently if we really need to.

Let $f(z)$ be a complex-valued function defined around some point $c$. We say $f(z)$ is differentiable at $c$ when $\lim_{z\to c}\frac{f(z)-f(c)}{z-c}$ exists, or if you prefer, $\lim_{h\to 0}\frac{f(c+h)-f(c)}{h}$. We denote the limit by $f'(c)$. Note, here the limit is taken as we approach the point along all possible paths; unlike in ordinary Calculus, where a limit simply means left- and right-handed, this limit has infinitely many different approaches and they all need to agree (think of this as a two-variable limit in Calculus 3 if you want to; we are approaching the point from all possible directions). This is why complex-valued functions that can be differentiated are so much better behaved than in Calculus.

Definition 1: Let $f(z)$ be differentiable at a point $c$; we then say that $f(z)$ is analytic at $c$.

I will use the term analytic instead of differentiable to make a distinction between the complex-analysis derivative and the Calculus derivative.

Example 6: Consider $f(z) = z^2$ and let $c$ be any point. Now $\lim_{z\to c}\frac{f(z) - f(c)}{z-c} =\lim_{z\to c}\frac{z^2 - c^2}{z-c} = \lim_{z\to c}(z+c)=2c$. Thus, $f'(z) = 2z$, just as expected.

It can be shown easily that polynomials can be differentiated in the same way as with the Power Rule for derivatives. In fact, all the properties of derivatives (the sum, difference, product, quotient, and chain rules) are preserved. So basically everything is the same except instead of $x$ we have $z$.

Now we reach a very important theorem whose proof we will omit (again, this tutorial is made as simple as possible; writing proofs would make it harder). Before going into the theorem, there is something called open sets. An open set is like an open interval $(a,b)$: something that does not contain its boundary (or its endpoints). For example, $|z| < 1$ is the disk $x^2+y^2 < 1$, which is open because its boundary $x^2+y^2 = 1$ is not contained in it. A closed set is, for example, $|z| \leq 1$, the disk $x^2+y^2 \leq 1$, which does contain its boundary. When we talk about differentiation, we talk about it on open sets, because it makes no sense to take the quotient limit on the boundary, since the limit needs to exist from all sides. Just like we never talk about differentiation on a closed interval $[a,b]$: it makes no sense to talk about the derivative at $a$ because the limit from the left does not exist. Therefore, differentiation of complex-valued functions is always done on open sets. There is just one problem: I did not define mathematically what open sets actually are, and I am not going to, because that requires topology, again something which would make this tutorial more difficult. Thus, simply think of open sets as sets in the complex plane which do not contain their boundary. If this open-set notion seems confusing, I will try to avoid it as much as possible; instead, all theorems and computations are going to be done on open disks $|z| < R$ (for $R>0$) centered at $(0,0)$ of radius $R$.

Theorem 2: Suppose $f(z) = \sum_{n=0}^{\infty}a_n z^n$ has radius of convergence $R > 0$ then on the open disk $|z| < R$ the function $f(z)$ is analytic and its derivative is given by $f'(z) = \sum_{n=1}^{\infty}na_nz^{n-1}$.

We define $e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}$, $\sin z = \sum_{n=0}^{\infty} \frac{(-1)^nz^{2n+1}}{(2n+1)!}$, $\cos z = \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n}}{(2n)!}$.

We can show $e^z,\sin z,\cos z$ have radius of convergence $\infty$, and by Theorem 2 the derivative of $e^z$ is itself, the derivative of $\sin z$ is $\cos z$, and the derivative of $\cos z$ is $-\sin z$.

The next identity is very important. I am not going to derive it because I have probably seen people derive it at least 10 times in person, and it would be really boring for me to do it again. It is one of those things people like to derive. It is easy if you expand everything in infinite series.

Euler Identity: For any complex number $z$ we have $e^{iz} = \cos z + i\sin z$.

Example 7: We can find that $e^{\pi i} = \cos \pi + i \sin \pi = -1$. In fact, $e^{\pi i + 2\pi k i} = -1$ for any integer $k$. This shows that the equation $e^z = -1$ has infinitely many solutions in the complex plane. This also shows there is no such thing as the logarithm, because the exponential function is not one-to-one. (There are restricted logarithm functions, but we will not worry about them.)
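Python's `cmath` module can confirm both claims in Example 7 (a sanity check, not a proof):

```python
import cmath

# e^{i*pi} = -1, and adding any multiple of 2*pi*i does not change the value.
print(cmath.exp(1j * cmath.pi))  # approximately -1+0j
solutions = [cmath.exp(1j * cmath.pi + 2j * cmath.pi * k) for k in range(-3, 4)]
print(max(abs(w + 1) for w in solutions))  # essentially 0
```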

There are two more important functions defined by power series: $\sinh z = \sum_{n=0}^{\infty} \frac{z^{2n+1}}{(2n+1)!}$ and $\cosh z = \sum_{n=0}^{\infty} \frac{z^{2n}}{(2n)!}$. These are analytic everywhere too, and $(\sinh z)' = \cosh z$ and $(\cosh z)' = \sinh z$. They are basically sine and cosine without the alternation of signs. They are called hyperbolic sine and hyperbolic cosine. Note $\sinh z + \cosh z = e^z$. It can also be easily shown that $\frac{1}{2}(e^z - e^{-z}) = \sinh z$ and $\frac{1}{2}(e^z + e^{-z}) = \cosh z$.

We have some properties that should be familiar.

Theorem 3: Let $a,b$ be complex numbers and $n$ an integer:
1) $e^a\cdot e^b = e^{a+b}$
2) $\sin (a\pm b) = \sin a \cos b \pm \cos a \sin b$
3) $\cos (a\pm b) = \cos a \cos b \mp \sin a \sin b$
4) $\left( e^a \right)^n = e^{na}$
5) $\sinh (a\pm b) = \sinh a \cosh b \pm \cosh a \sinh b$
6) $\cosh (a\pm b) = \cosh a\cosh b \pm \sinh a \sinh b$

Warning! The rule $(e^a)^b = e^{ab}$ no longer works.

Note $e^{iz} = \cos z + i\sin z$ and $e^{-iz} = \cos z - i\sin z$. Thus, $\frac{1}{2}(e^{iz} + e^{-iz}) = \cos z$ and $\frac{1}{2}(e^{iz} - e^{-iz}) = i\sin z$. Or more elegantly in hyperbolic functions $\cosh (iz) = \cos z$ and $\sinh (iz) = i\sin z$. Make the substitution $z=iw$ to get $\cosh (-w) = \cos (iw)$ and $\sinh (-w) = i\sin (iw)$. Now $\cosh (-w) = \cosh w$ thus $\cos (iw) = \cosh w$. And $\sinh (-w) = - \sinh w$ thus $i \sin (iw) = -\sinh w$ multiply both sides by $-i$ to get $\sin (iw) = i\sinh w$. This gives us a simple theorem.

Theorem 4: For any complex number $z$ we have:
1) $\sinh (iz) = i\sin z$
2) $\cosh (iz) = \cos z$
3) $\sin (iz) = i \sinh z$
4) $\cos (iz) = \cosh z$.

The significance of Theorem 4 is for computational work.

Example 8: Let us compute $\sin (x+iy)$ for arbitrary real $x,y$. We can use Theorem 3 property 2 to obtain $\sin x \cos (iy) + \cos x \sin (iy)$. Now using Theorem 4 property 3 together with the identity $\cos (iy) = \cosh y$, we get $\sin x \cosh y + i \cos x \sinh y$. The reason why this is easier is because now everything is real-valued.
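The formula from Example 8 is easy to test against a library implementation; here is a Python check at an arbitrarily chosen point:

```python
import cmath, math

# Check sin(x+iy) = sin x cosh y + i cos x sinh y against cmath.sin.
x, y = 1.2, -0.7
lhs = cmath.sin(x + 1j * y)
rhs = math.sin(x) * math.cosh(y) + 1j * math.cos(x) * math.sinh(y)
print(abs(lhs - rhs))  # essentially 0
```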

Example 9: We will solve the equation $\sin z = i$. Think of $z=x+iy$; by the previous example, $\sin x \cosh y + i \cos x\sinh y = 0 + i$, which means $\sin x\cosh y = 0$ and $\cos x\sinh y = 1$. Look at the first equation: since $\cosh y \not = 0$ it means $\sin x = 0$ and so $x=\pi k$. But then $\cos \pi k \sinh y = 1\implies (-1)^k \sinh y = 1\implies \sinh y = (-1)^k$. To solve this equation write $\frac{1}{2}(e^y - e^{-y}) = (-1)^k$; multiply both sides by $2e^y$ to get $e^{2y} - 1 = 2(-1)^ke^y \implies e^{2y} - 2(-1)^k e^y - 1 =0$. By the quadratic formula, $e^y = \frac{2(-1)^k \pm \sqrt{8}}{2} = (-1)^k \pm \sqrt{2}$. It cannot be the minus sign, since that would make the RHS negative while $e^y > 0$, so it must be the plus; thus $e^y = (-1)^k + \sqrt{2} \implies y = \ln (\sqrt{2} + (-1)^k)$. This means all the solutions are: $z = \pi k + i \ln (\sqrt{2}+(-1)^k)$ for $k \in \mathbb{Z}$.
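We can plug the claimed family of solutions from Example 9 back into $\sin z$ with Python to confirm (a numerical spot check over a few values of $k$):

```python
import cmath, math

# Verify z = pi*k + i*ln(sqrt(2) + (-1)^k) satisfies sin z = i for several k.
worst = 0.0
for k in range(-3, 4):
    z = math.pi * k + 1j * math.log(math.sqrt(2) + (-1) ** k)
    worst = max(worst, abs(cmath.sin(z) - 1j))
print(worst)  # essentially 0
```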

Thus, we now know how to differentiate polynomials, exponentials, trigonometric, and hyperbolic functions. That is all we will need in what follows.
• Dec 23rd 2007, 08:56 PM
ThePerfectHacker
Now we turn to the next issue, integration. In the complex plane integration is much more interesting because we integrate over curves in the plane. Before going into integration let us first discuss what a curve is.

Definition 2: Let $f:[a,b]\mapsto \mathbb{C}$ be a continuous function. The image $f([a,b]) = \{ f(x) | x \in [a,b] \}$ is a curve.

First let us understand the notation: $[a,b]$ stands for the closed interval with endpoints $a,b$, and $\mathbb{C}$ represents the complex numbers. And $f:[a,b]\mapsto \mathbb{C}$ means the function $f(t)$ maps the interval $[a,b]$ into the complex numbers. The reason why we make $f(t)$ continuous is so that the curve stays all in a single piece.

Example 10: Consider $f(t) = it$ on the interval $[0,1]$. Then the curve is the set $\{ f(t) | 0\leq t\leq 1\} = \{ it | 0\leq t\leq 1\}$, which is a line segment with endpoints $(0,0)$ and $(0,1)$.

Example 11: Consider $f(t) = t+it^2$ on $[-1,1]$. Then the curve is a parabola $y=x^2$ starting at $x=-1$ and ending at $x=1$.

Example 12: A very important curve is $f(t) = e^{it}$ on $[0,2\pi]$; this is the unit circle. Because $f(t) = \cos t + i\sin t$, as $t$ moves from $t=0$ to $t=2\pi$ it traces out the unit circle. Think of this as polar coordinates.

In Examples 10 and 11 the curves were not closed, while in Example 12 the curve was closed. Here is the precise definition.

Definition 3: A curve is closed when $f(a) = f(b)$.

Thus, the definition is basically saying that if the starting and ending points are the same, then the curve returns back to where it started. But a very important type of curve for us is the following.

Definition 4: A curve is a simple closed curve when $f(x) \not = f(y)$ for $x\not = y$, unless $x,y$ are the endpoints $a,b$.

In simple terms Definition 4 is saying a curve is a simple closed curve if it does not have any self intersections (except the endpoints). Thus, for example the infinity symbol $\infty$ is a closed curve but it is not a simple closed curve because it intersects itself, however a circle is a simple closed curve.

All the standard curves that we can think of are also differentiable.

Definition 5: A curve $f:[a,b]\mapsto \mathbb{C}$ is smooth if it is differentiable on $(a,b)$ (here "differentiable" means the following: we can think of $f(t)$ as $g(t)+ih(t)$, so that $g(t),h(t)$ are differentiable on $(a,b)$).

Example 13: Consider the following curve. Draw a straight line from $-1+0i$ to $1+0i$. Then draw a semi-circle returning back to $-1$. This curve is not smooth, because it fails to be differentiable at $\pm 1$ (remember, sharp corners imply a curve is not differentiable). However, the curve is piecewise smooth, meaning we can break it up into finitely many pieces so that each piece is smooth.

Example 13 illustrates the next definition.

Definition 6: A curve $f:[a,b]\mapsto \mathbb{C}$ is piecewise smooth when we can break the interval $[a,b]$ into $\{ a=x_0 < x_1 < ... < x_{n-1} < x_n =b\}$ so that $f$ is a smooth curve on $[x_{k-1},x_k]$ for $1\leq k \leq n$.

Finally, we need to discuss something called orientation. A simple closed curve can be drawn clockwise or counterclockwise. A simple closed curve is said to be positively oriented if it is drawn counterclockwise; otherwise it is called negatively oriented. So, for instance, $f(t) = e^{it}$ for $0\leq t\leq 2\pi$ is positively oriented, because as we draw the curve it traces out in a counterclockwise rotation. All curves that we will work with in this tutorial will be positively oriented piecewise smooth simple closed curves. Instead of saying all of that we have a special name.

Definition 7: A curve is a contour when it is a positively oriented piecewise smooth simple closed curve.
• Dec 25th 2007, 08:29 PM
ThePerfectHacker
Now we are ready to define how integration works for complex numbers.

Definition 8: Suppose $f:[a,b]\mapsto \mathbb{C}$ is a continuous function, meaning if we write $f(t) = g(t)+ih(t)$ then each component $g(t),h(t)$ is continuous on $[a,b]$. Then we define $\int_a^b f(t) dt = \int_a^b g(t) dt + i \int_a^b h(t)dt$.

Thus, basically we integrate the real part of the function and the imaginary part of the function.

Example 14: Let $f(t) = t+it^2$ on the interval $[0,1]$ then by definition, $\int_{0}^1 f(t) dt = \int_{0}^1 tdt + i\int_0^1 t^2 dt = \frac{1}{2} + i \frac{1}{3}$.

That definition is the first step in defining integration along a curve. Suppose $C$ is a smooth curve, meaning we can parametrize $C$ as $f:[a,b] \mapsto \mathbb{C}$ such that $f(t)$ is differentiable on $(a,b)$ (meaning its real and imaginary components are differentiable). Here is the definition.

Definition 9: Suppose $g:[a,b]\mapsto \mathbb{C}$ is a smooth curve $C$ and $f(z)$ is a continuous complex-valued function on $C$. Then we define:
$\int_C f(z) dz = \int_a^b f(g(t))g'(t)dt$.
(This definition makes sense because the RHS has the form of Definition 8 which we already defined).

Note, I have been using $f(t)$ to represent a parametrization of a curve all the way up to Definition 9. I had to switch to a different letter because otherwise it would be sloppy, since I am also using $f(z)$.

Example 15: Let $g(t) = it$ for $0\leq t\leq 1$, so this is a curve $C$ which is a straight line from $0+0i$ to $0+1i$. Let $f(z) = z^2$. Then $\int_C f(z) dz = \int_0^1 f(g(t))g'(t)dt = \int_0^1 (it)^2(i)dt = -i\int_0^1 t^2 dt = -\frac{i}{3}$.
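Definition 9 also tells us how to approximate such an integral numerically: sample $f(g(t))g'(t)$ and sum. Here is a Python sketch reproducing Example 15 (the midpoint rule and the step count are arbitrary choices):

```python
# Riemann-sum approximation of the contour integral in Example 15:
# f(z) = z^2 along g(t) = i*t, 0 <= t <= 1, with g'(t) = i. Exact answer: -i/3.
N = 100_000
dt = 1 / N
total = 0
for j in range(N):
    t = (j + 0.5) * dt                  # midpoint of the j-th subinterval
    total += ((1j * t) ** 2) * 1j * dt  # f(g(t)) * g'(t) * dt
print(total)  # approximately -0.3333j
```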

Just to mention this point again: we say $g(t)$ is differentiable when its real and imaginary components are differentiable, meaning if we write $g(t) = u(t)+iv(t)$ then $g'(t) = u'(t) + iv'(t)$; that is what we did in Example 15 implicitly.

Sometimes we can parametrize the same curve in multiple ways; for example, the line segment $C$ in Example 15 can also be parametrized by $h(t) = 2it$ for $0\leq t\leq 1/2$. This does not matter, because the complex integral along the curve stays the same (in fact, try the same problem with $h(t)$ and get convinced that it is the same). However, there is one minor point we need to watch out for. Suppose in Example 15 we parametrized the line as $r(t) = i(1-t)$ for $0\leq t\leq 1$; then the answer would have been the same except negative (in this case $i/3$). So what is going on? When we parametrize $C$ by $g(t) = it \mbox{ on }[0,1]$ and $h(t) = 2it \mbox{ on }[0,1/2]$, note that $g(0)=h(0) \mbox{ and }g(1) = h(1/2)$, meaning the starting and ending points of the two parametrizations are the same. In that case, no matter which parametrization we choose, the answer we get will be the same. However, if we parametrize the line segment as $g(t) = it \mbox{ on }[0,1]$ and $r(t) = i(1-t) \mbox{ on }[0,1]$, then $g(0) = r(1) \mbox{ and }g(1) = r(0)$: the starting and ending points get swapped, so $r$ parametrizes exactly the same curve but traverses it in the opposite direction. In that case you get the negative of the answer. Thus, when parametrizing curves you need to watch out for the possibility of parametrizing the curve in the direction opposite to the one intended.

We can now state a more general definition using Definition 9.

Definition 10: Let $g:[a,b]\mapsto \mathbb{C}$ be a piecewise smooth curve $C$, i.e. we can decompose the interval $[a,b]$ into subintervals $\{a=x_0< x_1< ... < x_n = b\}$ such that the curve is smooth on $[x_{k-1},x_k]$.
And suppose that $f(z)$ is continuous on $C$. Then we define $\int_C f(z) dz = \int_{C_1}f(z)dz + ... + \int_{C_n}f(z) dz$ where $C_1$ is the curve on $[x_0,x_1]$, $C_2$ is the curve on $[x_1,x_2]$, ... , and $C_n$ is the curve on $[x_{n-1},x_n]$.

The above definition might look complicated, but there is nothing complicated about it. All it says is that we break the curve into smooth parts and sum the integrals over each individual part.

Example 16: Consider $C$, the line segment from $-1$ to $1$, followed by the upper unit circle traversed counterclockwise back to $-1$; so $C$ is the boundary of a half-disk. Remember, depending on which orientation we choose we can get a different answer up to sign. Suppose we choose the positive orientation. Let $g:[-1,1]\mapsto \mathbb{C}$ be defined as $g(t) = t$; thus $g(t)$ is the line segment $C_1$. And $h:[0,\pi]\mapsto \mathbb{C}$ with $h(t) = e^{it}$ is the semi-circle $C_2$. For simplicity's sake let us choose $f(z) = z$. Then $\int_C zdz = \int_{C_1}zdz + \int_{C_2}zdz = \int_{-1}^1 (t)\cdot (1) dt + \int_{0}^{\pi} (e^{it})\cdot (ie^{it}) dt = 0$.
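We can confirm Example 16 numerically by midpoint sums over the two pieces (Python, illustration only):

```python
import cmath, math

# Example 16: integrate f(z) = z over the segment C1 (g(t) = t on [-1,1]) and
# the semi-circle C2 (h(t) = e^{it} on [0,pi]); the two pieces should sum to 0.
N = 20_000
seg = sum((-1 + 2 * (j + 0.5) / N) * (2 / N) for j in range(N))
arc = sum(cmath.exp(1j * t) * 1j * cmath.exp(1j * t) * (math.pi / N)
          for t in ((j + 0.5) * math.pi / N for j in range(N)))
print(abs(seg + arc))  # essentially 0
```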

In Example 16 the curve $C$ was actually a contour. In fact, most integrals that we will work with are going to be over contours. We use a special symbol.

Definition 11: Suppose $\Gamma$ is a contour (that is a piecewise smooth simple closed positively oriented curve). Then we write $\oint_{\Gamma}f(z)dz$ instead to show the significance of integrating over a contour.

Here is a very important contour integral.

Theorem 5: Let $R>0$ then $\oint_{|z|=R} z^n dz = 0$ for all integers $n\not = -1$. And $\oint_{|z|=R}\frac{dz}{z} = 2\pi i$.

Proof: We are integrating over the circle $|z|=R$, which can be parametrized as $g(t) = Re^{it}$ for $0\leq t\leq 2\pi$. Then that means, $\oint_{|z|=R} z^n dz = \int_0^{2\pi} \left( Re^{it} \right)^n \left( Re^{it} \right)' dt = \int_0^{2\pi} R^ne^{nit} \cdot iRe^{it} dt = iR^{n+1} \int_0^{2\pi} e^{(n+1)it}dt$
Now using Euler's identity,
$iR^{n+1}\left( \int_0^{2\pi} \cos (n+1)t ~ dt + i \int_0^{2\pi}\sin (n+1)t ~ dt \right)$
If $n\not = -1$ we can write,
$iR^{n+1}\left( \frac{1}{n+1} \sin (n+1)t\bigg|_0^{2\pi} - \frac{i}{n+1}\cos (n+1)t\bigg|_0^{2\pi} \right) = 0$
Otherwise, if $n=-1$, then $R^{n+1} = 1$ and we have $i\left( \int_0^{2\pi} 1 dt + i\int_0^{2\pi} 0dt \right) = 2\pi i$. Q.E.D.
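Theorem 5 is easy to test numerically; the sketch below uses $R = 2$ and a midpoint sum (both arbitrary choices):

```python
import cmath, math

# Midpoint-rule approximation of the contour integral of z^n over |z| = R,
# using g(t) = R e^{it}, g'(t) = i R e^{it}, 0 <= t <= 2*pi.
def circle_integral(n, R=2.0, N=20_000):
    total = 0
    for j in range(N):
        t = (j + 0.5) * 2 * math.pi / N
        g = R * cmath.exp(1j * t)
        total += g ** n * 1j * R * cmath.exp(1j * t) * (2 * math.pi / N)
    return total

print(abs(circle_integral(3)))  # essentially 0, since n != -1
print(circle_integral(-1))      # approximately 2*pi*i
```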
• Dec 27th 2007, 01:42 PM
ThePerfectHacker
In this lecture we mention some properties of the contour integral.

First, suppose $f: [a,b]\mapsto \mathbb{C}$ is a continuous function, meaning if we write $f(t) = g(t)+ih(t)$ then $g(t),h(t)$ are continuous. Now if we want to compute $\int_a^b f(t) dt$, we have $\int_a^b f(t) dt = \int_a^b g(t)dt+i\int_a^b h(t)dt = [G(b)-G(a)]+i[H(b)-H(a)]$, where $G(t)$ and $H(t)$ are continuous anti-derivatives of $g(t)$ and $h(t)$, by the first Fundamental Theorem of Calculus. Hence, if we define $F(t) = G(t)+iH(t)$, then $\int_a^b f(t)dt = F(b)-F(a)$. Here is the theorem.

Theorem 6: Let $f:[a,b]\mapsto \mathbb{C}$ be a continuous function. Let $F(t)$ be a continuous function on $[a,b]$ such that $F'(t) = f(t)$ on $(a,b)$ (this is the componentwise derivative, as explained before). Then $\int_a^b f(t)dt = F(b)-F(a)$.

The above theorem will be used to prove a much more useful theorem.

Theorem 7: Let $f(z)$ be a continous complex-valued function on a smooth curve $\Gamma$. Let $g(t)$ be a parametrization of the curve. If $F(z)$ is analytic on an open disk containing $\Gamma$ such that $F'(z) = f(z)$ for all $z$ on $\Gamma$ then $\int_{\Gamma} f(z) dz = F(g(b)) - F(g(a))$.

Proof: The theorem might look hard with all the pre-conditions that need to be satisfied, i.e. continuity, analyticity, ..., but that is how analysis theorems are; get used to them. It is really simple: by definition, $\int_{\Gamma}f(z) dz = \int_a^b f(g(t))g'(t)dt$. Now note that the componentwise derivative $[F(g(t))]' = F'(g(t))g'(t) = f(g(t))g'(t)$ by the chain rule. Thus, by Theorem 6, $\int_a^b f(g(t))g'(t) dt = F(g(b))-F(g(a))$. Q.E.D.

Let me try to explain what is going on theoretically in the theorem. We cannot simply say "suppose $F(z)$ is an analytic function such that $F'(z) = f(z)$ on $\Gamma$", because, as explained in the previous lecture, it makes no sense to talk about a function being differentiable on a curve itself: $\Gamma$ is just a curve, and the function is not defined in a region around it. Hence, that is why we require $F(z)$ to be analytic on an open disk containing the curve. As I said before, we will not go into any topology, to keep this tutorial as simple as possible. But we could have made the theorem a little stronger: instead of saying analytic on an open disk, we could have written analytic on an open set, but we will not worry about that.

Example 17: Suppose we want to find $\int_{\Gamma} \sin z dz$ where $\Gamma$ is a line from $-\pi$ to $\pi$. By Theorem 7 we find an anti-derivative in this case $-\cos z$. Thus, the integral is $-\cos (\mbox{endpoint}) + \cos (\mbox{startpoint}) = -\cos \pi + \cos( -\pi ) = 0$.

We can strengthen Theorem 7 from smooth to piecewise smooth curves.

Theorem 8: Let $f(z)$ be a continuous complex-valued function on a piecewise smooth curve $\Gamma$. Let $g(t)$ be a parametrization of the curve. If $F(z)$ is analytic on an open disk containing $\Gamma$ such that $F'(z)=f(z)$ for all $z$ on $\Gamma$, then $\int_{\Gamma} f(z)\, dz = F(g(b))-F(g(a))$.

Proof: Really easy. Let $\{ a=x_0 < x_1 < ... < x_n = b\}$ be the points that break the curve into smooth curves on $(x_0,x_1),...,(x_{n-1},x_n)$. Then by Definition 10, $\int_{\Gamma}f(z) dz = \int_{\Gamma_1}f(z)dz+...+\int_{\Gamma_n}f(z)dz$. Now apply Theorem 7 to each piece and we get $[F(g(x_1))-F(g(x_0))]+[F(g(x_2))-F(g(x_1))]+...+[F(g(x_n))-F(g(x_{n-1}))]$ $= F(g(x_n))-F(g(x_0))=F(g(b))-F(g(a))$. Q.E.D.

And again we could greatly strengthen the theorem by replacing open disk with open set, but we are going easy on topology here. The theorem says exactly the same thing as Theorem 7, but adds that the result still works even if the curve has sharp (non-differentiable) corners.

Example 18: Suppose $\Gamma$ is the following contour: draw a line segment from $-1$ to $1$, then traverse the unit circle counterclockwise back to $-1$. Let $f(z) = e^z$. We will show that $\oint_{\Gamma}e^z dz = 0$ without any computation. First let $F(z) = e^z$; then $F'(z) = e^z$. But let us be a little more formal: let $F(z) = e^z$ on the open disk $|z|<2$; then $F'(z) = e^z$ on $\Gamma$, which lies inside the disk. Since we found an anti-derivative everything is easy: $\oint_{\Gamma}f(z) dz = F(\mbox{endpoint}) - F(\mbox{startpoint})$. But the starting and ending points are the same, because this is a closed curve. Thus, $\oint_{\Gamma}e^z dz = 0$.

Example 19: Let $f(z) = 3z^2\cos (z^3)$. Let $\Gamma_1$ be the semi-circular arc from $-i$ to $i$. Let $\Gamma_2$ be the parabolic arc from $-i$ to $i$. We will show that $\int_{\Gamma_1}f(z) dz = \int_{\Gamma_2}f(z) dz$ without any computation. Let $F(z) = \sin (z^3)$ on $|z| < 2$ then $F'(z) = f(z)$ on these curves. Thus, it means $\int_{\Gamma_1}f(z) dz = F(i)-F(-i)$ and $\int_{\Gamma_2}f(z)dz = F(i)-F(-i)$. The exact same value, so it did not depend on the curve between $-i$ and $i$.
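Here is a Python illustration of the path independence in Example 19. For simplicity it uses a straight segment from $-i$ to $i$ in place of the parabolic arc (same endpoints, which is all that matters) and an assumed parametrization of the right semi-circular arc:

```python
import cmath, math

# Integrate f(z) = 3 z^2 cos(z^3) along two paths from -i to i; both should
# equal F(i) - F(-i), where F(z) = sin(z^3).
def path_integral(g, dg, N=20_000):
    h = 1 / N
    return sum(3 * g(t) ** 2 * cmath.cos(g(t) ** 3) * dg(t) * h
               for t in ((j + 0.5) * h for j in range(N)))

# Right semi-circular arc: e^{i(pi*t - pi/2)} runs from -i to i as t goes 0 -> 1.
arc = path_integral(lambda t: cmath.exp(1j * (math.pi * t - math.pi / 2)),
                    lambda t: 1j * math.pi * cmath.exp(1j * (math.pi * t - math.pi / 2)))
# Straight segment: i*(2t - 1) also runs from -i to i.
seg = path_integral(lambda t: 1j * (2 * t - 1), lambda t: 2j)
exact = cmath.sin(1j ** 3) - cmath.sin((-1j) ** 3)
print(abs(arc - seg), abs(seg - exact))  # both essentially 0
```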

We will explain what is going on in Example 18 and 19 and state these results in a Corollary. By Theorem 8 if we can find an anti-derivative then the integral only depends on the starting and ending points not on the curve itself! In Example 18 the curve was a closed curve therefore the starting and ending point were identical. While in Example 19 we had two different curves but they started and ended in the same place. The following Corollaries illustrate this in general.

Corollary 1: Let $f(z)$ be a continuous complex-valued function on an open disk $|z| < R$ (for $R>0$) such that there exists an analytic function $F(z)$ with $F'(z) = f(z)$ for all points $|z| < R$ (inside the disk). Then $f(z)$ has the independence-of-path property within the disk, meaning $\int_{\Gamma} f(z)\, dz$ does not depend on the curve $\Gamma$ inside $|z| < R$, only on its starting and ending points.

Corollary 2: Let $f(z)$ be a continuous complex-valued function on an open disk $|z| < R$ (for $R>0$) such that there exists an analytic function $F(z)$ with $F'(z) = f(z)$ for all points $|z| < R$ (inside the disk). Then $\oint_{\Gamma}f(z)\, dz = 0$ for any contour $\Gamma$ lying within $|z| < R$.

In simplest terms, Corollary 2 is saying that if we integrate over closed curves the integral vanishes. Of course, it is not so simple; consider Theorem 5, where it does not work. In order for the Corollary to apply, its conditions need to be satisfied. If we had $f(z) = z^{-1}$, then we would need to find an anti-derivative $F(z)$ analytic on the full disk satisfying $F'(z) = z^{-1}$. But there are problems: it turns out we cannot find such an $F(z)$ on the full disk, and if you want to say "how about $F(z) = \log z$?", remember we never defined the complex logarithm; as said before, a true logarithm does not exist in complex analysis because $e^z$ is not one-to-one, so we cannot simply say that. Therefore the Corollary does not apply to $z^{-1}$.

Corollary 2 is surprising: it says that if we can find an anti-derivative, then integration over closed paths is always zero. It turns out this result can be greatly improved. The following theorem is one of the great results of complex analysis (the greatest being the Riemann Mapping Theorem, from Riemann's doctoral thesis). It is attributed to Augustin Cauchy and has many important applications in complex analysis. Of course, since it is a milestone theorem its proof is not easy, so it will be omitted.

Theorem 9: Let $f(z)$ be analytic on an open disk $|z| < R$ (for $R>0$). Then for any contour $\Gamma$ lying in $|z| < R$ we have $\oint_{\Gamma}f(z)\, dz = 0$.

Note how much stronger this is than Corollary 2: in the Corollary we are required to find an anti-derivative, while Cauchy's theorem says that as long as the function is analytic we do not even have to find an anti-derivative, we know the integral is zero! What is written above is not the true Cauchy theorem in its full strength. The true theorem again requires topology, specifically the notion of a simply connected set. I will try to explain what simple connectivity means geometrically. First, a set in $\mathbb{C}$ is connected iff it is all in a single piece. For example, $1<|z|<2$ is an annulus, the region between two concentric circles. This set is connected because it is in a single piece. However, for a connected set to be simply connected we require that it has no holes. For example, $1<|z|<2$ is not simply connected because its middle is missing. On the other hand $|z|<1$, the open unit disk, is simply connected. But if we remove the origin from the unit disk and form what is called a punctured disk, i.e. $0<|z|<1$, then it is no longer simply connected. With this notion we can state the true Cauchy theorem: Let $f(z)$ be analytic on an open simply connected set $\Omega$, and suppose that $\Gamma$ is a contour lying wholly in $\Omega$. Then $\oint_{\Gamma} f(z) dz = 0$.
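Cauchy's theorem lends itself to a quick numerical sanity check. The sketch below (Python, my own addition, not part of the original lecture) approximates two contour integrals over the unit circle by a Riemann sum: an entire function such as $e^z$ integrates to (nearly) zero, while $z^{-1}$, which is not analytic at $0$, gives $2\pi i$ as in Theorem 5.

```python
import cmath
from math import pi

def contour_integral(f, gamma, dgamma, n=20000):
    """Approximate the integral of f(z) dz over z = gamma(t), t in [0, 2*pi],
    by the Riemann sum of f(gamma(t)) * gamma'(t) * dt."""
    dt = 2 * pi / n
    return sum(f(gamma(k * dt)) * dgamma(k * dt) * dt for k in range(n))

circle = lambda t: cmath.exp(1j * t)        # unit circle
dcircle = lambda t: 1j * cmath.exp(1j * t)  # its derivative

# e^z is analytic everywhere, so the closed-curve integral vanishes.
print(abs(contour_integral(cmath.exp, circle, dcircle)))      # ≈ 0
# 1/z is not analytic at 0, and the integral is 2*pi*i instead.
print(contour_integral(lambda z: 1 / z, circle, dcircle))     # ≈ 2*pi*i
```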

We will conclude with an important inequality about contour integrals. First we begin with an easy result.

Theorem 10: Suppose $f:[a,b]\mapsto \mathbb{C}$ is a continuous function. Then $\left| \int_a^b f(t) dt\right| \leq \int_a^b |f(t)|dt$.

Proof: First, for any real-valued continuous function $g(t)$ we know that $\left| \int_a^b g(t) dt\right| \leq \int_a^b |g(t)|dt$. We will use this to extend the result to complex-valued functions. Choose $\theta$ such that $\left|\int_a^b f(t)dt \right| = e^{i\theta} \int_a^b f(t)dt$. Thus, $\left| \int_a^b f(t)dt\right| = \int_a^b e^{i\theta} f(t)dt = \int_a^b \Re (e^{i\theta}f(t)) dt + i\int_a^b \Im (e^{i\theta}f(t)) dt$. Since the LHS is a real number, the imaginary part is zero. Thus, $\left| \int_a^b f(t) dt \right| = \int_a^b \Re (e^{i\theta} f(t)) dt \leq \int_a^b \left| e^{i\theta} f(t) \right| dt$, because for any complex number $z$ we have $\Re z \leq |z|$. But $\left| e^{i\theta}f(t) \right| = |f(t)|$ because $|e^{i\theta}| = \sqrt{\cos^2 \theta + \sin^2 \theta} = 1$. This means $\left| \int_a^b f(t) dt \right| \leq \int_a^b |f(t)| dt$. Q.E.D.

Before going into the main inequality, there is one thing I forgot to mention in the lecture about curves in $\mathbb{C}$: length. Suppose $f:[a,b]\mapsto \mathbb{C}$ is a smooth curve with components $g(t),h(t)$; then the length of the curve is $\int_a^b |f'(t)|dt = \int_a^b \sqrt{[g'(t)]^2 + [h'(t)]^2} dt$. This formula should be familiar from Calculus.
With that we can state the important inequality, called in some places the Estimation Lemma.

Theorem 11: Let $g:[a,b]\mapsto \mathbb{C}$ be a smooth curve $\Gamma$, and let $f(z)$ be a continuous complex-valued function defined on $\Gamma$ which is bounded by $A$, meaning $|f(z)|\leq A$ for all $z$ on $\Gamma$. Then $\left| \int_{\Gamma} f(z) dz \right| \leq A \cdot s$ where $s$ is the arc length of $\Gamma$.

Proof: By definition $\int_{\Gamma} f(z) dz = \int_a^b f(g(t))g'(t) dt$. By Theorem 10 we have that $\left| \int_a^b f(g(t))g'(t)dt \right| \leq \int_a^b |f(g(t))||g'(t)| dt \leq A \int_a^b |g'(t)| dt = A\cdot s$. Q.E.D.
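As a numerical illustration (my own sketch, not from the text): for $f(z)=1/z$ on the circle $|z|=r$ the bound of Theorem 11 is tight, since $A = 1/r$ and $s = 2\pi r$ give $A\cdot s = 2\pi$, which is exactly the modulus of the integral.

```python
import cmath
from math import pi

def circle_integral(f, r, n=20000):
    # Riemann-sum approximation of the integral of f over the circle |z| = r
    dt = 2 * pi / n
    total = 0j
    for k in range(n):
        z = r * cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt   # dz = i z dt on the circle
    return total

r = 0.5
integral = circle_integral(lambda z: 1 / z, r)
bound = (1 / r) * (2 * pi * r)   # A * s with A = max|1/z| = 1/r, s = 2*pi*r
print(abs(integral), bound)      # both are 2*pi: the estimate is attained
```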
• Jan 10th 2008, 09:00 PM
ThePerfectHacker
In this lecture we will explore a more generalized form of Taylor's series which is called Laurent Series.

First let us state a theorem which is incredible.

Theorem 12: Suppose $f(z)$ is a complex-valued function which is analytic on an open disk $|z| < R$ (for $R>0$). Then it is infinitely differentiable on $|z| < R$ and $f(z) = \sum_{n=0}^{\infty}a_n z^n$ where $a_n = \frac{f^{(n)}(0)}{n!}$.

We will not prove this theorem because it takes a little work, but we note how elegant complex analysis is. In ordinary Calculus, if a function is differentiable on an open interval we cannot conclude that it is differentiable even twice. But in complex analysis a function that is differentiable even just once on an open disk is immediately infinitely differentiable! Also, in Calculus a function which is infinitely differentiable does not always converge to its Taylor series (the classic example is $f(x) = e^{-1/x^2}$ for $x\not = 0$ and $f(0)=0$; then $f^{(n)}(0)$ exists for all $n$ and $f^{(n)}(0)=0$, so its Taylor series is identically $0$, which is clearly not the function). But in the world of complex analysis it does!

Just an interesting fact about this theorem. It is proved using more integration theory than was discussed previously. However, nobody was ever able to prove this result from first principles; this is what Lars Ahlfors says in his complex analysis book (he won the Fields Medal in 1936). But it is not that hard once all the necessary results are known.

There is nothing special about the origin; we can have a disk centered at $c$ and we get a similar result.

Corollary 3: Suppose $f(z)$ is a complex-valued function which is analytic on the open disk centered at $c$, i.e. on $|z-c| < R$ (for $R>0$). Then it is infinitely differentiable on $|z-c| < R$ and $f(z) = \sum_{n=0}^{\infty} a_n (z-c)^n$ where $a_n = \frac{f^{(n)}(c)}{n!}$.

Example 20: There is really no need to do an example because this is basically Taylor series from Calculus, except we have a $z$ instead of an $x$. Let $f(z) = \frac{1}{z}$. We will expand this function in a Taylor series centered at $c=1$. In order for the disk $|z-1| < R$ not to hit the point $z=0$ (where the function is not analytic, so the theorem would not apply), the largest the radius can be is $R=1$. I want to mention that it is a good idea to draw pictures when doing a complex analysis problem, because complex analysis tends to be very geometric and seeing what you are doing helps a lot. Thus, draw a point at $0$ and a point at $1$. Now draw a circle around $1$ in such a way that it does not consume the point $0$, because that is the bad point. You will see the largest it can get is $R=1$. And when $R=1$ we are working on the open disk $|z-1|<1$, which does not contain $0$, so the theorem applies. Now just compute derivatives, just like in Calculus. We get (details omitted) that $f^{(n)}(z) = (-1)^n n! z^{-(n+1)}$, which means $f^{(n)}(1) = (-1)^n n!$. Thus, $a_n = (-1)^n$. By Corollary 3 we have: $\frac{1}{z} = \sum_{n=0}^{\infty} (-1)^n (z-1)^n$. Note this series converges to the function, so we can substitute any value inside $|z-1|<1$ and it will work.
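The series in Example 20 is easy to test numerically; this little check (mine, with an arbitrarily chosen sample point) sums the first couple hundred terms of $\sum (-1)^n(z-1)^n$ at a point with $|z-1|<1$ and compares against $1/z$.

```python
z = 1.3 + 0.2j           # sample point with |z - 1| < 1
partial = 0j
term = 1 + 0j            # holds (-1)^n (z-1)^n, starting at n = 0
for n in range(200):
    partial += term
    term *= -(z - 1)     # next term of the alternating geometric series
print(partial)
print(1 / z)             # the partial sum matches 1/z to many digits
```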

The next theorem is a more general form of Taylor's theorem, which will be important to us.
Before going into the theorem we will start using the term punctured disk, which is simply $0<|z|<R$: a disk centered at the origin with radius $R>0$ and missing the origin.

Theorem 13: Let $f(z)$ be a complex-valued analytic function on the open punctured disk $0<|z|<R$ (for $R>0$). Then we can write $f(z) = \sum_{n=1}^{\infty} b_n z^{-n} + \sum_{n=0}^{\infty} a_n z^n$, for some coefficients $a_n,b_n$. These coefficients are also unique.

Note that this is a series of positive and negative exponents. Thus, it is basically the Taylor series of a function except it can have negative exponents too. Note, we will use a more standard notation for that double sum. We can think of this series as $...+b_2z^{-2}+b_1z^{-1}+a_0 + a_1 z + a_2 z^2+...$; rename $b_k$ as $a_{-k}$, so it becomes $...+a_{-1}z^{-1}+a_0+a_1z+...$, and write $\sum_{n=-\infty}^{\infty} a_n z^n$. But the formal meaning of this doubly infinite series is what was written before, i.e. break it up into positive and negative exponents and treat each as a series. We will use the single summation from minus infinity to infinity because it takes less space and is also standard.

Example 21: Let $f(z) = e^{1/z}$ be defined on the open punctured disk $0<|z|<1$. Note that $f(z)$ is analytic on $0<|z|<1$, but not on $|z|<1$, because at $z=0$ we would have a zero denominator, which is blasphemy against math. Now we know that $e^w = 1+w+\frac{w^2}{2!}+...$, and thus $e^{1/z} = 1 + z^{-1} + \frac{1}{2!}z^{-2}+...$. This means that if we write $e^{1/z} = \sum_{n=-\infty}^{\infty} a_n z^n$ then $a_1=a_2=...=0$ while $a_{-n} = \frac{1}{n!}$ for $n\geq 0$. This is an illustration of the above theorem.

Note, Theorem 13 is stronger than Theorem 12 because it applies to a function defined on $0<|z|<R$ while Theorem 12 only applies to a function defined on $|z|<R$, and so Theorem 13 works on more general disks. Just note, if it turns out that the function $f(z)$ is actually analytic on the full disk (in Theorem 13) rather than only on the punctured disk, then the coefficients of the negative terms are zero, and the series becomes the regular Taylor series of the function. Whenever we expand a function into this more general series we shall refer to it as the Laurent series of the function.

Example 22: We will compute the Laurent series of $f(z)=z^3\sin \frac{1}{z}$ centered at $z=0$ on $0<|z|<2$. First, the function is analytic on this punctured disk, so the theorem assures us it is possible to find the Laurent series. Now we know that $\sin w = \sum_{n=0}^{\infty} \frac{(-1)^n w^{2n+1}}{(2n+1)!}$, which means $\sin \frac{1}{z} = \sum_{n=0}^{\infty} \frac{(-1)^n z^{-(2n+1)}}{(2n+1)!}$. But when we multiply by $z^3$ we get that $f(z) = \sum_{n=0}^{\infty} \frac{(-1)^n z^{-(2n-2)}}{(2n+1)!}$.
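A numerical check of Example 22 (my own sketch): truncate the Laurent series $\sum_{n\ge 0} (-1)^n z^{2-2n}/(2n+1)!$ and compare with $z^3\sin(1/z)$ at a sample point of the punctured disk.

```python
import cmath
from math import factorial

z = 0.7 - 0.4j   # any point with 0 < |z| < 2
direct = z**3 * cmath.sin(1 / z)
# Truncated Laurent series: z^3 times sum of (-1)^n z^{-(2n+1)} / (2n+1)!
series = sum((-1)**n * z**(2 - 2 * n) / factorial(2 * n + 1) for n in range(60))
print(direct)
print(series)    # agrees with the direct evaluation
```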

Again there is nothing special about expanding the Laurent series around the origin, we can simply shift everything over to a new center $c$. In that case we have:

Corollary 4: Suppose that $f(z)$ is an analytic function on the punctured disk $0<|z-c|<R$ (for $R>0$). Then we can express $f(z) = \sum_{n=-\infty}^{\infty} a_n (z-c)^n$. These coefficients are unique.

Example 23: Consider $f(z) = z^{-1}$. If we expand this function in a Laurent series around $0$ then there is nothing to do, because it is already in that form. However, say we wish to expand it in a Laurent series around $z=1$. Let $R>0$ be the radius of the punctured disk $0<|z-1|<R$. The question is how big we can possibly make $R$. Note that if we let $R=2$ then the punctured disk $0<|z-1|<2$ does not satisfy the conditions of Corollary 4, because within this punctured disk the function $f(z)$ is not analytic, since the disk contains the bad point $z=0$. Draw a picture to help visualize this. It should be obvious that $R=1$ is as large as we can make this disk. Working on this disk we will expand $z^{-1}$ into powers of $(z-1)$. Note, $z^{-1} = (1 + (z-1))^{-1}$. This is a geometric series since $|z-1|<1$, and so $(1+(z-1))^{-1} = 1 - (z-1) + (z-1)^2-... = \sum_{n=0}^{\infty}(-1)^n (z-1)^n$. Notice that we did not get any negative terms, because of the comment following Example 21. I just realized that this is identical to Example 20, but at least it is a different way of doing it.

The reason why we wish to expand a function in such a way will become apparent later on.
• Jan 14th 2008, 09:28 PM
ThePerfectHacker
In what follows we will call a point a singularity of a complex-valued function $f(z)$ when the function behaves 'badly' at that point.

Definition 11: Let $f(z)$ be defined around the point $c$ (except possibly on the point itself). The point $c$ is a singularity of $f(z)$ when $\lim_{z\to c}f(z)$ does not exist.

Example 24: Let $f(z) = z^{-1}$; then the point $z=0$ is a singularity because $\lim_{z\to 0}f(z)$ does not exist. Consider the function $g(z) = \left\{ \begin{array}{c}0 \mbox{ if }0<|z|<1\\1 \mbox{ if }z=0 \end{array}\right.$; then even though $g(z)$ is not continuous at $z=0$, still $z=0$ is not a singularity of $g(z)$ because $\lim_{z\to 0}g(z)$ exists. Note, we can redefine $g(z)$ at the point $z=0$ to get the function $h(z) = 0 \mbox{ on }|z|<1$, so that $h(z)$ is continuous at $z=0$. Thus, when the limit exists (as above), there is no 'bad behavior' at the point, because we can redefine the function at that point to its limit so that it is continuous there. Some books call this a removable singularity, i.e. the limit exists but does not equal the function value at that point. It is called 'removable' because we can 'remove' that point, redefining it to make the function continuous.

Using Laurent series from the previous lecture we will be able to categorize all the different types of singularities that come up. First, if $c$ is a point and $f(z)$ is analytic on $0<|z-c|<R$, for $R>0$, then by Corollary 4 we can write $f(z) = \sum_{n=-\infty}^{\infty} a_n (z-c)^n$ as a Laurent series. Now suppose that all the negative coefficients $a_{-1},a_{-2},...$ are zero (for example, if $f(z)$ is actually analytic on the whole disk $|z-c|<R$). Then $f(z) = \sum_{n=0}^{\infty} a_n (z-c)^n$, so $\lim_{z\to c} f(z) = a_0$ and hence the limit exists. Thus, the point $c$ cannot be a singularity; it can still be a removable singularity as in Example 24, but we do not worry about removable singularities. However, if there is a non-zero number among the negative coefficients $a_{-1},a_{-2},...$, for example $f(z) = \sum_{n=-1}^{\infty} z^n$, then the limit $\lim_{z\to 0} f(z)$ cannot exist, because the presence of negative exponents forces the function to blow up. Suppose now that there exists a lowest non-zero negative coefficient, i.e. $a_{-k} \not = 0$ and $a_{-(k+1)} = a_{-(k+2)} = ... =0$, for a singularity at a point $c$. Then $(z-c)^k f(z) = (z-c)^k \sum_{n=-k}^{\infty} a_n (z-c)^n = \sum_{n=0}^{\infty} a_{n-k}(z-c)^n$ on $0<|z-c| < R$, and so $\lim_{z\to c}(z-c)^k f(z) = a_{-k}$ exists. When this situation happens we say the singularity is a pole of order $k$. In fact, a function $f(z)$ has a pole of order $k$ if and only if $k$ is the smallest exponent which makes the limit $\lim_{z\to c}(z-c)^kf(z)$ exist and be finite. However, there is still the possibility that there is no lowest non-zero negative coefficient; in that case, no matter how big we choose $k$, it is impossible to make $\lim_{z\to c}(z-c)^kf(z)$ exist as a finite number. We call this singularity an essential singularity, and this is the worst possible behavior of a function.

Example 25: Let $f(z) = z^{-10} e^z$. This function clearly has a singularity at $z=0$ because $\lim_{z\to 0}z^{-10}e^z$ does not exist. If we wish to find the order $k$ of this pole we need to find the smallest positive number $k$ so that $\lim_{z\to 0}z^k f(z)$ exists, and that is of course $k=10$. Or, if we prefer, we can expand it in the Laurent series: $z^{-10} \left( 1 + z + \frac{z^2}{2!}+... \right) = z^{-10} + z^{-9} + \frac{z^{-8}}{2!}+...$. It is quickly evident that the lowest non-zero negative coefficient is at $n=-10$, and so this is a pole of order $10$.

Example 26: Let $f(z) = z^{-5} \sin z^2$. Certainly $z=0$ is a singularity. Expand the function into a Laurent series: $z^{-5} \left( z^2 - \frac{z^6}{3!}+... \right) = z^{-3}-\frac{z}{3!}+...$, and we see that $k=3$ is the order of the pole at $0$.

Example 27: Let $p(z),q(z)$ be non-constant polynomials having no common factors. If we define $f(z) = p(z)/q(z)$ then the zeros of $q(z)$ are precisely the singularities of $f(z)$, because $\lim_{z\to c}\frac{p(z)}{q(z)} = \infty$ where $c$ is a zero of $q(z)$: the numerator is non-zero (they share no common factors) while the denominator goes to zero. The order of the pole at $c$ is the multiplicity of $c$, because that is the number of times the factor $(z-c)$ appears. Thus, if we have $f(z) = (z-1)/(z^4+z^2)$ then the first step is to find the zeros of $z^4+z^2 = z^2(z^2+1)$, which are $0,i,-i$. Now $0$ is a pole of order $2$ because it is a zero of multiplicity $2$, while $\pm i$ are zeros of multiplicity $1$, so their orders are $1$.

Example 28: We will give the classic example of a function that has an essential singularity. Let $f(z) = e^{1/z}$; clearly $z=0$ is a singularity of the function. But how do we show it is an essential singularity? We need to show that no matter what positive integer $k$ we choose, it is impossible for $\lim_{z\to 0}z^k f(z)$ to exist. The easiest way to prove this is to look at the Laurent series, $e^{1/z} = 1 + \frac{1}{z}+\frac{1}{2!z^2}+...$; we see that the negative exponents have no end to them, unlike the examples above. This means that no matter how large we make $z^k$ we simply cannot clear the denominator of the negative exponents and thereby make the limit exist. That means $0$ is an essential singularity of $e^{1/z}$.

There are some remarkable properties of essential singularities. We will not need anything we are about to mention, but just for curiosity I will state them. The next theorem is called Weierstrass-Casorati Theorem, the proof is not hard, but I will omit it. Note, I am stating it in the special case when the singularities occur at $z=0$, but in reality it applies equally well at any point $z=c$.

Theorem 14: Let $f(z)$ be analytic on the punctured disk $0<|z|<R$ (for $R>0$) and let $0$ be an essential singularity. Then for any $\epsilon > 0$ and for any $r\in \mathbb{C}$ we have that $|f(z_0) - r| < \epsilon$ for some point $z_0$ on the punctured disk.

What the theorem is saying is that if a function has an essential singularity at $0$ then, for any complex number $r$ we choose, we can make the function $f(z)$ arbitrarily close (within $\epsilon$) to $r$.
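Theorem 14 can be made concrete for $e^{1/z}$ (my own sketch, not from the text): solving $e^{1/z}=r$ gives $z = 1/(\log r + 2\pi i k)$, and taking $k$ large pushes these solutions toward $0$, so the target $r$ is approached (in fact attained) arbitrarily close to the singularity.

```python
import cmath
from math import pi

r = 3 - 4j   # an arbitrary target value
for k in (1, 10, 100):
    # e^{1/z} = r whenever 1/z = log(r) + 2*pi*i*k
    z = 1 / (cmath.log(r) + 2j * pi * k)
    print(abs(z), cmath.exp(1 / z))   # |z| shrinks while the value stays at r
```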

Now we reach one of my favorite results in analysis. Many years after the Casorati-Weierstrass Theorem, the French mathematician Émile Picard stated and proved that not only does $f(z)$ come arbitrarily close to any number, it actually attains every value! (With the exception of at most a single number.) This result is even deeper than Cauchy's theorem and has a more complicated proof. Most books on complex analysis avoid this theorem altogether. John Conway's book on complex analysis gives a proof at the very end of his first volume, in case the reader is interested. The result is known as Picard's Big Theorem.

Theorem 15: Let $f(z)$ be analytic on the punctured disk $0<|z|<R$ (for $R>0$) and let $0$ be an essential singularity. Then for any $r\in \mathbb{C}$, with the exception of at most one number, there exists $z_0$ such that $f(z_0) = r$.

We finally know enough complex analysis to start computing contour integrals using infinite series instead of how it was done previously using the definition of a contour integral. And this is what makes it possible to evaluate certain real integrals with the aid of complex analysis. That is the next lecture.
• Jan 19th 2008, 08:27 PM
ThePerfectHacker
We discussed how to compute contour integrals. These computations required being able to find anti-derivatives, as in ordinary Calculus. But all of this is about to change. Cauchy's closed curve theorem says that if $f(z)$ is analytic on a disk and $\Gamma$ is a contour lying wholly within the disk, then the round integral of $f(z)$ over $\Gamma$ is zero. Thus, if the function is analytic there is no need to compute the integral directly via the definition, because the theorem tells us it is zero. However, if the function is not analytic on the whole disk then the integral might not be zero. Consider Theorem 5: it tells us that the integral of $f(z) = z^{-1}$ over $|z|=1$ is $2\pi i$. This does not in any way contradict Cauchy's theorem, because $f(z)$ is not analytic on a disk containing this curve, say $|z|<2$; at $z=0$ we have a problem. This is where our discussion of Laurent series and singularities comes in: if there are no singularities of the function then Cauchy's theorem tells us the integral vanishes; otherwise we need to deal with the singularities, and that is where things get interesting.

It turns out that to do contour integrals we do not need to know much about anti-derivatives at all! What we need to know is Theorem 5. We will show how everything can be simplified to that important integral. But first we will greatly strengthen Theorem 5: instead of integrating over circles, we will show the result still holds when we integrate over any contour.

Theorem 16: Suppose $\Gamma$ is any contour which contains $0$ within it, then $\oint_{\Gamma} \frac{dz}{z} = 2\pi i$.

Proof: We shall use Theorem 9, Cauchy's theorem, to prove this marvelous result. Since $\Gamma$ is a contour we can pick a radius $R>0$ large enough so that the open disk $|z|<R$ contains $\Gamma$. See the picture below. We are working inside this big green circle. $\Gamma$ is the black contour containing the red origin. Now since $0$ lies wholly within the contour we can draw a small blue circle $|z|=r$ that is wholly contained within $\Gamma$. We connect the circle $|z|=r$ and the contour $\Gamma$ with two vertical brown lines at $A,B,C,D$. The arrows on $\Gamma$ and on $|z|=r$ show the counterclockwise orientation. Now the brown lines divide $\Gamma$ into two pieces, the upper curve and the lower curve; likewise $|z|=r$ gets divided similarly. Let $\Gamma_u$ be the upper piece and $\Gamma_l$ be the lower piece. Let $C_u$ be the upper piece of the blue circle and $C_l$ be the lower piece of the blue circle. Consider the following contour: start at $A$, travel to $B$, travel along the blue circle to $C$ (note we are now going against the arrows), travel to $D$, and return back to $A$ along $\Gamma$ (note we are now going with the arrows). This newly formed contour satisfies Cauchy's theorem because $z^{-1}$ fails to be analytic only at $z=0$ and we stay away from $0$, and so:
$\int_{[A,B]}\frac{dz}{z} - \int_{C_u} \frac{dz}{z} + \int_{[C,D]} \frac{dz}{z} + \int_{\Gamma_u} \frac{dz}{z} = 0$.
Note $C_u$ is negative because we travelled against the arrows, and $[A,B]$ represent the line segment joining those two point.
Now we form a second contour: start at $A$ travel to $D$ along $\Gamma_l$ (and note we are going with the arrows) travel to $C$ travel to $B$ along $C_l$ (and note we are going against the arrows) travel back to $A$. This too satisfies Cauchy's theorem and we get:
$\int_{\Gamma_l}\frac{dz}{z} - \int_{[C,D]} \frac{dz}{z} - \int_{C_l}\frac{dz}{z} - \int_{[A,B]} \frac{dz}{z} = 0$.
Note, instead of writing $[B,A]$ we wrote $[A,B]$ because it is the same integral except with a negative sign.
Now add these equations, notice that $\Gamma_l$ and $\Gamma_u$ combine into $\Gamma$ while similarly $C_l$ and $C_u$ combine into $|z|=r$. This means,
$\oint_{\Gamma} \frac{dz}{z} - \oint_{|z|=r}\frac{dz}{z} = 0$.
But $\oint_{|z|=r}\frac{dz}{z} = 2\pi i$ by Theorem 5. Thus, we have finally proved that $\oint_{\Gamma} \frac{dz}{z} = 2\pi i$. (I have to apologize: I used the stronger version of Cauchy's theorem here, namely that the round integral of an analytic function over a contour within an open simply connected set vanishes. Here I used the upper half-disk $\{ |z| < R : \Im z > 0 \}$ as the simply connected set for the first contour, and the lower half-disk for the second.)
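Theorem 16 can also be checked numerically on a contour that is not a circle. This sketch (mine, not part of the original post) integrates $1/z$ over the square with corners $\pm 1 \pm i$, traversed counterclockwise, using the midpoint rule on each side.

```python
from math import pi

def side_integral(a, b, n=20000):
    """Midpoint-rule approximation of the integral of 1/z along the
    straight segment from a to b."""
    h = 1.0 / n
    total = 0j
    for k in range(n):
        z = a + (k + 0.5) * h * (b - a)   # midpoint of the k-th subsegment
        total += (1 / z) * (b - a) * h    # f(z) dz with dz = (b - a) dt
    return total

# Counterclockwise square winding once around the origin.
corners = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j]
integral = sum(side_integral(corners[i], corners[i + 1]) for i in range(4))
print(integral)   # ≈ 2*pi*i, just as for the circle
```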

Theorem 17: Suppose $\Gamma$ is any contour that contains $0$ within it, and if $n\not = -1$ is an integer then $\oint_{\Gamma}z^n dz = 0$.

Proof: We proceed with Corollary 2; however, we will use a slightly stronger version. In the Corollary we used an open disk, but it still works on an open punctured disk (in fact, it works on any open set). Let $F(z) = (n+1)^{-1} z^{n+1}$; then $F'(z) = z^n$ for all points $z\not = 0$. Thus $z^n$ has an anti-derivative and so the integral vanishes.

We can combine Theorem 16 and Theorem 17 into a single theorem, and in fact it works around any point, there is nothing special about the origin. If we simply shift over the contour to a new center $c$ then we have the following result.

Corollary 5: Let $\Gamma$ be any contour containing $c$. Then $\oint_{\Gamma} (z-c)^n dz = 0$, unless $n=-1$, in which case the integral is $2\pi i$.

The most basic functions are $e^z,\sin z,\cos z, \sinh z,\cosh z$ and polynomials. Using these functions we can create many new functions by addition, subtraction, multiplication, division, and composition. We just need to watch out for division, because we can get a zero denominator and thus introduce singularities into the function; that is where our discussion of singularities and poles comes in. But such a function is not necessarily so badly behaved, because it is still differentiable everywhere there is no pole. This class of functions is called meromorphic functions, and all the examples we deal with shall be meromorphic functions.

Definition 12: A complex-valued function $f(z)$ is said to be meromorphic on an open disk $|z| < R$ (for $R>0$) when it is analytic everywhere except at its poles (or removable singularities).

Example 29: The function $f(z) = \frac{e^z}{z}$ is meromorphic on $|z|<1$ because the only singularity is at $z=0$, while at any other point it is differentiable. Likewise, the function $f(z) = \frac{\sin z}{z}$ is meromorphic on this disk because $z=0$ is a singularity (actually a removable singularity) and it is differentiable everywhere else.

Here is a key result which shows that in contour integration all we need to know is Corollary 5.

Theorem 18: Let $f(z)$ be a complex-valued function analytic on the punctured disk $0<|z-c|<R$ (for $R>0$). If $\Gamma$ is any contour lying within this punctured disk and containing the center $c$, then $\oint_{\Gamma} f(z) dz = 2\pi i \cdot a_{-1}$, where $a_{-1}$ is the coefficient of $(z-c)^{-1}$ in the Laurent expansion.

Proof: By Corollary 4 we can write $f(z) = \sum_{n=-\infty}^{\infty} a_n (z-c)^n$ for all $0<|z-c|<R$. Thus, $\oint_{\Gamma}f(z) dz = \oint_{\Gamma} \sum_{n=-\infty}^{\infty} a_n(z-c)^n dz$. (It can be shown that we can interchange the order of integration with the order of summation; in advanced calculus this is based on a notion called uniform convergence. We will not go into uniform convergence because we will not need it, but the reader can accept that it is always permissible to do this when dealing with power series; it does not work in general for non-power-series summations.) Pass the integral inside and integrate term-by-term: $\sum_{n=-\infty}^{\infty} \oint_{\Gamma} a_n (z-c)^n dz$. By Corollary 5 everything vanishes except $n=-1$, in which case we get $\oint_{\Gamma} a_{-1} (z-c)^{-1} dz = 2\pi i \cdot a_{-1}$.

Example 30: Let $f(z) = e^{1/z}$ and let $\Gamma$ be the circle $|z|=1$. Then by Theorem 18 the only thing we need to do is find the $a_{-1}$ coefficient when we expand $f(z)$ about $0$; that coefficient is $1$. This means $\oint_{|z|=1} e^{1/z} dz = 2\pi i$. In fact, it does not matter what $\Gamma$ is: it can be any contour that contains $0$ and the integral shall still be $2\pi i$.
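Example 30 invites a numerical check (my own sketch): approximate $\oint_{|z|=1} e^{1/z}\,dz$ with a Riemann sum and compare with $2\pi i$.

```python
import cmath
from math import pi

n = 20000
dt = 2 * pi / n
total = 0j
for k in range(n):
    z = cmath.exp(1j * k * dt)                # point on |z| = 1
    total += cmath.exp(1 / z) * 1j * z * dt   # f(z) dz with dz = i z dt
print(total)       # ≈ 2*pi*i
print(2j * pi)
```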

The above theorem and example illustrate that we need not know how to find the anti-derivative; all we need is the $a_{-1}$ coefficient. This coefficient is special and is called the residue.

Definition 13: Let $f(z)$ be a complex-valued function which is analytic on the punctured disk $0<|z-c|<R$ (for $R>0$). We call the $a_{-1}$ coefficient of the Laurent expansion the residue of $f(z)$ at $c$. We will denote it by $\mbox{Res}_{z=c}f(z)$.

Theorem 18 can be greatly strengthened. The next theorem is the ultimate result we wished to reach, called The Residue Theorem.

Theorem 19: Let $f(z)$ be a meromorphic function on the disk $|z| < R$ (with $R>0$) having finitely many singularities $c_1,c_2,...,c_n$. Let $\Gamma$ be a contour which contains within it all these singularities then $\oint_{\Gamma} f(z) dz = 2\pi i \sum_{k=1}^n \mbox{Res}_{z=c_k}f(z)$.

Proof: The proof is surprisingly simple, because the hard part was Corollary 5, which does all the work. We proceed by induction on the number of singularities. If $n=1$ then that is Theorem 18, which we already proved. Suppose it is true for $n$ points; we will prove it is true for $n+1$ points. Look at the picture below. We have $n=4$ here (red points) and we want to prove it for $n=5$. The fifth point is the blue point. We draw a green curve segment that partitions the contour $\Gamma$ into the left contour $\Gamma_1$ and the right contour $\Gamma_2$. Note that $\oint_{\Gamma_1}f(z) dz + \oint_{\Gamma_2}f(z) dz = \oint_{\Gamma} f(z)dz$, because in $\Gamma_1$ the green segment is taken in the opposite direction than in $\Gamma_2$ (contours are taken counterclockwise). But the integral around $\Gamma_1$ is $2\pi i \,\mbox{Res}_{z=c_1}f(z)$ and the integral around $\Gamma_2$ is $2\pi i\sum_{k=2}^{n+1} \mbox{Res}_{z=c_k}f(z)$; add them to get $\oint_{\Gamma}f(z)dz = 2\pi i \sum_{k=1}^{n+1} \mbox{Res}_{z=c_k}f(z)$.

Example 31: We will illustrate the Residue Theorem. Let $f(z) = \frac{1}{z^2+1}$, and let $\Gamma$ be the upper semi-circle from $-2$ to $2$, traversed counterclockwise back to $-2$. The function has singularities at $z=\pm i$. But $z=-i$ is not included within the contour, so $z=i$ is the only singularity we need to worry about. To find the integral we need to compute $\mbox{Res}_{z=i}f(z)$. To do this we need to find the $a_{-1}$ coefficient in the Laurent expansion centered at $z=i$. Note, $\frac{1}{z^2+1} = \frac{1}{z-i}\cdot \frac{1}{z+i} = \frac{1}{z-i} \cdot \frac{1}{2i+(z-i)} = \frac{1}{z-i} \sum_{n=0}^{\infty} \frac{1}{2i} \cdot (-1)^n\left( \frac{z-i}{2i} \right)^n$. Thus $1/(2i)$ is the residue, and therefore the integral is $2\pi i \cdot 1/(2i) = \pi$.
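Here is a numerical confirmation of Example 31 (my own sketch): the contour is split into the diameter $[-2,2]$ and the upper arc of $|z|=2$, each handled with the midpoint rule, and the total comes out to $\pi$.

```python
import cmath
from math import pi

f = lambda z: 1 / (z * z + 1)
n = 50000
total = 0j

# Diameter from -2 to 2 along the real axis.
h = 4.0 / n
for k in range(n):
    x = -2 + (k + 0.5) * h
    total += f(x) * h

# Upper arc from 2 back to -2: z = 2 e^{it}, t in [0, pi], dz = i z dt.
dt = pi / n
for k in range(n):
    z = 2 * cmath.exp(1j * (k + 0.5) * dt)
    total += f(z) * 1j * z * dt

print(total)   # ≈ pi
```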

Note that using Laurent series can get complicated, even for an example as simple as Example 31. In the next lecture we will show how to find residues using methods other than Laurent series.
• May 24th 2008, 05:41 PM
ThePerfectHacker
We will develop more methods for computing residues rather than using Laurent series.

Theorem 20: If $f(z)$ has a pole of order 1 at $c$ then $\mbox{Res}_{z=c}f(z) = \lim_{z\to c}(z-c)f(z)$.

Proof: Since the pole has order $1$ we can write $f(z) = a_{-1}(z-c)^{-1}+a_0 + a_1(z-c)+...$. Multiply by $z-c$ to get $(z-c)f(z) = a_{-1}+a_0(z-c)+...$; take the limit $z\to c$ and we get $\lim_{z\to c}(z-c)f(z) = a_{-1}$, which is the residue.

Example 32: Consider $f(z) = \frac{1}{\sin z}$; it has a singularity at $z=0$. This singularity is a pole of order 1 because $\lim_{z\to 0}\frac{z}{\sin z} = 1$ exists, and furthermore $\mbox{Res}_{z=0}f(z) = 1$.

This theorem generalizes as follows.

Theorem 21: If $f(z)$ has a pole of order $k$ at $c$ then $\mbox{Res}_{z=c}f(z) = \lim_{z\to c}\frac{1}{(k-1)!} \cdot \left[ (z-c)^kf(z) \right]^{(k-1)}$, where the raised exponent $(k-1)$ represents differenciation $k-1$ times repeatedly.

Proof: If $f(z)$ has a pole of order $k$ at $c$, it means we can write $f(z) = a_{-k}(z-c)^{-k} + a_{-k+1}(z-c)^{-k+1}+...+a_{-1}(z-c)^{-1}+a_0+a_1(z-c)+...$. Thus, $(z-c)^kf(z) = a_{-k}+a_{-k+1}(z-c)+...+a_{-1}(z-c)^{k-1} + a_0(z-c)^k+...$. Now differentiate $k-1$ times to get $[(z-c)^kf(z)]^{(k-1)} = (k-1)!a_{-1}+k!a_0(z-c)+...$. Take the limit as $z\to c$ and we get $\lim_{z\to c}[(z-c)^kf(z)]^{(k-1)} = (k-1)! a_{-1}$ (the reason we cannot merely evaluate both sides at $c$ is that the function is not defined at $c$, so we need the limit instead). And that gives us the formula above.

Example 33: Consider $f(z) = \frac{1}{\sin^2 z}$. Again $z=0$ is a singularity. The order of the singularity is $k=2$ because $2$ is the smallest exponent such that $\lim_{z\to 0}z^2f(z)$ exists. Note, to use Theorem 21 we need to identify the order first; in this case the order is $2$. The next step is to form $z^2f(z) = z^2/\sin^2 z$. Compute the derivative $2-1=1$ times, which gives $\frac{2z\sin^2 z - 2z^2\sin z\cos z}{\sin^4 z}$, and as $z\to 0$ we get (the best way to compute the limit is to use Taylor series) $0$, which means the residue is $0$.
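Residues can also be estimated numerically straight from Theorem 18: $\mbox{Res}_{z=0}f = \frac{1}{2\pi i}\oint_{|z|=r} f(z)\,dz$ for a small circle around the singularity. This sketch (mine, not from the text) recovers the residues of Examples 32 and 33.

```python
import cmath
from math import pi

def residue_at_zero(f, r=0.5, n=20000):
    """Estimate Res_{z=0} f(z) as (1/(2*pi*i)) times the integral of f
    over the circle |z| = r."""
    dt = 2 * pi / n
    total = 0j
    for k in range(n):
        z = r * cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt   # dz = i z dt
    return total / (2j * pi)

print(residue_at_zero(lambda z: 1 / cmath.sin(z) ** 2))  # ≈ 0 (Example 33)
print(residue_at_zero(lambda z: 1 / cmath.sin(z)))       # ≈ 1 (Example 32)
```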

Theorem 22: Suppose $f(z) = \frac{g(z)}{h(z)}$ where $g(z)$ and $h(z)$ are analytic functions (around the point $c$) with $h(c) = 0$ (and $h$ non-zero around the point), $g(c) \not = 0$, and $h'(c) \not = 0$. Then the residue at $c$ is $\frac{g(c)}{h'(c)}$.

Proof: It is easy to see that $z=c$ is a singularity, because the denominator goes to $0$ while the numerator goes to a non-zero number as we take the limit. We will use Theorem 20 and claim that $c$ is a simple pole; if we can show that, then the limit is the residue. Now, $\lim_{z\to c} (z-c)\cdot \frac{g(z)}{h(z)} = \lim_{z\to c} \frac{z-c}{h(z) - h(c)} \cdot g(z) = \frac{g(c)}{h'(c)}$ (because (i) $h(c) = 0$, (ii) the limit of $\frac{h(z)-h(c)}{z-c}$ is the derivative, and (iii) the derivative is non-zero, so the limit exists).

Example 34: Consider $f(z) = \frac{e^{iz}}{\sin z}$; at $z=\pi$ we have a singularity. Here $g(z) = e^{iz}$ and the denominator is $h(z) = \sin z$, with $h'(z) = \cos z$, $h'(\pi )\not = 0$ and $g(\pi) = e^{\pi i}\not = 0$. By Theorem 22 the residue is $\frac{e^{\pi i}}{\cos \pi} = \frac{-1}{-1} = 1$.
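Theorem 22 makes this residue a one-line numerical check (a Python sketch, with names of our choosing):

```python
import cmath
import math

# Example 34: f(z) = e^{iz}/sin z at c = π, with g(z) = e^{iz}, h(z) = sin z.
c = math.pi
g_at_c = cmath.exp(1j * c)      # g(c) = e^{iπ} = -1
h_prime_at_c = cmath.cos(c)     # h'(c) = cos π = -1
res = g_at_c / h_prime_at_c     # Theorem 22: residue = g(c)/h'(c)
```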

The above theorems and examples illustrate that one does not need Laurent series to compute residues; there can be easier ways. It often helps to be able to identify the order of the pole: if the pole has order one then the best approach is to use Theorem 20. If not, we should try to identify the order of the pole; once we identify it we can use Theorem 21, but be warned, this can be very computational. However, sometimes instead of using Theorem 21 one might still use Laurent series. Laurent series is not always the best way to go, but sometimes it happens to be a big timesaver.

Example 35: Consider the function $f(z) = \frac{1}{z^2\sin z}$. This function has a triple pole at $0$. Using Theorem 21 is a nightmare: we would need to compute a quotient derivative twice. It turns out that Laurent series is the way to go. Note, $\frac{1}{z^2} \cdot \frac{1}{\sin z} = \frac{1}{z^2} \cdot \frac{1}{z - \frac{z^3}{3!}+...}$. Now perform long division on the infinite series. We get $\frac{1}{z^2}\cdot \left( \frac{1}{z}+\frac{z}{3!}+... \right)$ and so $\mbox{Res}_{z=0}f(z) = \frac{1}{6}$.
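Although carrying out Theorem 21's two differentiations by hand is unpleasant here, doing them numerically is easy and gives an independent check on the Laurent-series answer $\frac{1}{6}$ (a Python sketch; the finite-difference step `h` is an arbitrary choice of ours):

```python
import math

# Theorem 21 with k = 3 for f(z) = 1/(z^2 sin z):
# Res = (1/2!) · lim_{z→0} d²/dz² [ z^3 f(z) ] = (1/2) · d²/dz² [ z/sin z ] at 0.
def g(x):
    # z^3 f(z) = z/sin z, extended by its limit value 1 at z = 0
    return x / math.sin(x) if x != 0 else 1.0

h = 1e-3
second_derivative = (g(h) - 2 * g(0) + g(-h)) / h ** 2   # central difference
res = second_derivative / 2                              # divide by (k-1)! = 2!
```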

The final comment to remember: if the function has an essential singularity then none of those theorems will work; Laurent series is the only way. With these theorems we can try evaluating some contour integrals.

Example 36: Let $R>1$ and let $\Gamma$ be the closed contour consisting of the segment from $-R$ to $R$ along the real axis followed by the upper semi-circle of radius $R$ back to $-R$, traversed counterclockwise (so we have a semi-circle). We will compute the integral $\oint_{\Gamma}\frac{dz}{z^4+1}$.
The first step is to identify the poles: they are the solutions of $z^4+1=0$ with $\Im (z) > 0$ (because we are working in the upper half-plane and the radius of the circle is large enough to contain the poles). Those solutions are $z_0=e^{i\pi/4}$ and $z_1=e^{3\pi i/4}$. To compute the residues we use Theorem 22.
Thus, $\mbox{Res}_{z=z_0}f(z) = \frac{1}{4e^{3\pi i/4}} = \frac{1}{4}e^{-3\pi i/4} = -\frac{1}{4\sqrt{2}} - \frac{1}{4\sqrt{2}}i$. And $\mbox{Res}_{z=z_1} f(z) = \frac{1}{4e^{9\pi i/4}} = \frac{1}{4e^{\pi i/4}} = \frac{1}{4}e^{-i\pi/4} = \frac{1}{4\sqrt{2}} - \frac{1}{4\sqrt{2}}i$. By the residue theorem we add up those residues and multiply by $2\pi i$. Thus, the final answer is $\frac{\pi}{\sqrt{2}}$.
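The arithmetic of this example can be double-checked numerically using Theorem 22's formula (here $g=1$, $h=z^4+1$, so the residue at each simple root is $\frac{1}{4z^3}$). A Python sketch with our own variable names:

```python
import cmath
import math

# Upper half-plane roots of z^4 + 1 = 0
z0 = cmath.exp(1j * math.pi / 4)
z1 = cmath.exp(3j * math.pi / 4)

# Theorem 22 with g = 1, h = z^4 + 1: residue = 1/(4 z^3) at each simple root
residues = 1 / (4 * z0 ** 3) + 1 / (4 * z1 ** 3)
value = 2j * math.pi * residues     # residue theorem: 2πi · (sum of residues)
```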

Example 37: Let $\Gamma$ be a contour which contains $\pm i$ inside of it. We will compute $\oint_{\Gamma}\frac{\cos (\pi z)}{z^2+1} dz$. Again, the way to approach this problem is to identify the poles: they are $\pm i$. Since the poles have order one we will use Theorem 20. Thus, $\mbox{Res}_{z=i}f(z) = \lim_{z\to i}(z-i)\cdot \frac{\cos (\pi z)}{(z+i)(z-i)} = \frac{\cos (\pi i)}{2i} = \frac{1}{2i}\cosh (\pi)$.
Similarly, $\mbox{Res}_{z=-i} f(z) = \lim_{z\to -i}(z+i)\cdot \frac{ \cos (\pi z)}{(z+i)(z-i)} = \frac{\cos (-\pi i)}{-2i} = -\frac{1}{2i}\cosh (\pi)$. We add up the residues and get zero, so this integral is zero. A function with no poles inside the contour always gives a zero integral, but the converse fails: as this example shows, a zero integral does not mean that the function is pole-free.
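The zero answer can be confirmed by numerically integrating around a concrete choice of $\Gamma$, say the circle $|z|=2$, which contains both poles (a Python sketch; the helper name is ours):

```python
import cmath
import math

def circle_integral(f, c, r, n=4000):
    # ∮ f(z) dz over |z - c| = r via the trapezoid rule on z(θ) = c + r e^{iθ}
    total = 0j
    for k in range(n):
        theta = 2 * math.pi * k / n
        z = c + r * cmath.exp(1j * theta)
        total += f(z) * 1j * r * cmath.exp(1j * theta)   # f(z) · z'(θ)
    return total * 2 * math.pi / n

# Example 37 with Γ = circle of radius 2 about 0 (contains both poles ±i)
value = circle_integral(lambda z: cmath.cos(math.pi * z) / (z * z + 1), 0.0, 2.0)
```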

We are finally in the position to see how contour integrals can help solve real integrals! That is the next lecture.
• May 25th 2008, 01:30 PM
ThePerfectHacker
The method of contour integration about to be presented is quite general when it comes to computing certain integrals and series. While some integration tricks, such as expressing an integral as a double integral or introducing an infinite series, are important too, the contour integration approach is more general; however, it is not foolproof the way those other methods are. One should learn how to combine all known integration approaches, and when each one works, for best results.

The remaining part of this tutorial will be devoted to computing real integrals and infinite series. It will be divided into many parts; in each part of the tutorial a new idea and concept will be explored. This lecture will be on a simple contour which is very effective when used properly. We will give it a name: the semi-circle contour. See below.

Before going into calculations we will state a simple observation: if $|f(x)| \to 0$ then $f(x)\to 0$. Sometimes, when we want to prove that a function approaches zero, it is easier to prove that its absolute value approaches zero. We will use this simple fact constantly.

The second observation is the issue of convergence. The tutorial does not assume the reader has studied basic analysis. The reader probably knows that Taylor series (or infinite series) can be used to approximate a function. However, in what we do we will not just say that the integral approximates a value; we will actually prove that the approximation works. For example, we will show $1 - \frac{1}{3}+\frac{1}{5}- ... = \frac{\pi}{4}$. Let $f(x) = \frac{1}{x^2+1}$. Now $1-x^2+x^4-...+(-1)^nx^{2n} = \frac{1}{x^2+1} - \frac{(-1)^{n+1}x^{2n+2}}{x^2+1}$ for all $x$ by simply using geometric series. Integrate both sides from $0$ to $1$ and we get: $\sum_{k=0}^n \frac{(-1)^k}{2k+1} = \frac{\pi}{4} - \int_0^1 \frac{(-1)^{n+1} x^{2n+2}}{x^2+1}dx$. Thus, $\left| \sum_{k=0}^n \frac{(-1)^k}{2k+1} - \frac{\pi}{4} \right| = \left| \int_0^1 \frac{x^{2n+2}}{x^2+1} dx \right| = \int_0^1 \frac{x^{2n+2}}{x^2+1}dx$ (the negative signs all go away because of the absolute value). Look at this equation. If for large values of $n$ the integral on the right hand side is really small, then it is telling us that the finite sum $\sum_{k=0}^{n}\frac{(-1)^k}{2k+1}$ is really close to $\frac{\pi}{4}$. We just have to show the integral gets smaller and smaller as $n\to \infty$. The problem is we cannot compute it in a pleasant way. But that is no problem, we can put a bound on it. Note, $\frac{x^{2n+2}}{x^2+1} \leq x^{2n+2}$ for all $0\leq x\leq 1$. Thus, $\int_0^1 \frac{x^{2n+2}}{x^2+1} dx \leq \int_0^1 x^{2n+2}dx = \frac{1}{2n+3}$. Thus, $\left| \sum_{k=0}^n \frac{(-1)^k}{2k+1} - \frac{\pi}{4} \right| \leq \frac{1}{2n+3}$. But $\frac{1}{2n+3} \to 0$ as $n$ gets larger. Thus, $\left| \sum_{k=0}^n \frac{(-1)^k}{2k+1} - \frac{\pi}{4} \right| \to 0$. Using the first observation it tells us (removing the absolute value) that $\sum_{k=0}^n \frac{(-1)^k}{2k+1} - \frac{\pi}{4} \to 0$ as $n\to \infty$.
Thus, this means $\lim ~ \sum_{k=0}^n \frac{(-1)^k}{2k+1} = \frac{\pi}{4}$, which finally means $\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1} = \frac{\pi}{4}$. (Whew)

The integral which we placed the bound on is known as the remainder term because it was telling us how accurate the finite sum was to $\frac{\pi}{4}$. We will be doing this type of argument many times, so we should get comfortable with it.
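This remainder argument is easy to test numerically: the distance from the $n$-th partial sum to $\frac{\pi}{4}$ should never exceed $\frac{1}{2n+3}$. A Python sketch (the function name is ours):

```python
import math

def leibniz_partial(n):
    # partial sum 1 - 1/3 + 1/5 - ... ± 1/(2n+1)
    return sum((-1) ** k / (2 * k + 1) for k in range(n + 1))

# the remainder bound |S_n - π/4| ≤ 1/(2n+3) derived in the text
bounds_hold = all(
    abs(leibniz_partial(n) - math.pi / 4) <= 1 / (2 * n + 3)
    for n in (0, 1, 10, 100, 1000)
)
```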

The third observation is about integrals of the form $\int_{-\infty}^{\infty}f(x)dx$. We state right now that in general $\lim_{R\to \infty}\int_{-R}^R f(x) dx \not = \int_{-\infty}^{\infty} f(x)dx$. For example, $\lim_{R\to \infty} \int_{-R}^R \sin x \, dx = 0$ but $\int_{-\infty}^{\infty} \sin x \, dx$ does not exist. However, if the integral converges then the two notions agree: if $\int_{-\infty}^{\infty} f(x) dx$ exists, then $\lim_{R\to \infty}\int_{-R}^R f(x) dx = \int_{-\infty}^{\infty} f(x) dx$.

Example 38: We will compute the integral $\int_{-\infty}^{\infty} \frac{dx}{x^2+1}$.
Let $f(z) = \frac{1}{z^2+1}$. And let $\Gamma$ be the semi-circle contour (see below) with $R>1$.
Next we calculate $\oint_{\Gamma}\frac{dz}{z^2+1}$. Whenever we do these types of problems we will calculate the integral in two ways: (i) by using the residue theorem, (ii) by using the definition of contour integration. Using the residue theorem, there is just one pole inside the contour, at $i$, and the residue there is $\lim_{z\to i}(z-i)\cdot \frac{1}{(z+i)(z-i)} = \frac{1}{2i}$. Thus, by the residue theorem the integral is $2\pi i \cdot \frac{1}{2i} = \pi$. The second way we calculate this is by definition. First we break up the contour $\Gamma$ into $[-R,R]$ (the line from $-R$ to $R$) and $\sigma$, the semi-circle path.
Thus, (by Definition 10) we have:
$\int_{\Gamma}f(z) dz = \int_{[-R,R]}f(z)dz + \int_{\sigma}f(z)dz$.
To compute $\int_{[-R,R]} f(z) dz$ we choose a parametrization, say $g(t) = t$ for $-R\leq t\leq R$, and we get $\int_{[-R,R]}f(z) dz = \int_{-R}^R f(x) dx = \int_{-R}^R \frac{dx}{x^2+1}$.
To compute $\int_{\sigma}f(z)dz$ we choose a parametrization, say $h(\theta) = Re^{i\theta}$ for $0\leq \theta \leq \pi$ and we get $\int_{\sigma}f(z) dz = \int_0^{\pi} f(h(\theta))h'(\theta) d\theta = \int_0^{\pi} \frac{Rie^{i\theta}}{R^2 e^{2i\theta} + 1} d\theta$.
Now comparing results by residue theorem we get,
$\pi = \int_{-R}^R \frac{dx}{x^2+1} + \int_0^{\pi} \frac{Rie^{i\theta}}{R^2 e^{2i\theta} + 1} d\theta \implies \left| \int_{-R}^R \frac{dx}{x^2+1} - \pi \right| = \left| \int_0^{\pi} \frac{Rie^{i\theta}}{R^2e^{2i\theta}+1}d\theta \right|$.
Our last step we will show the remainder term (integral on RHS) will go to zero as $R\to \infty$. Then that will tell us that $\lim_{R\to \infty} \int_{-R}^R \frac{dx}{x^2+1} = \pi$.
To show this integral goes to zero it is sufficient to show its absolute value goes to zero (by first observation). We need a way to approximate this integral, because computing it is not pleasant. We use Theorem 10. This means,
$\left| \int_0^{\pi} \frac{iRe^{i\theta}}{R^2e^{2i\theta} + 1}d\theta \right|\leq \int_0^{\pi} \left| \frac{iRe^{i\theta}}{R^2e^{2i\theta}+1}\right| d\theta = \int_0^{\pi} \frac{R}{|R^2e^{2i\theta}+1|}d\theta$.
This integral looks better but still not that pleasant to compute.
We will use a nice easy inequality: $||z_1|-|z_2||\leq |z_1-z_2|$ (for all complex $z_1,z_2$).
The denominator is $|R^2 e^{2i \theta }+1| = |R^2 e^{2i \theta} - (-1)| \geq ||R^2e^{2i\theta}| - |1|| = |R^2 - 1| = R^2 - 1$ (since $R>1$).
This means, $\frac{R}{|R^2 e^{2i\theta} + 1|} \leq \frac{R}{R^2-1}$ because the denominator on RHS is smaller, so we are dividing by a smaller number!
Finally, $\int_0^{\pi} \frac{R}{|R^2e^{2i\theta} + 1|}d\theta \leq \int_0^{\pi} \frac{R}{R^2-1} d\theta = \frac{\pi R}{R^2-1}$.
After all of that this tells us that,
$\left| \int_{-R}^R \frac{dx}{x^2+1} - \pi \right| \leq \frac{\pi R}{R^2-1}$.
But $\frac{\pi R}{R^2 - 1}\to 0$ as $R\to \infty$.
And thus, $\lim_{R\to \infty} \int_{-R}^R \frac{dx}{x^2+1} = \pi$.
Moreover, the integral $\int_{-\infty}^{\infty} \frac{dx}{x^2+1}$ converges, and so by the third observation $\int_{-\infty}^{\infty} \frac{dx}{x^2+1} = \pi$.
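The whole computation can be replayed numerically for one fixed $R$: the closed contour integral should equal $\pi$ for any $R>1$, and the arc piece should obey the bound $\frac{\pi R}{R^2-1}$. A Python sketch, with helper names of our own:

```python
import cmath
import math

def trapezoid(g, a, b, n):
    # composite trapezoid rule for a (possibly complex-valued) g on [a, b]
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        total += g(a + k * h)
    return total * h

f = lambda z: 1 / (z * z + 1)
R = 10.0

segment = trapezoid(f, -R, R, 20000)                       # along [-R, R]
arc = trapezoid(
    lambda t: f(R * cmath.exp(1j * t)) * 1j * R * cmath.exp(1j * t),
    0.0, math.pi, 4000,
)                                                          # along the semi-circle σ
closed = segment + arc                                     # should be 2πi · (1/2i) = π
```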

This was of course an easy integral, and there is no need to do all of what we did to compute it. But we did an easy example to illustrate how the method works. We will use the semi-circle contour in the next lecture with more interesting integrals.
• May 26th 2008, 02:45 PM
ThePerfectHacker
Example 38b: We will compute the integral $\int_0^{\infty}\frac{dx}{x^4+1}$.
This is very similar to Example 38, so we will not include as much detail.
Define $f(z) = \frac{1}{z^4+1}$ and let $\Gamma$ be the semi-circle contour.
By the residue theorem, see Example 36, we get:
$\oint_{\Gamma}\frac{dz}{z^4+1} = \frac{\pi}{\sqrt{2}}$.
By definition of contour integration we get:
$\oint_{ \Gamma } \frac{ dz }{z^4+1} = \int_{-R}^R \frac{dx}{x^4+1} + \int_0^{ \pi } \frac{ R i e^{ i\theta } }{ R^4e^{4i\theta}+1 }d\theta$.
Thus,
$\left| \int_{-R}^R \frac{dx}{x^4+1} - \frac{\pi}{\sqrt{2}}\right| = \left| \int_0^{\pi} \frac{R i e^{i\theta}}{R^4 e^{4i\theta}+1} d\theta \right|$.
We need to show,
$\left| \int_0^{\pi} \frac{Ri e^{i\theta}}{R^4 e^{4i\theta}+1}d\theta \right| \to 0$
We bound the integral,
$\left| \int_0^{\pi}\frac{Rie^{i\theta}}{R^4e^{4i\theta}+1 } d\theta \right| \leq \int_0^{\pi} \left| \frac{Rie^{i\theta}}{R^4e^{4i\theta}+1} \right| d\theta = \int_0^{\pi} \frac{R}{|R^4e^{4i\theta}+1|} d\theta$.
But, $|R^4e^{4i\theta}+1| = |R^4 e^{4i\theta}-(-1)| \geq \left| |R^4e^{4i\theta}| - |1| \right| = |R^4 - 1| = R^4-1$ (since $R>1$).
Thus,
$\frac{R}{|R^4e^{4i\theta}+1|} \leq \frac{R}{R^4-1} \implies \int_0^{\pi} \frac{R}{|R^4e^{4i\theta}+1|} d\theta \leq \int_0^{\pi}\frac{R}{R^4-1} d\theta = \frac{\pi R}{R^4-1}$.
But, $\frac{\pi R}{R^4-1} \to 0$ as $R\to \infty$.
This means,
$\lim_{R\to \infty}\int_{-R}^R \frac{dx}{x^4+1} = \frac{\pi}{\sqrt{2}}$.
Since, $\int_{-\infty}^{\infty}\frac{dx}{x^4+1}$ converges it means $\int_{-\infty}^{\infty} \frac{dx}{x^4+1} = \frac{\pi}{\sqrt{2}}$.
And thus, $\int_0^{\infty}\frac{dx}{x^4+1} = \frac{1}{2}\int_{-\infty}^{\infty}\frac{dx}{x^4+1} = \frac{\pi}{2\sqrt{2}}$.
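A quick numerical check of the final answer (a Python sketch with our own names; truncating the integral at a cutoff $X$ introduces a tail error of at most $\frac{1}{3X^3}$):

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoid rule for g on [a, b]
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        total += g(a + k * h)
    return total * h

# ∫_0^∞ dx/(x^4+1), truncated at X = 100; the tail is ≤ ∫_X^∞ x^{-4} dx = 1/(3X³)
approx = trapezoid(lambda x: 1 / (x ** 4 + 1), 0.0, 100.0, 20000)
exact = math.pi / (2 * math.sqrt(2))
```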

The above example is not hard at all; the most difficult part is showing that the remainder term goes to zero in the limit. The example also illustrates that to find integrals from $0$ to $\infty$ of even functions we can use a semi-circle contour and take half of the integral from $-\infty$ to $\infty$. The nice feature of the last two examples is that they are very general, meaning we used the same ideas in both. Let us look at another example.

Example 39: We will compute $\int_0^{\infty} \frac{\cos x}{x^2+a^2}dx$ where $a>0$.
Since the function is an even function we will use our typical semi-circle contour and take one half of the integral from $-\infty$ to $\infty$.
You might be tempted to define $f(z) = \frac{\cos z}{z^2+a^2}$. However, this is not a good function, whenever dealing with sines and cosines it is usually better to use $f(z) = \frac{e^{iz}}{z^2+a^2}$.
Let $\Gamma$ be the semi-circle contour with $R>a$, so that it contains the pole $ai$ in the upper half-plane.
Then we have,
$\oint_{\Gamma} \frac{e^{iz}}{z^2+a^2} dz = \int_{-R}^R \frac{e^{ix}}{x^2+a^2} dx + \int_0^{\pi} \frac{e^{i(Re^{i\theta})}iRe^{i\theta}}{R^2 e^{2i\theta} + a^2} d\theta$.
Using residue theorem we get (details omitted),
$\left| \int_{-R}^R \frac{e^{ix}}{x^2+a^2}dx - \frac{\pi}{a}e^{-a} \right| = \left| \int_0^{\pi} \frac{e^{i(Re^{i\theta})}iRe^{i\theta}}{R^2 e^{2i\theta} + a^2} d\theta \right| \leq \int_0^{\pi} \left|\frac{e^{i(Re^{i\theta})}iRe^{i\theta}}{R^2 e^{2i\theta} + a^2}\right|d\theta$.
Note,
$\left| \frac{e^{i(Re^{i\theta})} i Re^{i\theta}}{R^2 e^{2i\theta} + a^2} \right| \leq \frac{R|e^{i(R\cos\theta + iR\sin \theta)}|}{R^2 - a^2} = \frac{Re^{-R\sin \theta}}{R^2 - a^2}$.
Thus it remains to prove,
$\lim_{R\to \infty}\int_0^{\pi} \frac{Re^{-R\sin \theta}}{R^2 - a^2} d\theta = 0$.
This integral is not easy to compute, so we put a bound on it, note $e^{-R\sin \theta} \leq 1$ for $0\leq \theta \leq \pi$, thus,
$\int_0^{\pi} \frac{Re^{-R\sin \theta}}{R^2 - a^2} d\theta \leq \int_0^{\pi} \frac{R}{R^2 - a^2} d\theta = \frac{\pi R}{R^2 - a^2} \to 0$ as $R\to \infty$.
This finally means,
$\int_{-\infty}^{\infty} \frac{e^{ix}}{x^2+a^2} dx = \frac{\pi}{a}e^{-a}$.
Equate real and imaginary parts to get,
$\int_{-\infty}^{\infty} \frac{\cos x}{x^2+a^2}dx = \frac{\pi}{a}e^{-a} \mbox{ and }\int_{-\infty}^{\infty} \frac{\sin x}{x^2+a^2} dx = 0$
The second integral is immediate because its integrand is odd; the first one is the interesting one. Since its integrand is even, halving it gives,
$\int_0^{\infty} \frac{\cos x}{x^2+a^2} dx = \frac{\pi}{2a}e^{-a}$.
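For $a=1$ the answer is $\frac{\pi}{2}e^{-1}\approx 0.5779$, which direct (truncated) quadrature confirms; integrating by parts shows the discarded tail is at most $\frac{2}{X^2+1}$. A Python sketch with our own names:

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoid rule for g on [a, b]
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        total += g(a + k * h)
    return total * h

a = 1.0
X = 400.0   # truncation point; the tail is O(1/X²) after integrating by parts
approx = trapezoid(lambda x: math.cos(x) / (x * x + a * a), 0.0, X, 40000)
exact = (math.pi / (2 * a)) * math.exp(-a)
```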

In Example 39 proving that the remainder term converges to $0$ was straightforward. Sometimes it takes more work: we must bound the integrand not by a constant, as above, but by another function that is easier to integrate. The following example addresses this.

Example 40: We will compute $\int_{-\infty}^{\infty} \frac{x\sin x}{x^2+a^2}dx$ where $a>0$.
We will define $f(z) = \frac{ze^{iz}}{z^2+a^2}$.
The function $f(z)$ has a pole (in upper plane) at $ai$ with $\mbox{Res}_{z=ai}f(z) = \lim_{z\to ai}(z-ai)\cdot \frac{ze^{iz}}{z^2+a^2} = \frac{1}{2}e^{-a}$.
This means, by our typical argument,
$\pi ie^{-a} = \int_{-R}^R \frac{xe^{ix}}{x^2+a^2}dx + \int_0^{\pi} \frac{iR^2 e^{i\theta} e^{i(Re^{i\theta})} }{R^2 e^{2i\theta} + a^2} d\theta$.
We will show,
$\left| \int_0^{\pi} \frac{iR^2 e^{i\theta} e^{i(Re^{i\theta})} }{R^2 e^{2i\theta} + a^2} d\theta \right| \to 0$ as $R\to \infty$.
We proceed as before; this integral is at most,
$\int_0^{\pi} \left| \frac{iR^2 e^{i\theta} e^{i(Re^{i\theta})} }{R^2 e^{2i\theta} + a^2} \right| d\theta \leq \int_0^{\pi}\frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta$.
Here is where it gets interesting. In Example 39 we had almost the same integrand, with an $R$ in the numerator, but here we have an $R^2$ in the numerator. We cannot just bound $e^{-R\sin \theta}$ by $1$ as in the previous example: that would give us (after integrating) $\frac{\pi R^2}{R^2 - a^2} \to \pi \not = 0$. This is a problem, because we want to show the remainder converges to zero, and this crude bound fails to do that. We need to find a better way to estimate (bound) that integral. We simplify the integral first by a little,
$\int_0^{\pi} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta = \int_0^{\pi/2} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2} d\theta + \int_{\pi/2}^{\pi} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta$.
Now use the fact that $\sin \theta = \sin (\pi - \theta)$ and it will follow that,
$\int_0^{\pi/2} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta = \int_{\pi/2}^{\pi} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta$.
Thus,
$\int_0^{\pi} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta = 2\int_0^{\pi/2} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta$.
We have reduced the problem to showing,
$\int_0^{\pi/2} \frac{R^2 e^{-R\sin \theta}}{R^2 - a^2}d\theta \to 0$ as $R\to \infty$.
It would be helpful to draw the sine curve from $[0,\pi/2]$, note the line joining $(0,0)$ and $(\pi/2,1)$ lies below the curve so $\sin \theta \geq \frac{2}{\pi}\theta$ thus $-R\sin \theta \leq -\frac{2R}{\pi}\theta \implies e^{-R\sin \theta} \leq e^{-\frac{2R}{\pi}\theta}$.
This means,
$\int_0^{\pi/2} \frac{R^2e^{-R\sin \theta}}{R^2 - a^2} d\theta \leq \int_0^{\pi/2} \frac{R^2 e^{-(2R/\pi)\theta}}{R^2 - a^2} d\theta = \frac{R^2}{R^2-a^2}\cdot \frac{\pi}{2R}\left( 1 - e^{-R} \right) \leq \frac{\pi R}{2(R^2-a^2)}$.
This definitely converges to $0$ as $R\to \infty$, thus,
$\int_{-\infty}^{\infty} \frac{xe^{ix}}{x^2+a^2} dx = \pi i e^{-a} \implies \int_{-\infty}^{\infty} \frac{x\sin x}{x^2+a^2} dx = \pi e^{-a}$.
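For $a=1$ the claimed value is $\frac{\pi}{e}\approx 1.1557$. Since the integrand $\frac{x\sin x}{x^2+a^2}$ is even, doubling a truncated quadrature over $[0,X]$ gives a check; here the tail only decays like $\frac{1}{X}$, so $X$ must be fairly large. A Python sketch with our own names:

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoid rule for g on [a, b]
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        total += g(a + k * h)
    return total * h

a = 1.0
X = 2000.0   # the tail is O(1/X) after integrating by parts, so X must be large
half = trapezoid(lambda x: x * math.sin(x) / (x * x + a * a), 0.0, X, 200000)
approx = 2 * half                     # the integrand x·sin x/(x²+a²) is even
exact = math.pi * math.exp(-a)
```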

The last two lectures illustrate that the semi-circle contour is a very useful method for attacking some integrals. Getting the answer is really easy with the residue theorem; what takes the most work is proving that the integral in fact equals the residue computation, by showing that the remainder term converges to zero. We talk about this in more detail in the next lecture.
• Jun 1st 2008, 12:12 PM
ThePerfectHacker
Thus far in the previous examples we have shown how to calculate some integrals with semi-circle contours. We had to prove convergence, which was not terribly difficult, but it took time. We may wonder if there is a general result which tells us that the integral on the circle goes to zero. Before stating this result we will need a new notion.

We say $|f(z)| \to 0$ as $|z|\to \infty$ when $|f(z)|$ goes to zero as $|z|$ grows without bound in every direction. In the complex plane there are many ways to approach $\infty$: we can travel along the positive real axis or the negative real axis, or along the imaginary axis. If in all these cases, no matter how we travel, $|f(z)|$ approaches zero, then we say $|f(z)| \to 0$ as $|z|\to \infty$, and we write $\lim_{|z|\to \infty} |f(z)| = 0$. Thus, for example, $\left| \tfrac{1}{z} \right| \to 0$ as $|z|\to \infty$. We will also write $\lim_{|z|\to \infty} |f(z)| = 0$ with $\Im (z) > 0$. This means almost the same as before, but here we restrict our attention to the upper half-plane: traveling to $\infty$ within the upper half-plane makes $|f(z)|\to 0$, while what happens in the lower half-plane we do not care about, because we restricted our attention to the upper half-plane.

Definition 14: Let $\Gamma_R$ be the curve $g(\theta) = Re^{i\theta}$ for $0\leq \theta \leq \pi$.

Theorem 23: Let $f(z)$ be meromorphic in the upper half-plane such that $\lim_{|z|\to \infty} |zf(z)| = 0$ in the upper half-plane. Then $\lim_{R\to \infty} \int_{\Gamma_R} f(z) dz = 0$.

Proof: Since the function is meromorphic in the upper half-plane, once $R$ is sufficiently large $\Gamma_R$ never hits a singularity, and so the integral is defined. By definition, $\int_{\Gamma_R} f(z) dz = \int_0^{\pi} Rie^{i\theta}f(Re^{i\theta}) d\theta$. Let $M_R$ be the largest value of $|zf(z)|$ on $\Gamma_R$; since $|zf(z)|\to 0$ along every path to $\infty$ in the upper half-plane, $M_R \to 0$ as $R\to \infty$. Thus, $\left| \int_{\Gamma_R} f(z) dz \right| \leq \int_0^{\pi} \left| Re^{i\theta} f(Re^{i\theta}) \right| d\theta \leq \pi M_R \to 0$ as $R\to \infty$.

Example 41: Return to Example 38, where we had to show $\int_{\Gamma_R}f(z) dz \to 0$ for $f(z) = \frac{1}{z^2+1}$. Instead of the bounding done in that example we can use Theorem 23 as a quicker approach. We need to show $|zf(z)| \to 0$ as $|z|\to \infty$ with $\Im (z) > 0$. Note, $|zf(z)| = \left| \frac{z}{z^2+1} \right| \leq \frac{|z|}{|z^2+1|} \leq \frac{|z|}{||z|^2-1|}$. Once $|z|$ is large, $||z|^2-1| = |z|^2-1$, and so $\frac{|z|}{||z|^2 - 1|} \leq \frac{|z|}{|z|^2 - 1}$. Since the denominator has larger degree, this tends to zero as $|z|\to \infty$ with $\Im (z) > 0$.

Now look at Example 39. To use Theorem 23 there we need to show $|zf(z)| \to 0$ as $|z|\to \infty$ with $\Im (z) > 0$, where $f(z) = \frac{e^{iz}}{z^2+a^2}$. Note, $|zf(z) |= \left| \frac{ze^{iz}}{z^2+a^2} \right| \leq \frac{|z||e^{iz}|}{||z|^2 - a^2|}$; but if $z=x+iy$ with $y>0$ then $e^{iz} = e^{-y}e^{ix}$, so $|e^{iz}| = |e^{-y}e^{ix}| \leq 1$. Thus $|zf(z)| \leq \frac{|z|}{||z|^2-a^2|}$, and once $|z|$ is large enough, $||z|^2-a^2| = |z|^2-a^2$. So $|zf(z)| \leq \frac{|z|}{|z|^2 - a^2}$, and since the denominator has larger degree than the numerator, $\lim_{|z|\to \infty} |zf(z)| = 0$ with $\Im (z) > 0$. This justifies convergence.

Example 42: Let us examine Example 40. We will show that Theorem 23 fails to apply. Here $f(z) = \frac{ze^{iz}}{z^2+a^2}$, and when we form the product $zf(z)$ we have a $z^2$ in the numerator against a $z^2$ in the denominator, so $|zf(z)|$ does not go to zero. This means Theorem 23 cannot be used to prove that the integral on $\Gamma_R$ tends to $0$ as $R\to \infty$. But just because the theorem does not apply does not mean that the integral fails to tend to zero: it can still converge to zero, the theorem is simply not strong enough to prove it. And indeed it does, as we proved in Example 40. It turns out that we need another theorem.

The next theorem is sometimes referred to as Jordan's lemma.

Theorem 24: Let $f(z)$ be a meromorphic function in the upper half-plane so that $\lim_{|z|\to \infty}|f(z)| = 0$ with $\Im (z) > 0$. Let $a>0$. Then $\lim_{R\to \infty} \int_{\Gamma_R} e^{iaz} f(z) dz = 0$.

Proof: Since $f(z)$ is meromorphic in the upper half-plane, when $R$ is large enough $\Gamma_R$ does not hit any singularities, and so $\int_{\Gamma_R}e^{iaz}f(z) dz$ is defined. This integral, by definition, is $\int_0^{\pi} e^{iaRe^{i\theta}} f(Re^{i\theta}) Rie^{i\theta} d\theta = \int_0^{\pi} e^{-Ra\sin \theta + iRa\cos \theta} f(Re^{i\theta}) Rie^{i\theta} d\theta$.
Now, $\left| \int_0^{\pi} e^{-Ra\sin \theta + iRa\cos \theta} f(Re^{i\theta}) Rie^{i\theta} d\theta \right| \leq \int_0^{\pi} \left| e^{-Ra\sin \theta + iRa\cos \theta} f(Re^{i\theta}) Rie^{i\theta}\right| d\theta =$ $\int_0^{\pi} Re^{-Ra\sin \theta} \left| f(Re^{i\theta}) \right| d\theta$.
Let $M_R$ be the largest value of $|f(Re^{i\theta})|$ for $0\leq \theta \leq \pi$; by hypothesis $M_R \to 0$ as $R\to \infty$.
It is tempting to argue that the integrand goes to zero pointwise, but at $\theta = 0$ and $\theta = \pi$ we have $\sin \theta = 0$, so there $Re^{-Ra\sin \theta} = R$ does not go to zero; we must be more careful. Using $\sin \theta = \sin (\pi - \theta)$ and the inequality $\sin \theta \geq \frac{2}{\pi}\theta$ on $[0,\pi/2]$ (the same trick as in Example 40), $\int_0^{\pi} Re^{-Ra\sin \theta}\, d\theta \leq 2\int_0^{\pi/2} Re^{-(2Ra/\pi)\theta}\, d\theta = \frac{\pi}{a}\left( 1 - e^{-Ra} \right) \leq \frac{\pi}{a}$. Therefore $\left| \int_{\Gamma_R} e^{iaz}f(z)\, dz \right| \leq \frac{\pi}{a} M_R \to 0$ as $R\to \infty$.
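The heart of the matter is that $\int_0^{\pi} Re^{-Ra\sin \theta}\,d\theta$ stays bounded by $\frac{\pi}{a}$ uniformly in $R$ (this follows from $\sin \theta \geq \frac{2}{\pi}\theta$ on $[0,\pi/2]$, as in Example 40), even though the integrand equals $R$ at the endpoints. This is easy to probe numerically (a Python sketch, names ours):

```python
import math

def theta_integral(R, a=1.0, n=100000):
    # trapezoid approximation of ∫_0^π R e^{-Ra sin θ} dθ
    h = math.pi / n
    g = lambda t: R * math.exp(-R * a * math.sin(t))
    total = 0.5 * (g(0.0) + g(math.pi))
    for k in range(1, n):
        total += g(k * h)
    return total * h

a = 1.0
values = [theta_integral(R, a) for R in (1.0, 10.0, 100.0)]   # all should be ≤ π/a
```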

Example 43: We can now complete Example 42. All we need to show is that $\frac{z}{z^2+a^2}\to 0$ as $|z|\to \infty$, and this is true as we have seen in previous examples.

Example 44: We will compute the integral $\int_{-\infty}^{\infty} \frac{x^3\sin x}{(x^2+1)^2} dx$.
Let $\Gamma$ be the semi-circle contour. And define $f(z) = \frac{z^3 e^{iz}}{(z^2+1)^2}$.
This function has a pole of order $2$ at $z=i$. The residue is (details omitted) $\mbox{Res}_{z=i}f(z) = \frac{1}{4e}$.
Using our standard argument involving residue theorem it follows that,
$\int_{-R}^R \frac{x^3e^{ix}}{(x^2+1)^2} dx + \int_{\Gamma_R}\frac{z^3 e^{iz}}{(z^2+1)^2} dz = \frac{\pi i}{2e}$.
As we have seen in many examples before it remains to prove $\lim_{R\to \infty} \int_{\Gamma_R}\frac{z^3e^{iz}}{(z^2+1)^2} dz = 0$.
To show this we simply need to prove $\lim_{|z|\to \infty} \frac{z^3 }{(z^2+1)^2} = 0$ with $\Im (z) > 0$.
This follows because $\left| \frac{z^3}{(z^2+1)^2} \right| \leq \frac{|z|^3}{(|z|^2 - 1)^2}$.
And the denominator degree exceeds the numerator degree, thus,
$\lim_{R\to \infty}\int_{-R}^R \frac{x^3 e^{ix}}{(x^2+1)^2} dx = \frac{\pi i}{2e}$.
Take imaginary parts to get,
$\lim_{R\to \infty} \int_{-R}^R \frac{x^3 \sin x}{(x^2+1)^2} dx = \frac{\pi}{2e}$
Since $\int_{-\infty}^{\infty} \frac{x^3\sin x}{(x^2+1)^2} dx$ is convergent it means,
$\int_{-\infty}^{\infty} \frac{x^3\sin x}{(x^2+1)^2} dx = \frac{\pi}{2e}$.
(Try using Theorem 23, it will fail!)
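The omitted residue computation (a pole of order $2$, so Theorem 21 with one differentiation) can be verified by integrating around a small circle centered at $i$ (a Python sketch; the helper name is ours):

```python
import cmath
import math

def circle_residue(f, c, r=0.5, n=4000):
    # (1/2πi) ∮ f dz over |z - c| = r via the trapezoid rule,
    # which collapses to (r/n) Σ f(z_k) e^{iθ_k}
    total = 0j
    for k in range(n):
        z_dir = cmath.exp(2j * math.pi * k / n)
        total += f(c + r * z_dir) * z_dir
    return total * r / n

f = lambda z: z ** 3 * cmath.exp(1j * z) / (z * z + 1) ** 2
res = circle_residue(f, 1j)   # expect 1/(4e)
```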

Using Theorems 23 and 24 we can compute contour integrals with semi-circle contours without having to justify convergence each time, which saves much time. What Theorem 23 says, roughly, is that if the degree of the denominator exceeds the degree of the numerator by at least $2$, then the integral along $\Gamma_R$ tends to zero. And what Theorem 24 says, roughly, is that if the degree of the denominator exceeds the degree of the numerator by at least $1$ and an exponential factor $e^{iaz}$ (with $a>0$) is present, then that is sufficient for it to tend to zero.
• Nov 2nd 2008, 01:28 AM
Moo
You can delete this post after that if you think it's spoiling your thread.

However, I'd like to know why you don't talk about :
- conditions for a function to be holomorphic
Quote:

Originally Posted by Wikipedia
The term analytic function is often used interchangeably with holomorphic function, although the term analytic is also used in a broader sense of any function (real, complex, or of more general type) that is equal to its Taylor series in a neighborhood of each point in its domain. The fact that the class of analytic functions coincides with the class of holomorphic functions is a major theorem in complex analysis.

that is mostly Cauchy-Riemann equations

- exact and closed forms, which sometimes simplify the stuff you're calculating

- homotopies
• Nov 2nd 2008, 06:31 AM
ThePerfectHacker
Quote:

Originally Posted by Moo
However, I'd like to know why you don't talk about :
- conditions for a function to be holomorphic

I defined what analytic means all the way in the beginning. I did not use those equations because what is the point? All that is necessary here is to demonstrate how contour integration works.

And yes I will delete your post when I will decide to finally continue with it. (Tongueout)