I am excited to start my second tutorial, this one on another subject. My first tutorial was intended for readers with less background in math; this one is intended for more serious students of mathematics. Since more people use math for applied purposes, and since applied material is easier to learn than pure mathematics, I decided to go with the greater good and write something closer to applied math. It is assumed that the reader is well-versed in Multivariable Calculus and has a good understanding of multiple integration and partial differentiation.

This first lecture will be on Change of Variables. I chose this topic because it is usually not studied in a standard Calculus sequence and because I happen to think it is very useful at times. In single variable calculus, if you have an integral of the form $\displaystyle \int_a^b f(g(x)) \cdot g'(x) dx$, then by defining $\displaystyle u=g(x)$ this integral reduces to $\displaystyle \int_{g(a)}^{g(b)} f(u) du$. This is the well-known and useful Substitution Rule. In multivariable calculus we have a similar situation with changing variables. However, things get inevitably more complicated, because we are no longer integrating on a line; we are integrating in a plane (or in space). One thing we need to be able to do is figure out how the region of integration is transformed under a specific Change of Variables. We will adopt the following notation: $\displaystyle R_{xy}$ will stand for the region of integration in the $\displaystyle xy$-plane, and $\displaystyle R_{uv}$ will stand for the region of integration in the $\displaystyle uv$-plane. Note, we are first going to discuss double integrals, and once we have mastered them we can generalize this concept. The idea (which will be explained in much more detail later) is that given a two-variable function $\displaystyle f(x,y)$ we define two new variables $\displaystyle u=u(x,y)$ and $\displaystyle v=v(x,y)$, thereby transforming $\displaystyle f(x,y)$ into $\displaystyle f(u,v)$, which will hopefully be easier to deal with. There is just one thing we need to watch out for: we need to be sure that the transformation $\displaystyle u(x,y) \mbox{ and }v(x,y)$ is invertible on $\displaystyle R_{xy}$. The reason for this will become apparent; we will need to solve for $\displaystyle x,y$ in terms of $\displaystyle u,v$, which means we need to somehow invert the transformation.

The following theorem is due to Karl Jacobi. Its proof is well beyond the scope of this text (in fact, I do not know it); it can be found in most textbooks on Multivariable Analysis. Like all analysis theorems it needs well-behaved hypotheses for it to work, but to keep things simple for you and for myself we will not trouble ourselves with these details.

Change of Variables: Let $\displaystyle f$ be a continuous function on the region $\displaystyle R_{xy}$, and let a transformation be defined by $\displaystyle x=g(u,v) \mbox{ and }y=h(u,v)$ which is one-to-one (invertible). Let $\displaystyle R_{uv}$ be the region that this transformation maps onto $\displaystyle R_{xy}$. If $\displaystyle g,h$ are continuously differentiable on $\displaystyle R_{uv}$, then,

$\displaystyle \iint_{R_{xy}} f(x,y) dA = \iint_{R_{uv}} f[g(u,v),h(u,v)] \cdot \left| \frac{\partial (x,y)}{\partial (u,v)} \right| dA$

Where, $\displaystyle \frac{\partial (x,y)}{\partial (u,v)}$ is called the "Jacobian" and is defined as,

$\displaystyle \left| \begin{array}{cc} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{array} \right| = \frac{\partial x}{\partial u}\cdot \frac{\partial y}{\partial v} - \frac{\partial y}{\partial u} \cdot \frac{\partial x}{\partial v}$.
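As a quick sanity check, this $\displaystyle 2\times 2$ determinant can also be computed symbolically. Here is a minimal sketch using sympy (the helper name `jacobian_det` is my own, not a library function), applied to the transformation $\displaystyle x=\frac{v+u}{2},\ y=\frac{v-u}{2}$:

```python
import sympy as sp

def jacobian_det(x_expr, y_expr, u, v):
    """Determinant of d(x,y)/d(u,v) for x = x(u,v), y = y(u,v)."""
    return sp.simplify(sp.diff(x_expr, u) * sp.diff(y_expr, v)
                       - sp.diff(y_expr, u) * sp.diff(x_expr, v))

u, v = sp.symbols('u v')
# x = (v+u)/2, y = (v-u)/2
print(jacobian_det((v + u) / 2, (v - u) / 2, u, v))  # 1/2
```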

Wow! That looks dangerous, and hard to use. Indeed, it is not so simple, and many steps are required. But with practice it becomes useful, and I hope you will start using it from time to time. The best way to demonstrate the power of this theorem is through an example. Do not forget the absolute value of the Jacobian!

Example 1: Compute $\displaystyle \iint_{R_{xy}} \left( \frac{x-y}{x+y} \right)^3 dA$ where $\displaystyle R_{xy}$ is the triangular region in the first quadrant bounded by the line $\displaystyle x+y=1$. This is a really complicated integral without the use of the Jacobian. But the form of the integrand suggests that we define $\displaystyle u=x-y$ and $\displaystyle v=x+y$. (Note that this transformation is invertible, so we can solve for $\displaystyle x,y$ in terms of $\displaystyle u,v$.) Hence $\displaystyle x = \frac{v+u}{2}$ and $\displaystyle y=\frac{ v-u}{2}$. The next step is to compute the Jacobian: $\displaystyle \left| \begin{array}{cc} x_u & x_v \\ y_u & y_v \end{array} \right| = \left| \begin{array}{cc} 1/2 & 1/2 \\ -1/2 & 1/2 \end{array} \right| = \frac{1}{2}$. Now the trickiest part is to find what the image of $\displaystyle R_{xy}$ is under this transformation. Well, since these are linear functions we would expect the new region to look similar to the old one, i.e. a triangle. Books tend to look at where the vertices get mapped and connect them with lines. I favor a stricter approach, because if the functions are not so nice the new figure will be severely deformed. Look at $\displaystyle R_{xy}$ below; note we can think of this region as the following system of inequalities: $\displaystyle x\geq 0 \mbox{ and }y\geq 0 \mbox{ and }x+y \leq 1$. Substitute our newly defined variables into this system to get $\displaystyle \frac{v+u}{2} \geq 0 \mbox{ and } \frac{v-u}{2} \geq 0 \mbox{ and } v \leq 1$. Thus, $\displaystyle v \geq - u \mbox{ and }v\geq u \mbox{ and }v\leq 1$. The resulting new region $\displaystyle R_{uv}$ is shown below. Hence the integral becomes $\displaystyle \frac{1}{2} \iint_{R_{uv}} \frac{u^3}{v^3} dA = \frac{1}{2} \int_0^1 \int_{-v}^v u^3v^{-3} du\, dv$. Now this is easily computable.
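In fact the transformed integral can be checked symbolically. A quick sketch with sympy (assuming it is available): the inner integral of the odd function $\displaystyle u^3$ over the symmetric interval $\displaystyle [-v,v]$ vanishes, so the whole integral is zero.

```python
import sympy as sp

u, v = sp.symbols('u v')
# inner integral of u^3 / v^3 over the symmetric interval [-v, v]
inner = sp.integrate(u**3 / v**3, (u, -v, v))   # 0, since u^3 is odd
result = sp.Rational(1, 2) * sp.integrate(inner, (v, 0, 1))
print(result)  # 0
```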

That is basically it. I just want to caution that if you get something like $\displaystyle x=v+u^2$ and $\displaystyle y=v^2+u$, then the resulting transformation will deform the shape, because it is not linear. A nice rectangle might end up bounded by parabolic arcs. So the main difficulty is determining the new region $\displaystyle R_{uv}$. If you use the approach above and write the old region as a system of inequalities, then it should work well.

Example 2: Let $\displaystyle R_{xy}$ be described by $\displaystyle 0\leq x \leq 1$ and $\displaystyle 0\leq y\leq 1$, i.e. a square. Then the transformation $\displaystyle x=u+v$ and $\displaystyle y=u^2+v^2$ yields $\displaystyle 0\leq u+v\leq 1 \mbox{ and } 0\leq u^2+v^2 \leq 1$. This strange transformation is shown below.

The generalized Change of Variables formula is similar. Instead of $\displaystyle f(x,y)$ we have $\displaystyle f(x,y,z)$, and the transformation is $\displaystyle x=g(u,v,w)$ and $\displaystyle y=h(u,v,w)$ and $\displaystyle z=k(u,v,w)$. We then need to find $\displaystyle R_{uvw}$, that is, the transformed region in space, which is much more difficult. Furthermore, the Jacobian still follows the same pattern, but it becomes a $\displaystyle 3\times 3$ determinant.
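Explicitly, the three-variable Jacobian is built from the partial derivatives in the same pattern as before:

$\displaystyle \frac{\partial (x,y,z)}{\partial (u,v,w)} = \left| \begin{array}{ccc} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} & \frac{\partial x}{\partial w} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} & \frac{\partial y}{\partial w} \\ \frac{\partial z}{\partial u} & \frac{\partial z}{\partial v} & \frac{\partial z}{\partial w} \end{array} \right|$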

Another Look at Polar Coordinates

The Jacobian provides a rigorous explanation of changing coordinates to polar form. Say we are integrating some function over the unit circle. In polar form we have $\displaystyle 0\leq \theta \leq 2\pi \mbox{ and }0\leq r\leq 1$. But when we substitute into the integral we change $\displaystyle dx \, dy$ to $\displaystyle r\, dr\, d\theta$. Note that a factor of $\displaystyle r$ appears in the expression, and Calculus students are warned not to forget it. But where does it come from? The answer is the polar transformation of the region. If we let $\displaystyle x = r\cos \theta$ and $\displaystyle y=r\sin \theta$ then $\displaystyle R_{\theta r} = \{(\theta,r)|0\leq \theta \leq 2\pi \mbox{ and }0\leq r \leq 1\}$.

That is, we get a rectangle in the $\displaystyle \theta r$-plane. Now let us compute the Jacobian: $\displaystyle \left| \begin{array}{cc} -r\sin \theta & \cos \theta \\ r\cos \theta & \sin \theta \end{array} \right| = -r\sin^2 \theta - r\cos^2 \theta = -r$. But remember, we take the absolute value of the Jacobian, thus, $\displaystyle |-r|=r$. And that is where this factor comes from.
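The same determinant can be checked symbolically; a small sketch with sympy, keeping the same column order as above:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian matrix d(x,y)/d(theta,r), columns ordered (theta, r) as above
J = sp.Matrix([[sp.diff(x, theta), sp.diff(x, r)],
               [sp.diff(y, theta), sp.diff(y, r)]])
print(sp.simplify(J.det()))  # -r, so the absolute Jacobian is r
```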

Similarly, in Spherical Coordinates the Change of Variables is: $\displaystyle x = \rho\cos\theta \sin \phi \mbox{ and }y=\rho \sin \theta \sin \phi\mbox{ and } z=\rho \cos \phi$. Its absolute Jacobian is $\displaystyle \left| \frac{\partial (x,y,z)}{\partial (\rho,\theta,\phi)} \right| = \rho^2 \sin \phi $, but the messy determinant details are omitted. This shows where that factor appears from in a conversion to Spherical Coordinates.
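If you do not want to grind through the $\displaystyle 3\times 3$ determinant by hand, here is a hedged sketch that lets sympy do it (the raw determinant comes out negative; as always we take the absolute value):

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)
x = rho * sp.cos(theta) * sp.sin(phi)
y = rho * sp.sin(theta) * sp.sin(phi)
z = rho * sp.cos(phi)

# Jacobian matrix d(x,y,z)/d(rho,theta,phi)
J = sp.Matrix([x, y, z]).jacobian([rho, theta, phi])
det = sp.simplify(J.det())
print(det)  # -rho**2*sin(phi); its absolute value is rho**2*sin(phi)
```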

Integrating Over an Ellipse

Integrating over a circle centered at the origin is ideal at times if converted to polar form. But what can we do if the region of integration is an ellipse $\displaystyle \frac{x^2}{a^2}+\frac{y^2}{b^2} \leq 1 \mbox{ with }a,b>0$? Can we nicely write this in polar form? The answer is yes (otherwise I would not mention it). Remember, we used the Jacobian to simplify the integrand. But we can use a different tactic: instead of simplifying the integrand, we simplify the region of integration. Define $\displaystyle u=\frac{x}{a} \mbox{ and }v=\frac{y}{b}$. That means $\displaystyle x=au \mbox{ and }y=bv$, and the new region is $\displaystyle u^2+v^2\leq 1$, a unit circle at the origin, excellent! And what about the Jacobian? $\displaystyle \frac{\partial(x,y)}{\partial(u,v)} = \left| \begin{array}{cc} a & 0 \\ 0 & b\end{array} \right| = ab$.
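As a small illustration of this substitution, the area of the ellipse comes out immediately: the Jacobian $\displaystyle ab$ times the polar factor $\displaystyle r$ over the unit disk gives $\displaystyle \pi ab$. A sketch with sympy:

```python
import sympy as sp

a, b, r, theta = sp.symbols('a b r theta', positive=True)
# u = x/a, v = y/b turns the ellipse into the unit disk with Jacobian ab;
# converting to polar coordinates then adds the usual factor r
area = sp.integrate(a * b * r, (r, 0, 1), (theta, 0, 2 * sp.pi))
print(area)  # pi*a*b
```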

Example 3: Consider $\displaystyle \iint_{R_{xy}} x^2+y^2 dA$ where $\displaystyle R_{xy}$ is the ellipse $\displaystyle \frac{x^2}{2^2}+\frac{y^2}{1^2} \leq 1$. Here $\displaystyle a=2$ and $\displaystyle b=1$, so the variable substitution $\displaystyle x = 2u \mbox{ and } y =v$ transforms the integral into $\displaystyle \iint_{R_{uv}} [4u^2+v^2 ]\cdot (2\cdot 1) dA$, which is easier to integrate because we can easily express $\displaystyle R_{uv}$ in polar form, unlike the original region.
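As a sketch, the transformed integral can then be finished in polar form. Note the semi-axes are $\displaystyle a=2,\ b=1$, so the substitution is $\displaystyle x=2u,\ y=v$ with Jacobian $\displaystyle 2$, and over the unit disk the integrand becomes $\displaystyle (4u^2+v^2)\cdot 2$:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
u = r * sp.cos(theta)
v = r * sp.sin(theta)
# integrand (4u^2 + v^2), Jacobian 2 from x = 2u, y = v, polar factor r
val = sp.integrate((4 * u**2 + v**2) * 2 * r, (r, 0, 1), (theta, 0, 2 * sp.pi))
print(sp.simplify(val))  # 5*pi/2
```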

Integrating Over a Shifted Circle

Same idea, except now we have a disk $\displaystyle (x-x_0)^2+(y-y_0)^2 \leq r^2 \mbox{ with }r>0$ as $\displaystyle R_{xy}$. If we define $\displaystyle u=x-x_0 \mbox{ and }v=y-y_0$ then this transforms the region of integration to $\displaystyle u^2+v^2\leq r^2$, which is a disk centered at the origin, i.e. much more pleasant to integrate over. Thus, the Change of Variables is $\displaystyle x = u+x_0 \mbox{ and }y=v+y_0$. Note the Jacobian is, $\displaystyle \frac{\partial (x,y)}{\partial (u,v)} = \left| \begin{array}{cc} 1 & 0 \\ 0& 1 \end{array} \right| = 1$. Hence we can simply perform a parallel coordinate shift without changing anything in the integral, as expected.

Example 4: Consider $\displaystyle \iint_{R_{xy}} f(x,y) dA$ where $\displaystyle R_{xy}$ is the disk $\displaystyle (x-1)^2+(y+1)^2 \leq 4$. The Change of Variables $\displaystyle x = u+1 \mbox{ and }y=v-1$ transforms the integral into $\displaystyle \iint_{R_{uv}} f(u+1,v-1)dA$ where $\displaystyle R_{uv}$ is the disk $\displaystyle u^2+v^2 \leq 4$.

Exercises


1)$\displaystyle R_{xy}=\{|x|\leq 1 \mbox{ and }|y|\leq 1\}$, compute: $\displaystyle \iint_{R_{xy}} (x-y)^5(x+y)^{10} dA$

2)$\displaystyle R_{xy}$ same as Example 1, compute: $\displaystyle \iint_{R_{xy}} (x+y)e^{x^2-y^2}dA$

3)Sometimes it might be convenient to rotate the region of integration. The reader probably knows that the rotation formula by angle $\displaystyle \theta$ is given by:

$\displaystyle \left\{ \begin{array}{c} x=u\cos \theta - v\sin \theta \\ y= u\sin \theta + v\cos \theta \end{array} \right\}$.

Compute the Jacobian.