One of the things that bothered me for a long time was the $\displaystyle \bold{d}x$ appearing at the end of the integral. What is it for? And why was it put there? People told me that it is there to show which variable we are integrating with respect to, but that is usually clear even without it. Behold! I have dreamt a dream and had a revelation! Behold! It was Riemann who told me the answer. Behold! It was the Riemann-Stieltjes integral.

I want to explain what it is; it will make the reason we put a $\displaystyle \bold{d}x$ in the integral a lot clearer.

First let us begin with a simpler question: what is a Riemann integral? If you have taken a basic course in analysis you know there are two ways to define it: the classical definition due to Riemann, which is also discussed a little in a Calculus course, and another (equivalent) one developed by Darboux. Since Riemann's definition is more elementary (though not as neat), let us use it.

Definition: Let $\displaystyle f$ be a bounded function on a closed interval $\displaystyle [a,b]$. We say that $\displaystyle f$ is integrable on this interval when there exists a real number $\displaystyle I$ such that: for any $\displaystyle \epsilon > 0$ there exists a $\displaystyle \delta >0$ so that for any partition $\displaystyle P= \{ a=x_0 < x_1< ... < x_{n-1}<x_n=b \}$ satisfying $\displaystyle \text{mesh}(P) = \max_{1\leq k\leq n} \ \{x_{k} - x_{k-1} \} < \delta$ we have that $\displaystyle \left| I - \sum_{k=1}^n f(t_k)(x_k-x_{k-1}) \right| < \epsilon$, where $\displaystyle t_k$ is any point chosen in the subinterval $\displaystyle [x_{k-1},x_k]$.

Basically the definition says that we can make the finite sums (approximating areas) as close as we want to the true value, which we call $\displaystyle I$*, as long as the partition $\displaystyle P$ of the interval is fine (or thin) enough. And note how much freedom we have: it says for any partition, and there are infinitely many; and it says any point in the subinterval, and again there are infinitely many. So there is a lot of freedom in these finite Riemann sums.

So if $\displaystyle f(x) = x \mbox{ on }[0,1]$, then to show that $\displaystyle \int_0^1 f = \frac{1}{2}$ we need to show that the number $\displaystyle I = \frac{1}{2}$ satisfies the definition given above.
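As a quick numerical sketch of the definition (the function name `riemann_sum` and the choice of a uniform partition with left-endpoint tags are my own for illustration; the definition allows any partition and any tags):

```python
def riemann_sum(f, a, b, n):
    """Riemann sum of f on [a, b] over a uniform partition of n
    subintervals, with each tag t_k taken as the left endpoint."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(n))

# f(x) = x on [0, 1]: the sums approach I = 1/2 as the mesh shrinks.
approx = riemann_sum(lambda x: x, 0.0, 1.0, 10_000)
```

For this $\displaystyle f$ the left sum works out to $\displaystyle \frac{1}{2} - \frac{1}{2n}$, so with $\displaystyle n = 10000$ the approximation is $\displaystyle 0.49995$, already within $\displaystyle 0.00005$ of $\displaystyle \frac{1}{2}$.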

Note, that is exactly what the fundamental theorem of Calculus does for us. Instead of going through all of that difficult definition, it says that if we can find an anti-derivative, then evaluating it at the endpoints gives the value $\displaystyle I$ we are looking for.

If you think the Riemann integral definition is complicated, just look at the Riemann-Stieltjes integral definition. The Riemann-Stieltjes integral is more general: it is an integral with respect to another function. Before stating its definition there is just one technical detail.

Definition: Let $\displaystyle g:[a,b]\to \mathbb{R}$. We say $\displaystyle g$ is of bounded variation when there exists a constant $\displaystyle M>0$ so that for any partition $\displaystyle P = \{ a=x_0<...<x_n = b\}$ we have that $\displaystyle \sum_{k=1}^n |g(x_k)-g(x_{k-1})| \leq M$.
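To make the variation sum concrete, a small sketch (the name `variation_sum` is mine): for a monotone $\displaystyle g$ the sum telescopes to $\displaystyle |g(b)-g(a)|$, so that constant works as $\displaystyle M$ for every partition at once.

```python
def variation_sum(g, partition):
    """Sum of |g(x_k) - g(x_{k-1})| over a partition a = x_0 < ... < x_n = b."""
    return sum(abs(g(partition[k]) - g(partition[k - 1]))
               for k in range(1, len(partition)))

# g(x) = x^2 is increasing on [0, 2], so every partition gives the
# telescoping value |g(2) - g(0)| = 4, and M = 4 witnesses bounded variation.
P = [0.0, 0.3, 1.1, 1.7, 2.0]
v = variation_sum(lambda x: x * x, P)
```

Non-monotone functions of bounded variation exist too, of course; the point is only that some single $\displaystyle M$ must bound the sums over all partitions.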

Now we can state the definition (which might look monstrous in the beginning).

Definition: Let $\displaystyle f$ be a bounded function on $\displaystyle [a,b]$ and $\displaystyle g$ be of bounded variation on $\displaystyle [a,b]$. We say $\displaystyle f$ is Riemann-Stieltjes integrable with respect to $\displaystyle g$ when there exists a real number $\displaystyle I$ such that: for any $\displaystyle \epsilon > 0$ there exists $\displaystyle \delta > 0$ so that for any partition $\displaystyle P = \{a = x_0<x_1<...<x_n = b\}$ satisfying $\displaystyle \text{mesh}(P) < \delta$ we have that $\displaystyle \left| I - \sum_{k=1}^n f(t_k)[g(x_{k}) - g(x_{k-1})] \right| < \epsilon $, where $\displaystyle t_k$ is any point in the subinterval $\displaystyle [x_{k-1},x_k]$. We call this distinguished number $\displaystyle I$ the Riemann-Stieltjes integral of $\displaystyle f$ on $\displaystyle [a,b]$ with respect to $\displaystyle g$, and write $\displaystyle I = \int_a^b f \bold{d}g$.
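The monstrous-looking sum inside the absolute value is easy to compute in practice. A sketch (names and tag choices mine):

```python
def rs_sum(f, g, partition, tags):
    """Riemann-Stieltjes sum: sum of f(t_k) * (g(x_k) - g(x_{k-1}))
    over consecutive partition points, with one tag t_k per subinterval."""
    return sum(f(t) * (g(x1) - g(x0))
               for x0, x1, t in zip(partition, partition[1:], tags))

# With g(x) = x the increments g(x_k) - g(x_{k-1}) are just the subinterval
# widths, so this collapses to an ordinary Riemann sum: here f(x) = x on
# [0, 1] with left-endpoint tags, which should approach I = 1/2.
n = 1_000
xs = [k / n for k in range(n + 1)]
s = rs_sum(lambda x: x, lambda x: x, xs, xs[:-1])
```

The only change from the ordinary Riemann sum is that the width $\displaystyle x_k - x_{k-1}$ is replaced by the increment $\displaystyle g(x_k) - g(x_{k-1})$.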

Now why is this a generalization? Because if $\displaystyle g(x) = x$ then it is exactly the standard Riemann integral! And in that case, integrating with respect to $\displaystyle x$, we would write $\displaystyle \int_a^b f \bold{d}x$. And that is where the $\displaystyle \bold{d}x$ comes from.

In fact it turns out that if $\displaystyle f$ is continuous and $\displaystyle g$ is smooth (continuously differentiable), then $\displaystyle g$ is of bounded variation and:

$\displaystyle \int_a^b f \bold{d}g = \int_a^b fg' $, where the RHS is the standard Riemann integral.

So not only does this explain the $\displaystyle \bold{d}x$ part, it also explains the differential of a function appearing inside an integral.

For example,

$\displaystyle \int_0^\pi \sin x \,\bold{d}(x^2+x) = \int_0^{\pi} \sin x \,(2x+1) \,\bold{d}x$.

By the Riemann-Stieltjes formula above.
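We can check this example numerically as well (the partition size and left-endpoint tags are my choice): a Riemann-Stieltjes sum for $\displaystyle \sin x \,\bold{d}(x^2+x)$ and an ordinary Riemann sum for $\displaystyle \sin x\,(2x+1)\,\bold{d}x$ should agree, and both approach the exact value $\displaystyle 2\pi + 2$ of this integral.

```python
import math

n = 100_000
xs = [math.pi * k / n for k in range(n + 1)]
g = lambda x: x * x + x  # the integrator g(x) = x^2 + x

# Left-tagged Riemann-Stieltjes sum for sin(x) dg
lhs = sum(math.sin(x0) * (g(x1) - g(x0)) for x0, x1 in zip(xs, xs[1:]))
# Ordinary left-tagged Riemann sum for sin(x) * (2x + 1) dx
rhs = sum(math.sin(x0) * (2 * x0 + 1) * (math.pi / n) for x0 in xs[:-1])
```

With this mesh the two sums agree to several decimal places, as the formula $\displaystyle \int_a^b f \,\bold{d}g = \int_a^b fg'$ predicts.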

Maybe you will find this interesting; that is why I posted it.

*) It can easily be shown that if $\displaystyle I_1,I_2$ are any two possible real values for the Riemann integral, then $\displaystyle I_1 = I_2$. Meaning there is only one such possible value $\displaystyle I$, and we define it to be the integral $\displaystyle \int_a^b f$.