commuting of the derivative with infinite sums

Jan 2010
The derivative is additive, and it follows by induction that the derivative "moves in" to finite sums: \(\displaystyle D[\sum_{i=0}^n f_i](a) = \sum_{i=0}^n Df_i(a) \).

What I don't fully understand is when the derivative commutes with infinite sums:

\(\displaystyle D[\sum_{i=0}^{\infty} f_i](a) = \sum_{i=0}^{\infty} Df_i(a)\). (1)

We use this property all the time in Fourier analysis, in the study of solutions to PDEs, etc. But I don't think it holds in general, even when all the sums involved converge absolutely. For instance, consider
\(\displaystyle f_0 = \sin(x)\) and \(\displaystyle f_i = \frac{\sin[(i+1)^2 x]}{(i+1)^2} - \frac{\sin[i^2 x]}{i^2}\) for i > 0.
The series telescopes: its nth partial sum is \(\displaystyle \frac{\sin[(n+1)^2 x]}{(n+1)^2}\), so for every x it converges to 0, and it does so absolutely since \(\displaystyle |f_i(x)| \leq \frac{2}{i^2}\). But

\(\displaystyle D\left[\sum_{i=0}^\infty f_i\right](0) = D[0](0) = 0, \quad \text{whereas} \quad \sum_{i=0}^\infty Df_i(0) = \cos(0) + \sum_{i=1}^\infty (1-1) = 1.\)
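For concreteness, the counterexample can be checked numerically. This is just an illustrative sketch (the names `f` and `df` are mine, not from any library): the partial sums of the \(f_i\) collapse to nearly 0 at an arbitrary point, while the term-by-term derivative series at 0 sums to exactly 1.

```python
import math

# Terms of the series: f_0(x) = sin(x),
# f_i(x) = sin((i+1)^2 x)/(i+1)^2 - sin(i^2 x)/i^2 for i >= 1.
def f(i, x):
    if i == 0:
        return math.sin(x)
    return math.sin((i + 1) ** 2 * x) / (i + 1) ** 2 - math.sin(i ** 2 * x) / i ** 2

# Term-by-term derivatives: Df_0(x) = cos(x),
# Df_i(x) = cos((i+1)^2 x) - cos(i^2 x) for i >= 1.
def df(i, x):
    if i == 0:
        return math.cos(x)
    return math.cos((i + 1) ** 2 * x) - math.cos(i ** 2 * x)

n = 1000
x = 0.7  # arbitrary test point

# The sum telescopes to sin((n+1)^2 x)/(n+1)^2, so this is ~0.
partial_sum = sum(f(i, x) for i in range(n + 1))
print(partial_sum)  # magnitude below 1/(n+1)^2, i.e. ~1e-6

# At a = 0 every derivative term with i >= 1 vanishes, leaving cos(0) = 1.
deriv_sum_at_0 = sum(df(i, 0.0) for i in range(n + 1))
print(deriv_sum_at_0)  # exactly 1.0
```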

What, then, are the conditions necessary for the derivative to commute with an infinite sum? Is absolute convergence of both sums in equation (1) in a neighborhood of a (and not just at a) enough? If so, how do I prove it?


MHF Helper
Apr 2005
No, it isn't absolute convergence you need. You need to have uniform convergence in order to differentiate inside the sum.
Jan 2010
Uniform convergence of what? I would assume you mean uniform convergence of the partial sums of the f_i to f, but in my concrete example above, \(\displaystyle \sum_{i=0}^n f_i\) does converge uniformly to 0, unless I'm missing something. (After all, the partial sum telescopes, so \(\displaystyle \left| \sum_{i=0}^n f_i(x)\right| \leq \frac{1}{(n+1)^2}\) for every x.)
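That uniform bound can itself be spot-checked numerically. A minimal sketch (the grid and the name `worst` are my own choices, not from the thread): sample the nth partial sum over a range of x values and compare its largest magnitude against 1/(n+1)².

```python
import math

# f_0(x) = sin(x); f_i(x) = sin((i+1)^2 x)/(i+1)^2 - sin(i^2 x)/i^2 for i >= 1.
def f(i, x):
    if i == 0:
        return math.sin(x)
    return math.sin((i + 1) ** 2 * x) / (i + 1) ** 2 - math.sin(i ** 2 * x) / i ** 2

n = 50
# Largest |partial sum| over a grid of x values in [-10, 10]; the telescoped
# form sin((n+1)^2 x)/(n+1)^2 predicts it never exceeds 1/(n+1)^2.
worst = max(abs(sum(f(i, k * 0.01) for i in range(n + 1)))
            for k in range(-1000, 1001))
print(worst <= 1.0 / (n + 1) ** 2 + 1e-12)  # True: the convergence to 0 is uniform
```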