By definition the derivative is additive, and it trivially follows that the derivative "moves in" to finite sums: $\displaystyle D[\sum_{i=0}^n f_i](a) = \sum_{i=0}^n Df_i(a) $.

What I don't fully understand is when the derivative commutes with infinite sums:

$\displaystyle D[\sum_{i=0}^{\infty} f_i](a) = \sum_{i=0}^{\infty} Df_i(a)$. (1)

We use this property all the time in Fourier analysis, in the study of solutions to PDEs, and so on. But I don't think it holds in general, even when all the sums involved converge absolutely. For instance, consider

$\displaystyle f_0 = \sin(x)$ and, for $i > 0$, $\displaystyle f_i = \frac{\sin[(i+1)^2 x]}{(i+1)^2} - \frac{\sin[i^2 x]}{i^2}$.

Then for every $x$ the sum of the $f_i$ converges absolutely (the terms are dominated by $\frac{1}{(i+1)^2} + \frac{1}{i^2}$), and the partial sums telescope to $\frac{\sin[(n+1)^2 x]}{(n+1)^2} \to 0$, so the sum is identically $0$. But

$\displaystyle D\left[\sum_{i=0}^\infty f_i\right](0) = D[0](0) = 0, \quad \text{while} \quad \sum_{i=0}^\infty Df_i(0) = 1 + \sum_{i=1}^\infty (1 - 1) = 1.$
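A quick numerical sanity check of this counterexample (a sketch in Python; the names `f` and `df` are just my labels for the terms above and their derivatives):

```python
import math

# Terms of the telescoping series: f_0(x) = sin(x),
# f_i(x) = sin((i+1)^2 x)/(i+1)^2 - sin(i^2 x)/i^2 for i > 0.
def f(i, x):
    if i == 0:
        return math.sin(x)
    return math.sin((i + 1) ** 2 * x) / (i + 1) ** 2 - math.sin(i ** 2 * x) / i ** 2

# Termwise derivatives: f_0'(x) = cos(x),
# f_i'(x) = cos((i+1)^2 x) - cos(i^2 x) for i > 0.
def df(i, x):
    if i == 0:
        return math.cos(x)
    return math.cos((i + 1) ** 2 * x) - math.cos(i ** 2 * x)

N = 10_000
x = 0.7  # arbitrary test point

# Partial sum telescopes to sin(N^2 x)/N^2, which is tiny for large N.
partial_sum = sum(f(i, x) for i in range(N))

# At x = 0 every derivative term with i > 0 is cos(0) - cos(0) = 0,
# so the sum of derivatives is exactly 1.
deriv_sum_at_0 = sum(df(i, 0.0) for i in range(N))

print(partial_sum)     # on the order of 1/N^2, i.e. essentially 0
print(deriv_sum_at_0)  # 1.0
```

So the limit function is $0$ (derivative $0$ everywhere), while the sum of the derivatives at $0$ is $1$.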

What conditions, then, guarantee that the derivative commutes with an infinite sum? Is absolute convergence of both sums in (1) in a neighborhood of $a$ (and not just at $a$) enough? If so, how would I prove it?