Suppose the function $f(x)$ is defined and finite on the interval $(-L, L)$.
We want to approximate $f(x)$ by a finite trigonometric series:
$$f(x) \approx S_N(x) = \frac{\alpha_0}{2} + \sum_{n=1}^{N}\left(\alpha_n \cos\frac{n\pi x}{L} + \beta_n \sin\frac{n\pi x}{L}\right),$$
where the parameters $\alpha_0$, $\alpha_n$, and $\beta_n$ are freely variable.
The mean square error of the approximation is defined as:
$$E_N = \frac{1}{2L}\int_{-L}^{L}\bigl[f(x) - S_N(x)\bigr]^2\,dx.$$
a) Show that the choice $\alpha_0 = a_0$, $\alpha_n = a_n$, and $\beta_n = b_n$, where $a_0$, $a_n$, and $b_n$ are the Fourier coefficients of $f$, minimizes $E_N$.
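For reference, here is a sketch of the identity the standard route leads to (writing the trial series as $S_N(x) = \frac{\alpha_0}{2} + \sum_{n=1}^{N}(\alpha_n\cos\frac{n\pi x}{L} + \beta_n\sin\frac{n\pi x}{L})$, and using the orthogonality of sines and cosines on $(-L, L)$ to expand the mean square error and complete the square in each parameter):
$$E_N = \frac{1}{2L}\int_{-L}^{L} f^2\,dx - \frac{a_0^2}{4} - \frac{1}{2}\sum_{n=1}^{N}\left(a_n^2 + b_n^2\right) + \frac{(\alpha_0 - a_0)^2}{4} + \frac{1}{2}\sum_{n=1}^{N}\left[(\alpha_n - a_n)^2 + (\beta_n - b_n)^2\right].$$
The first three terms do not involve the free parameters, and every remaining term is a nonnegative square, so $E_N$ is minimized exactly when $\alpha_0 = a_0$, $\alpha_n = a_n$, and $\beta_n = b_n$.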
I think I can figure out the rest once I have this, so I won't bother typing it... I have never worked with the mean square error before, so I'm a bit confused about how to start. Any help would be appreciated.
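In case it helps to see the claim concretely before doing the algebra, here is a quick numerical check (my own illustration, with an assumed example $f(x) = x$, $L = \pi$, $N = 3$, and simple trapezoid quadrature, none of which come from the problem itself): nudging a coefficient away from its Fourier value raises the mean square error, and by exactly half the squared perturbation.

```python
import numpy as np

# Numerical sanity check (an illustration, not part of the original problem):
# for f(x) = x on (-L, L), the truncated trigonometric series has smallest
# mean square error when its coefficients are the Fourier coefficients of f.

L = np.pi
N = 3
x = np.linspace(-L, L, 20001)
dx = x[1] - x[0]
f = x  # example function; any square-integrable f would do

def integrate(g):
    """Composite trapezoid rule on the fixed grid."""
    return np.sum((g[1:] + g[:-1]) / 2) * dx

def series(a0, a, b):
    """Trial series S_N with free coefficients a0, a[n-1], b[n-1]."""
    s = np.full_like(x, a0 / 2)
    for n in range(1, N + 1):
        s += a[n-1] * np.cos(n*np.pi*x/L) + b[n-1] * np.sin(n*np.pi*x/L)
    return s

def mse(a0, a, b):
    """Mean square error E_N = (1/2L) * integral of (f - S_N)^2."""
    return integrate((f - series(a0, a, b))**2) / (2*L)

# Fourier coefficients of f, computed by quadrature
a0 = integrate(f) / L
a = np.array([integrate(f * np.cos(n*np.pi*x/L)) / L for n in range(1, N+1)])
b = np.array([integrate(f * np.sin(n*np.pi*x/L)) / L for n in range(1, N+1)])

e_min = mse(a0, a, b)
# Nudge one coefficient by delta: E_N should rise by delta^2 / 2
delta = 0.1
e_pert = mse(a0, a, b + np.array([delta, 0.0, 0.0]))
print(e_pert > e_min)            # True
print(round(e_pert - e_min, 4))  # 0.005, i.e. delta**2 / 2
```

The second print matches the completing-the-square picture: the error penalty for a wrong coefficient is a pure square, independent of everything else.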