Integration involving the minimization of signal transmission error
I'm studying the transmission of signals and have come across an integral I don't fully understand. I can follow the mechanics of the integration, but not why my textbook solves it the way it does.
Background: The integral is used to solve for a constant c that multiplies the function approximating the signal. The integration gives the value of c for which the signal error (the difference between the signal and its approximation) is minimized.
The integral is: c = (1/pi) Integral (limits 0 to 2pi) f(t) sin(t) dt, where f(t)
represents the signal and sin(t) the approximation of the signal.
My text evaluates as follows:
(1/pi) (Integral (limits 0 to pi) sin(t) dt + Integral (limits pi to 2pi) -sin(t) dt) = 4/pi
Again, I understand the integration itself, but not why it is broken into the sum of the two integrals. What is the reasoning behind this step? Thanks for the help.
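For what it's worth, a quick numeric check of the textbook's result. This is only a sketch under an assumption: the split into two integrals suggests f(t) is a square wave equal to +1 on (0, pi) and -1 on (pi, 2pi), so that f(t) sin(t) reduces to sin(t) on the first half and -sin(t) on the second. The function names here (`f`, `c_coefficient`) are mine, not from the textbook.

```python
import math

def f(t):
    # Assumed square-wave signal: +1 on (0, pi), -1 on (pi, 2*pi)
    return 1.0 if t % (2 * math.pi) < math.pi else -1.0

def c_coefficient(n=100000):
    # Midpoint Riemann sum of (1/pi) * integral from 0 to 2pi of f(t) sin(t) dt
    h = 2 * math.pi / n
    total = sum(f((k + 0.5) * h) * math.sin((k + 0.5) * h) for k in range(n))
    return total * h / math.pi

print(c_coefficient())  # close to 4/pi ~ 1.2732
print(4 / math.pi)
```

Running this, the numeric value of the integral agrees with the 4/pi the book gets, which at least confirms the split evaluation is arithmetically right under that square-wave assumption.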