I am working on a software platform for testing certain types of apparatus. Most of it is pretty straightforward. One of the data sources is an audio card. I can read this in the tester and end up with a large number of data points representing the incoming signal, sampled at whatever rate the card is configured for.
The existing test just checks to see if there is a signal present. It has worked well enough until now. However, my group has been asked to improve the test.
What we want to do is have the audio generator produce two or three known frequencies, and then test the samples to see whether all of the expected frequencies are present.
I've fiddled with some approaches that try to subtract sinusoids of the expected frequencies from the samples. My rationale is that if the input is 4kHz and 7kHz, then subtracting a 4kHz sine and a 7kHz sine from the samples should leave roughly nothing. However, I have phase and amplitude problems: I can't control either in the generator, at least not precisely enough to make this work.
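To illustrate the problem with the subtraction idea, here is a small sketch (the 44.1kHz sample rate and the specific phases/amplitudes of the captured tones are assumptions for demonstration). Even when the frequencies match exactly, a phase or amplitude mismatch leaves a large residual:

```python
import numpy as np

fs = 44100                    # assumed sample rate
t = np.arange(fs) / fs        # one second of samples

# Simulated capture: 4 kHz and 7 kHz tones with unknown (here, arbitrary)
# amplitudes and phases, as produced by an uncontrolled generator
captured = (0.8 * np.sin(2 * np.pi * 4000 * t + 1.3)
            + 0.5 * np.sin(2 * np.pi * 7000 * t + 0.4))

# Naive subtraction assuming unit amplitude and zero phase
residual = (captured
            - np.sin(2 * np.pi * 4000 * t)
            - np.sin(2 * np.pi * 7000 * t))

# The residual is far from "roughly nothing" because the subtracted
# sinusoids don't match the captured ones in phase or amplitude
rms = np.sqrt(np.mean(residual ** 2))
print(rms)
```

The subtracted and captured sinusoids at the same frequency combine into another sinusoid whose size depends on the phase and amplitude mismatch, so the residual only vanishes when both are matched exactly.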
Is there an easy way to do this, bearing in mind that my mathematics is limited?