Hi!

I need some help with a question that is probably very simple, but I have already forgotten many things from my university statistics courses.

I need to fit a measured spectrum with a linear combination of known curves. That is, I have measurements of f(lambda) and the corresponding Gaussian errors ef(lambda) at about 1500 points in lambda, which need to be fitted with the model function f_mod(lambda) = sum_{i=1}^{10} c_i * g_i(lambda).
Here, the g_i(lambda) curves are known/given and the c_i coefficients are to be fitted. So far it's pretty simple. The problem is that I do not know whether I need all 10 curves to fit the data, or whether 3 of them are already sufficient. I need to test each curve for whether it is significantly present in the data. Since the data have fairly low signal-to-noise, I would like to know which of the 10 curves can be significantly identified in the data _given the uncertainties in the measured data_. All g_i(lambda) curves are linearly independent of each other; there is no degeneracy among them.
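
For concreteness, here is a minimal sketch of the weighted linear fit I have in mind (Python with numpy; the arrays G, f and ef below are random placeholders standing in for my real curves, spectrum and errors):

    import numpy as np

    # Placeholder data standing in for the real measurements: f and ef are
    # the measured spectrum and its 1-sigma Gaussian errors at n_lambda
    # points; the columns of G are the known curves g_i on the same grid.
    rng = np.random.default_rng(0)
    n_lambda, n_curves = 1500, 10
    G = rng.normal(size=(n_lambda, n_curves))
    ef = np.full(n_lambda, 0.5)
    f = G @ rng.normal(size=n_curves) + rng.normal(scale=ef)

    # Weighted linear least squares: dividing both sides by the errors
    # turns ordinary least squares into minimizing
    # chi2 = sum(((f - G @ c) / ef)**2).
    A = G / ef[:, None]
    b = f / ef
    c_hat, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    chi2_full = float(res[0]) if res.size else float(np.sum((b - A @ c_hat)**2))
    dof_full = n_lambda - n_curves
    print("fitted coefficients:", c_hat)
    print("chi2 =", chi2_full, "with", dof_full, "degrees of freedom")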

I know that the goodness of fit can be tested with a simple chi-square test: for a given confidence level and number of degrees of freedom, I can calculate the chi-square value below which the actual chi-square of my fit should fall for the fit to be acceptable. However, the chi-square test assumes that all 10 curves are present in the data, right? Is there a way to check the significance of the presence of the individual curves with a chi-square test? I know that an F-test can be used for this purpose, by measuring the decrease in chi-square when a significant component is added to the fit, but I'm curious whether it can also be done with a chi-square test.
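
To make the second part of the question concrete, this is the kind of comparison I mean, continuing with G, f, ef, b, chi2_full and dof_full from the sketch above (scipy assumed; the 95% level and looping over all curves are just for illustration):

    from scipy import stats

    # Goodness of fit: the chi2 of the full 10-curve fit should fall below
    # this critical value for the fit to be acceptable at 95% confidence.
    chi2_crit = stats.chi2.ppf(0.95, dof_full)
    print("chi2 =", chi2_full, "critical value =", chi2_crit)

    # F-test for one curve: refit with curve k removed and compare the
    # increase in chi2 against the full fit's chi2 per degree of freedom.
    def chi2_without(k):
        A_red = np.delete(G, k, axis=1) / ef[:, None]
        c, res, _, _ = np.linalg.lstsq(A_red, b, rcond=None)
        return float(res[0]) if res.size else float(np.sum((b - A_red @ c)**2))

    for k in range(n_curves):
        F = (chi2_without(k) - chi2_full) / (chi2_full / dof_full)
        p = stats.f.sf(F, 1, dof_full)  # small p => curve k is significant
        print(f"curve {k}: F = {F:.2f}, p = {p:.3g}")

(The F statistic here is the usual nested-model form for one added parameter, F = delta_chi2 / (chi2_full / dof_full).)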

Thanks a lot in advance for your help!!!