Suppose you conduct an experiment in which chemical concentration is measured 15 times in succession. The experiment is then repeated in different environments a further n-1 times, giving n data series in total; n might be as many as 30.

Next suppose there exist two models/functions for predicting the concentration following the start of an experiment: A (3 parameters) and B (6 parameters). Optimisation techniques are used to fit each model to each of the n data series, and a measure of fit between each prediction and data series is computed (e.g. the Euclidean distance between the 15 predicted points and the 15 measured points).
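In case it helps, the fitting step per series looks roughly like the following (Python sketch; f_A here is just a hypothetical stand-in for one of the real models, and the data are simulated):

    import numpy as np
    from scipy.optimize import minimize

    def f_A(t, params):
        """Placeholder 3-parameter model, e.g. exponential decay to a plateau."""
        a, b, c = params
        return a * np.exp(-b * t) + c

    def sse(params, model, t, y):
        """Sum of squared errors between one data series and the model's predictions."""
        return np.sum((model(t, params) - y) ** 2)

    def fit_model(model, p0, t, y):
        """Fit a model to one 15-point series by minimising the SSE."""
        result = minimize(sse, p0, args=(model, t, y))
        return result.x, result.fun  # best-fit parameters, residual SSE

    # Simulated stand-in for one series of 15 concentration measurements
    t = np.linspace(0, 14, 15)
    rng = np.random.default_rng(0)
    y = f_A(t, (5.0, 0.3, 1.0)) + rng.normal(0.0, 0.1, size=15)

    params_A, sse_A = fit_model(f_A, np.ones(3), t, y)

(Minimising the SSE is equivalent to minimising the Euclidean distance, so I use the SSE directly.)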


What I would like to do is to statistically test whether model A is 'good enough' for modelling such experiments, either by assessing the performance of model A alone, given the experimental data and the associated best fits, or by constructing a test that compares the performance of A and B.


I am rather confused as to how best to achieve the above. Chi-squared tests seem to be used to assess goodness of fit, but only for univariate categorical datasets. Would likelihood ratio tests be a good choice here? I'm not sure how I would go about evaluating likelihood(predictions | model & data (& params?)).
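From what I've read, if one is willing to assume i.i.d. Gaussian errors then the maximised log-likelihood of each fitted model is a simple function of its SSE, so, provided A is actually nested within B, I think a likelihood ratio test per series would reduce to something like this (the SSE values below are made-up numbers):

    import numpy as np
    from scipy.stats import chi2

    def lrt_gaussian(sse_a, sse_b, n_obs, df):
        """LR test for nested models assuming i.i.d. Gaussian errors.

        With the error variance profiled out, each model's maximised
        log-likelihood is -(n_obs / 2) * log(sse / n_obs) + const, so
        the LR statistic reduces to n_obs * log(sse_a / sse_b). It is
        referred to a chi-squared distribution whose df is the
        difference in parameter counts.
        """
        stat = n_obs * np.log(sse_a / sse_b)
        return stat, chi2.sf(stat, df)

    # Per series: 15 observations, 6 - 3 = 3 extra parameters in B
    stat, p = lrt_gaussian(sse_a=1.8, sse_b=1.2, n_obs=15, df=3)

I'm unsure, though, whether 15 points per series is enough for the asymptotic chi-squared approximation to hold, or how to combine the per-series tests across the n repetitions.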

Given my lack of statistical experience, I'm tempted to simply select one of the two models based on the distribution of the sum of squared errors per model across the n series.
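That is, fit both models to all n series and run a paired comparison on the two sets of per-series SSEs, along the lines of the following (the Wilcoxon signed-rank test is just one arbitrary choice of paired test, and the SSE arrays are simulated stand-ins):

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    # Stand-ins for the per-series SSEs from the fits above (n = 30 series)
    sse_A = rng.gamma(shape=2.0, scale=1.0, size=30)
    sse_B = sse_A - rng.uniform(0.0, 0.5, size=30)  # B fits slightly better

    # Paired test across series: does B fit systematically better than A?
    stat, p_value = wilcoxon(sse_A, sse_B)
    print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")

Though I realise this doesn't penalise B for its extra parameters at all.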

Any comments or suggestions would be gratefully received.

Regards,

Will Furnass