So the equation for comparing two multiple regression models using an F-test is

$$F = \frac{(SS_1 - SS_2)/(df_1 - df_2)}{SS_2/df_2}$$

where model 2 is the larger of the two models. When we're calculating this to compare our models, do we use the sum of squares for the model, or the sum of squares for the error?
SS1 is the sum of squares of the factor whose effect you want to compare, and SS2 is the error sum of squares. Both SS1 and SS2 are calculated from the model. For example, if your model is $y_{ij} = \mu + \tau_i + \varepsilon_{ij}$ and you want to test $H_0$: all $\tau_i$'s are equal, then your SS1 is $SS_{\text{treatment}} = \sum_i n_i (\bar y_{i\cdot} - \bar y_{\cdot\cdot})^2$ and SS2 is $SS_{\text{error}} = \sum_i \sum_j (y_{ij} - \bar y_{i\cdot})^2$.
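A small sketch of those two quantities for a one-way layout (the simulated data, group means, and variable names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# three groups of 10 observations with different true means
groups = [rng.normal(loc=m, size=10) for m in (0.0, 0.5, 1.0)]

grand_mean = np.mean(np.concatenate(groups))

# SS1: between-group (treatment) sum of squares
ss1 = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# SS2: within-group (error) sum of squares
ss2 = sum(((g - g.mean()) ** 2).sum() for g in groups)

# the usual one-way ANOVA F ratio of mean squares
df1 = len(groups) - 1
df2 = sum(len(g) for g in groups) - len(groups)
F = (ss1 / df1) / (ss2 / df2)
```

Note that SS1 and SS2 partition the total sum of squares: SS1 + SS2 equals the total SS about the grand mean.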
You can actually do this by taking the difference in either the model sum of squares or the error sum of squares (the order of subtraction flips if you use the model SS); however, the error sum of squares of the larger model needs to be in the denominator.
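Here is a minimal numerical sketch of that point, assuming simulated data and least-squares fits via numpy (the predictor names and sample size are made up). It computes the nested-model F statistic both ways and shows they agree:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)  # x2 is truly irrelevant here

def sse(X, y):
    """Error sum of squares from a least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_small = np.column_stack([np.ones(n), x1])       # reduced model
X_big = np.column_stack([np.ones(n), x1, x2])     # full (larger) model

sse_small = sse(X_small, y)
sse_big = sse(X_big, y)

df_num = X_big.shape[1] - X_small.shape[1]  # extra parameters in the full model
df_den = n - X_big.shape[1]                 # error df of the full model

# Error-SS version: reduced minus full, full model's SSE in the denominator
F = ((sse_small - sse_big) / df_num) / (sse_big / df_den)
p_value = stats.f.sf(F, df_num, df_den)

# Model-SS version: the order of subtraction flips, same denominator, same F
tss = np.sum((y - y.mean()) ** 2)
ssm_small = tss - sse_small
ssm_big = tss - sse_big
F_alt = ((ssm_big - ssm_small) / df_num) / (sse_big / df_den)
```

Since both models include an intercept, model SS is just total SS minus error SS, so the two differences are algebraically identical.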
To be consistent with the way you've drawn up the test statistic, the SS terms are error sums of squares of their respective models.
This doesn't give a test of anything. Taking the difference of those two sums of squares doesn't give a sum of squares; it doesn't even give something that is positive with probability 1. The question is about testing nested models in regression, not testing for factor effects in ANOVA.