1. ## Multiple regression coefficients

Hi there, I'm currently working on a research project that requires using multiple regression.

I am really struggling to find any proofs or derivations for the coefficients in the linear regression line. I know that for ordinary simple regression we use least squares, which gives two simple formulas for, say, B0 and B1. But in the multiple case, where we may have B0, B1, B2, ...

How do you find these?

Many online sources just seem to describe it as ugly and skip over this part, or resort to a computer program.

Many Thanks

Sam

2. Originally Posted by saambre
Hi there, I'm currently working on a research project that requires using multiple regression.

I am really struggling to find any proofs or derivations for the coefficients in the linear regression line. I know that for ordinary simple regression we use least squares, which gives two simple formulas for, say, B0 and B1. But in the multiple case, where we may have B0, B1, B2, ...

How do you find these?

Many online sources just seem to describe it as ugly and skip over this part, or resort to a computer program.

Many Thanks

Sam
For ordinary least squares regression, it isn't particularly ugly. You just need some background in linear algebra. The linear model is $
Y = X\beta + \epsilon
$

where Y is an (n x 1) random vector, X is an (n x p) matrix of known constants (the observed predictor values), and beta is a (p x 1) vector of unknown parameters; epsilon is an (n x 1) vector of random errors (mean 0, typically with iid normal components).

A reasonable goal under these circumstances is to minimize
$Q(\beta) = \|Y - X\beta\|^2$
where $\|\cdot\|^2$ is the squared Euclidean length of a vector. It can be shown that the value of $\beta$ that minimizes this is given by
$
\hat{\beta} = (X'X)^{-1} X'Y
$
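If you want to sanity-check the formula numerically, here's a minimal sketch (assuming NumPy is available; the data set and true coefficients are made up for illustration). It builds a design matrix with an intercept column and two predictors, then solves the normal equations $X'X\hat{\beta} = X'Y$ directly rather than forming the inverse, which is the numerically preferred route:

```python
import numpy as np

# Simulated data: n = 100 observations, p = 3 parameters (intercept + 2 slopes).
# The "true" beta and noise level are arbitrary choices for this illustration.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])   # (n x p) design matrix
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Normal equations: solve (X'X) beta = X'Y instead of computing (X'X)^{-1}
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check against NumPy's built-in least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat)
```

Both routes should agree to machine precision, and with low noise the estimates land close to the true coefficients.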

This is the least squares estimate of beta. The proof is not very difficult if you have the appropriate machinery from matrix algebra (projections and so forth); you can also show it with a little more effort via matrix calculus (less background required) - set the gradient of Q(beta) to zero and show that the Hessian is positive definite.
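The matrix-calculus route can be sketched as follows. Expand the objective:

$
Q(\beta) = (Y - X\beta)'(Y - X\beta) = Y'Y - 2\beta' X'Y + \beta' X'X \beta
$

Setting the gradient to zero gives the normal equations:

$
\nabla_\beta Q(\beta) = -2X'Y + 2X'X\beta = 0 \quad \Longrightarrow \quad X'X\hat{\beta} = X'Y
$

The Hessian is $2X'X$, which is positive definite whenever $X$ has full column rank, so the critical point is a minimum and $X'X$ is invertible, yielding $\hat{\beta} = (X'X)^{-1}X'Y$.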