I've been learning about polynomial regression from online resources. I came across this video:
which derives the matrix everyone uses. I think I understand most of it, but I have a couple of questions.
Overview of derivation
Link to a shorter derivation at the bottom of the overview.
If you can find $a_0, a_1, \dots, a_k$ such that:

$$E = \sum_{i=1}^{n} \left(y_i - \left(a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_k x_i^k\right)\right)^2$$

is minimised, you will have found the polynomial equation of the form:

$$y = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k$$

with the least squared error.
To do this, the partial derivative of $E$ is taken with respect to each coefficient $a_0, a_1, \dots, a_k$, which gives:

$$\frac{\partial E}{\partial a_j} = -2 \sum_{i=1}^{n} x_i^{\,j} \left(y_i - \left(a_0 + a_1 x_i + \cdots + a_k x_i^k\right)\right), \qquad j = 0, 1, \dots, k$$

Setting the derivatives to 0 minimises $E$.
After expanding and simplifying you're left with:

$$\sum_{i=1}^{n} x_i^{\,j}\, y_i = a_0 \sum_{i=1}^{n} x_i^{\,j} + a_1 \sum_{i=1}^{n} x_i^{\,j+1} + \cdots + a_k \sum_{i=1}^{n} x_i^{\,j+k}, \qquad j = 0, 1, \dots, k$$

which is a set of linear simultaneous equations that can be solved for the coefficients.
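To check my understanding, I sketched the derivation in code: build the matrix of summed powers of $x$ and the right-hand side of summed $x^j y$ terms, then solve the linear system (the function name and test data are my own):

```python
import numpy as np

def polyfit_normal_equations(x, y, k):
    """Fit a degree-k polynomial by building and solving the
    normal equations from the summed powers of x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # M[j, m] = sum_i x_i^(j+m), b[j] = sum_i x_i^j * y_i
    M = np.array([[np.sum(x ** (j + m)) for m in range(k + 1)]
                  for j in range(k + 1)])
    b = np.array([np.sum((x ** j) * y) for j in range(k + 1)])
    return np.linalg.solve(M, b)  # coefficients a_0 .. a_k

# Points that lie exactly on y = 1 + 2x + 3x^2
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1 + 2 * x + 3 * x ** 2
print(polyfit_normal_equations(x, y, 2))  # ≈ [1. 2. 3.]
```

This recovers the exact coefficients on noise-free data, which at least tells me I've transcribed the matrix correctly.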
Shorter alternate explanation:
When the partial derivative is taken, the square used to find the least-square error turns into a factor of two, which is then cancelled out later. As far as I can tell, you would get the same answer trying to find the least cubed error, or the least square-root error, or any (non-zero) power of the error. Does this mean that the resulting function would be the same for any (non-zero) powered error?
If that is the case, I can't quite wrap my head around how it would work. I would have thought that the higher the power, the more outliers would pull the function towards themselves.
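To make the question concrete, here is a quick numeric check I tried (the toy data and helper name are my own): fit a single constant $c$ — a degree-0 polynomial — to the data $\{0, 0, 10\}$ by brute-force minimising $\sum_i |y_i - c|^p$ over a fine grid, for different powers $p$:

```python
import numpy as np

# Toy data with one outlier; fit a single constant c to it
# by minimising sum_i |y_i - c|**p over a fine grid.
y = np.array([0.0, 0.0, 10.0])
grid = np.linspace(0.0, 10.0, 100001)

def best_constant(p):
    errors = np.abs(y[:, None] - grid[None, :]) ** p
    return grid[np.argmin(errors.sum(axis=0))]

print(best_constant(2))  # the mean, ~3.33
print(best_constant(4))  # lands closer to the outlier 10
```

When I run this, the higher power gives a noticeably larger $c$, i.e. the fit is pulled towards the outlier, which matches my intuition above — so I'm confused about where the "any power gives the same answer" reasoning goes wrong.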
It's never explained in the video, but why does setting the partial derivatives to 0 minimise $E$?
I think I've got this one.
Since $E$ is a sum of squares, it must always be greater than or equal to 0. For it to equal 0, the entire data set would need to be populated by 0s, or there would need to be no data; in either case polynomial regression probably wouldn't be used. Thus the stationary point is always a minimum.
Thanks in advance,