[SOLVED] Deriving a Polynomial Regression Model

Hi,

I've been learning about Polynomial regression from Internet resources. I came across this video: Polynomial Regression Model: Derivation: Part 1 of 2 - YouTube

which derives the matrix everyone uses. I think I understand most of it, but I have a couple of questions.

**Overview of derivation**

There's a link to a shorter derivation at the bottom of the overview.

If you can find $a$, $b$ and $c$ such that:

$$E = \sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right)^2$$

is minimised, you will have found the polynomial equation of the form:

$$y = a + bx + cx^2$$

with the least squared error.

To do this the partial derivative of $E$ is taken with respect to $a$, $b$ and $c$, which gives:

$$\frac{\partial E}{\partial a} = -2 \sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right)$$

$$\frac{\partial E}{\partial b} = -2 \sum_{i=1}^{n} x_i \left( y_i - (a + b x_i + c x_i^2) \right)$$

$$\frac{\partial E}{\partial c} = -2 \sum_{i=1}^{n} x_i^2 \left( y_i - (a + b x_i + c x_i^2) \right)$$

Setting the derivatives to 0 minimises $E$.

After expanding and simplifying you're left with:

$$\sum y_i = a n + b \sum x_i + c \sum x_i^2$$

$$\sum x_i y_i = a \sum x_i + b \sum x_i^2 + c \sum x_i^3$$

$$\sum x_i^2 y_i = a \sum x_i^2 + b \sum x_i^3 + c \sum x_i^4$$

which leaves three linear simultaneous equations that can be solved.
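To check my understanding, I tried building and solving the simultaneous equations numerically. The data values here are just ones I made up, and I'm cross-checking the result against numpy's built-in fitter:

```python
import numpy as np

# Made-up sample data, roughly following y = 1 + 2x + 0.5x^2
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 3.4, 7.2, 11.4, 17.1])

n = len(x)

# Left-hand side: the sums of powers of x from the simultaneous equations
A = np.array([
    [n,             x.sum(),        (x**2).sum()],
    [x.sum(),       (x**2).sum(),   (x**3).sum()],
    [(x**2).sum(),  (x**3).sum(),   (x**4).sum()],
])

# Right-hand side: the sums of x^k * y
rhs = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])

# Solve the three simultaneous linear equations for a, b, c
a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)

# Cross-check: np.polyfit does the same least-squares fit,
# returning coefficients highest power first
print(np.polyfit(x, y, 2))
```

The two printouts agree (up to floating-point noise), which makes me reasonably confident I've got the equations down correctly.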

Shorter alternate explanation:

Least-Squares Parabola

**Questions**

**Q1**

When the partial derivative is taken, the square used to find the least square error becomes a multiplication by two, which is then cancelled out later. As far as I can tell you would get the same answer trying to find the least cubed error, or the least square root error, or any (non-zero) power of the error. Does this mean that the resulting function would be the same for any (non-zero) power?

If that is the case I can't quite wrap my head around how it would work. I would have thought the higher the power, the more outliers would pull the function towards themselves.
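To test my outlier intuition, I tried a tiny made-up example. To keep it simple I fit a constant model (just a single number $m$, not a parabola) by brute-forcing the $m$ that minimises $\sum_i |y_i - m|^p$ for a few different powers $p$:

```python
import numpy as np

# Made-up data with one obvious outlier
y = np.array([1.0, 1.1, 0.9, 1.0, 10.0])

# Candidate values for the constant model m
m_grid = np.linspace(0.0, 10.0, 100001)

def best_m(p):
    """Brute-force the m minimising sum(|y_i - m| ** p)."""
    costs = (np.abs(y[:, None] - m_grid[None, :]) ** p).sum(axis=0)
    return m_grid[np.argmin(costs)]

# Higher powers are pulled further towards the outlier
print(best_m(1))  # the median: barely affected by the outlier
print(best_m(2))  # the mean: pulled part-way towards it
print(best_m(4))  # pulled further still
```

The minimisers really are different for each power, and the higher the power the closer the fit moves to the outlier, so the intuition seems right even though I thought the algebra said otherwise.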

**Q2**

It's never explained in the video, but why does setting the partial derivatives to 0 minimise $E$?

I think I've got this one.

All three must always be greater than or equal to 0. For $E$ to be equal to 0 the entire data set would need to be populated by 0s or have no data; in either case polynomial regression probably wouldn't be used. Thus it is always at a minimum.

Thanks in advance,

Matthew

Re: Deriving a Polynomial Regression Model

Q1

It's differentiation, so the power reduces by 1. If you used a cubic error you would be left with $3 \times (\text{some quadratic}) = 0$, which will not necessarily end up at the same answer as $2 \times (\text{some linear function}) = 0$.

Q2

I'm not sure I follow your reasoning here. Are you trying to prove that the stationary point you just solved for could never be a maximum?

Re: Deriving a Polynomial Regression Model

Thanks for the reply.

**Q1**

Sorry, I don't think I explained this well enough. Using just one of the equations, if we start with:

$$E = \sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right)^2$$

and take the partial derivative with respect to $a$ we get:

$$\frac{\partial E}{\partial a} = -2 \sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right)$$

From $\frac{\partial E}{\partial a} = 0$ we get:

$$-2 \sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right) = 0$$

From here we can now divide both sides by $-2$ (or whatever the power was in the first equation) to get:

$$\sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right) = 0$$

The same can be (and is) done to each of the equations.
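I also checked the square case symbolically (with sympy, which isn't used in the video), just to convince myself that the factor which cancels really is the $-2$:

```python
import sympy as sp

a, b, c, x, y = sp.symbols('a b c x y')

# Residual for a single data point
r = y - (a + b*x + c*x**2)

# Differentiate the squared error with respect to a
dE_da = sp.diff(r**2, a)

# The derivative is exactly -2 times the residual, so the -2
# cancels out when the derivative is set to 0
print(sp.factor(dE_da))
```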

**Q2**

At this point I should probably come clean: I'm not really a mathematician (more of a scientist), so it could be completely wrong. You're nearly right about what I was trying to do, though. I was trying to show that it was a local minimum (and thus actually minimised the squared error) using the second derivative test: Second derivative test - Wikipedia

If it's wrong then I'd really like to know the actual method.

Thanks,

Matthew

Re: Deriving a Polynomial Regression Model

Q1

For a cubic error the equivalent would be:

$$-3 \sum_{i=1}^{n} \left( y_i - (a + b x_i + c x_i^2) \right)^2 = 0$$

This does not have the same solutions as the normal equations in post #1. In fact, it basically says that the sum of squared residuals is 0, which has no solutions unless all data points lie exactly on the fitted curve.
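You can check this symbolically too (with sympy; this is my own check, not anything from the video):

```python
import sympy as sp

a, b, c, x, y = sp.symbols('a b c x y')

# Residual for a single data point
r = y - (a + b*x + c*x**2)

# Differentiating the cubed error with respect to a gives
# -3 times the *squared* residual, not -3 times the residual
dE_da = sp.diff(r**3, a)
print(sp.factor(dE_da))
```

Since each term in the resulting sum is a square, setting the whole thing to 0 forces every residual to be 0, which is why the cubic version is not equivalent to least squares.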

Q2: your first post was right, I didn't read it properly ;)
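For completeness, here's a sketch of the tidy way to see it, in matrix form (this notation is my own shorthand, not from the video):

```latex
% Write the quadratic model as a matrix product: row i of X is
% (1, x_i, x_i^2) and \beta = (a, b, c)^T, so the error is
E(\beta) = \lVert \mathbf{y} - X\beta \rVert^2
% The Hessian (matrix of second partial derivatives) is
\nabla^2 E = 2 X^T X
% which is positive semidefinite, since for any vector v
v^T (2 X^T X) v = 2 \lVert X v \rVert^2 \ge 0
% So E is convex, and any stationary point is a global minimum.
```

That's the multivariate version of the second derivative test you linked: a positive semidefinite Hessian everywhere means the stationary point can't be a maximum or a saddle.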

Re: Deriving a Polynomial Regression Model

Ah, yes. You're right. I feel a little bit stupid, but at least I got the answer.

Thanks. :)