# Thread: Help with non-linear least squares

1. ## Help with non-linear least squares

I'm trying to find least-squares coefficients using the Gauss-Newton method (with Scilab as my numerical analysis software).

Given the following data vectors:

x = [57.9 108.2 149.6 227.9 778.3 1429.4]';
y = [0.2 0.6 1.0 1.9 11.9 29.5]';

(The ' sign means transpose in Scilab notation, so these are column vectors.)

I'm trying to adjust the data to the following equation:

$y = ax^b$
(you will find this to be Kepler's third law of planetary motion)

where $a$ and $b$ are the coefficients to find.

I'm running a Gauss-Newton iteration to find $\theta = (a, b)$, starting from the initial value $\theta_0 = (0.000426,\ 1.53917)$.

So here's the problem. In the first iteration, the Jacobian becomes very badly scaled: its left column is $x^b$, whose entries are several orders of magnitude larger than those of the right column, $a\,x^b\ln(x)$, so I never get to the solution.
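For reference, here's a stripped-down sketch of the kind of loop I'm running (reconstructed for this post, so the names and tolerances are illustrative rather than my exact script):

```scilab
// Gauss-Newton for y = a*x^b -- reconstructed sketch, not the exact script
x = [57.9 108.2 149.6 227.9 778.3 1429.4]';
y = [0.2 0.6 1.0 1.9 11.9 29.5]';

theta = [0.000426; 1.53917];         // initial guess (a, b)
for k = 1:50
    a = theta(1); b = theta(2);
    r = y - a * x.^b;                // residuals
    // Jacobian of the model: columns df/da = x^b and df/db = a*x^b*ln(x)
    J = [x.^b, a * (x.^b) .* log(x)];
    delta = J \ r;                   // least-squares step (backslash uses QR)
    theta = theta + delta;
    if norm(delta) < 1e-10 then break, end
end
disp(theta')
```

Displaying J on the first pass shows the two columns differing by several orders of magnitude, which is exactly the scaling problem.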

I've tried everything from QR decomposition to logarithmic scaling, but nothing seems to work for me.

Any help will be much appreciated.

2. Grinding through a non-linear least squares problem efficiently isn't really a statistical problem per se. That being said, I'm pretty sure you can find this via iteratively reweighted least squares, or else straightforward Newton's method.

If you want to do it by hand, that's all I have to offer. But if my thinking is correct, the least squares solution should be the same as the one you get by fitting a GLM with Gaussian errors and a log link function, which is something I can do quickly, so I did. The solution I got was

$\theta = (0.00056575, 1.49509)$
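If you want to reproduce this without a GLM routine, a rough sketch in Scilab (since that's what you're using) would be Gauss-Newton on the log-link parameterisation $\mu = \exp(\beta_0 + \beta_1 \ln x)$, so that $a = e^{\beta_0}$ and $b = \beta_1$; the reparameterisation also keeps the Jacobian columns comparable in size. Names and tolerances here are illustrative:

```scilab
// Gaussian errors + log link: minimise sum (y - exp(b0 + b1*log(x)))^2
x = [57.9 108.2 149.6 227.9 778.3 1429.4]';
y = [0.2 0.6 1.0 1.9 11.9 29.5]';

X = [ones(x), log(x)];              // design matrix of the linear predictor
beta = [log(0.000426); 1.53917];    // start from the poster's initial guess
for k = 1:100
    mu = exp(X * beta);             // fitted means
    // Jacobian of mu w.r.t. beta is diag(mu)*X; its columns stay comparable
    J = (mu * ones(1, 2)) .* X;
    delta = J \ (y - mu);           // Gauss-Newton step
    beta = beta + delta;
    if norm(delta) < 1e-12 then break, end
end
mprintf("a = %g, b = %g\n", exp(beta(1)), beta(2))
```

Since this minimises the same sum of squares as the original parameterisation, it should land on the solution above.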

3. If you're doing least squares, how about the following. Let $\{x_{j},y_{j}\}$ be your data set, and define

$\displaystyle E:=\sum_{j=1}^{N}(y_{j}-ax_{j}^{b})^{2}.$

We compute the partial derivatives of $E$ with respect to $a$ and $b,$ hoping that we'll be able to set them equal to zero, and thus solve for $a$ and $b.$ We get

$\displaystyle\frac{\partial E}{\partial a}=-2\sum_{j=1}^{N}\left[(y_{j}-ax_{j}^{b})x_{j}^{b}\right],$ and

$\displaystyle\frac{\partial E}{\partial b}=-2a\sum_{j=1}^{N}\left[(y_{j}-ax_{j}^{b})x_{j}^{b}\ln(x_{j})\right].$

Setting

$\displaystyle\frac{\partial E}{\partial a}=0$ yields that

$\displaystyle a=\frac{\sum_{j=1}^{N}y_{j}x_{j}^{b}}{\sum_{j=1}^{N}x_{j}^{2b}}.$

Setting

$\displaystyle\frac{\partial E}{\partial b}=0$ yields that

$\displaystyle\sum_{j=1}^{N}y_{j}x_{j}^{b}\ln(x_{j})-\left[\frac{\sum_{k=1}^{N}y_{k}x_{k}^{b}}{\sum_{k=1}^{N}x_{k}^{2b}}\right]\sum_{j=1}^{N}x_{j}^{2b}\ln(x_{j})=0.$

Solve this second equation numerically for $b$, then plug the result into the equation for $a,$ and you're done.
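Since you're working in Scilab, the whole recipe is only a few lines. A sketch, with fsolve and a starting guess of 1.5 chosen purely for illustration:

```scilab
// Profile out a, then solve dE/db = 0 numerically for b
x = [57.9 108.2 149.6 227.9 778.3 1429.4]';
y = [0.2 0.6 1.0 1.9 11.9 29.5]';

function g = dEdb(b)
    w = x.^b;                        // x and y are read from the calling scope
    a = sum(y .* w) / sum(w.^2);     // optimal a for this b (first equation)
    g = sum(y .* w .* log(x)) - a * sum(w.^2 .* log(x));
endfunction

b = fsolve(1.5, dEdb);               // illustrative starting guess
a = sum(y .* x.^b) / sum(x.^(2*b));
mprintf("a = %g, b = %g\n", a, b)
```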

Caveat: I have no idea how numerically stable your algorithm for solving for $b$ is going to be, but once you have that, computing $a$ should be straightforward.

Does this make sense?

And yes, theodds is right: this question probably belongs in the Analysis, Topology, and Differential Geometry forum. Or you could put it in Advanced Applied Math or Other Advanced Topics.

4. First of all, thanks for your replies.

You may be right about the forum placement. To be honest, I didn't really know where to put the question, so sorry about that.

theodds, the values you came up with seem pretty close to the ones I'm looking for. I'm not really familiar with the method you're mentioning, but I'll see if I can get the same results.

Ackbeet, I followed some pretty similar steps to get to the initial values I mentioned above, but they come out with significant errors; that's why I went for the Gauss-Newton method.

Anyways, thanks a lot for your replies.

Cheers.