
- Sep 7th 2009, 01:16 AM, J.J.: Cubic Formula
I'm writing a program that uses a cubic curve, and I'm trying to find the 4 formulae to give me a, b, c and d for ax^3 + bx^2 + cx + d = 0 for a set of data points x and y. Any help would be appreciated,

Thanks,

J.J.

- Sep 7th 2009, 05:33 AM, CaptainBlack
Let's suppose that what you really mean is that you have a set of data points that you want to fit to a cubic:

$\displaystyle y=ax^3 + bx^2 + cx + d $

If there are exactly four points, then plugging them into the above gives a set of four simultaneous linear equations for $\displaystyle a,\ b,\ c,\ d$.

If there are more than four points, then look up linear regression (least squares).
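For the exactly-four-points case, here is a minimal sketch in Python (my own illustration; the function name is made up) that builds the four simultaneous equations and solves them by Gaussian elimination, so no matrix library is needed:

```python
# Exact cubic through four points: each point gives one equation
# a*x^3 + b*x^2 + c*x + d = y; solve the resulting 4x4 linear system
# by Gaussian elimination with partial pivoting.

def fit_cubic_exact(pts):
    """pts: list of four (x, y) pairs; returns (a, b, c, d)."""
    # Augmented matrix [x^3, x^2, x, 1 | y], one row per point.
    m = [[x**3, x**2, x, 1.0, y] for x, y in pts]
    n = 4
    for col in range(n):
        # Partial pivoting: bring the largest pivot into place.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    # Back substitution.
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = m[r][n] - sum(m[r][k] * coef[k] for k in range(r + 1, n))
        coef[r] = s / m[r][r]
    return tuple(coef)  # (a, b, c, d)
```

For example, four points taken from y = 2x^3 - x + 5 recover the coefficients (2, 0, -1, 5). Note this only works when the four x values are distinct; otherwise the system is singular.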

CB

- Sep 7th 2009, 05:39 AM, J.J.: Cubic Formula
What I'm trying to find are formulae like these ones:

(These are the formulae that give a, b and c for ax^2 + bx + c = 0, given a set of x and y data points.)
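For reference, a direct transcription of these closed forms into Python (my own sketch for checking them numerically; the function name is made up):

```python
# Least-squares quadratic fit y = a*x^2 + b*x + c, using the
# closed-form coefficient formulae (Cramer's rule applied to the
# 3x3 normal equations), transcribed term by term.

def fit_quadratic(xs, ys):
    """Return (a, b, c) for the least-squares fit y = a*x^2 + b*x + c."""
    n = len(xs)
    sx = sum(xs)
    sx2 = sum(x**2 for x in xs)
    sx3 = sum(x**3 for x in xs)
    sx4 = sum(x**4 for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    # Shared denominator: determinant of the normal-equation matrix.
    den = n*sx2*sx4 - sx**2*sx4 - n*sx3**2 + 2*sx*sx2*sx3 - sx2**3
    a = (sy*sx*sx3 - sxy*n*sx3 - sy*sx2**2
         + sxy*sx*sx2 + sx2y*n*sx2 - sx2y*sx**2) / den
    b = (sxy*n*sx4 - sy*sx*sx4 + sy*sx2*sx3
         - sx2y*n*sx3 - sxy*sx2**2 + sx2y*sx*sx2) / den
    c = (sy*sx2*sx4 - sxy*sx*sx4 - sy*sx3**2
         + sxy*sx2*sx3 + sx2y*sx*sx3 - sx2y*sx2**2) / den
    return a, b, c
```

Feeding it points that lie exactly on y = 3x^2 - x + 2 recovers (3, -1, 2), which is a quick sanity check on the formulae.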

a = (Σy*Σx*Σx^3 - Σxy*n*Σx^3 - Σy*(Σx^2)^2 + Σxy*Σx*Σx^2 + Σx^2y*n*Σx^2 - Σx^2y*(Σx)^2)

/ (n*Σx^2*Σx^4 - (Σx)^2*Σx^4 - n*(Σx^3)^2 + 2*Σx*Σx^2*Σx^3 - (Σx^2)^3)

b = (Σxy*n*Σx^4 - Σy*Σx*Σx^4 + Σy*Σx^2*Σx^3 - Σx^2y*n*Σx^3 - Σxy*(Σx^2)^2 + Σx^2y*Σx*Σx^2)

/ (n*Σx^2*Σx^4 - (Σx)^2*Σx^4 - n*(Σx^3)^2 + 2*Σx*Σx^2*Σx^3 - (Σx^2)^3)

c = (Σy*Σx^2*Σx^4 - Σxy*Σx*Σx^4 - Σy*(Σx^3)^2 + Σxy*Σx^2*Σx^3 + Σx^2y*Σx*Σx^3 - Σx^2y*(Σx^2)^2)

/ (n*Σx^2*Σx^4 - (Σx)^2*Σx^4 - n*(Σx^3)^2 + 2*Σx*Σx^2*Σx^3 - (Σx^2)^3)

- Sep 7th 2009, 07:29 AM, HallsofIvy
Apparently, you are looking for the "least squares" cubic that goes "closest" to those points.

Think of it this way: you are looking for a, b, c, d such that all of the points $\displaystyle (x_1, y_1), (x_2, y_2), \cdot\cdot\cdot, (x_n, y_n)$ satisfy

$\displaystyle y_1= ax_1^3+ bx_1^2+ cx_1+ d$,

$\displaystyle y_2= ax_2^3+ bx_2^2+ cx_2+ d$

$\displaystyle \cdot\cdot\cdot$

$\displaystyle y_n= ax_n^3+ bx_n^2+ cx_n+ d$

We can write that as the matrix equation, Ax= b:

$\displaystyle \begin{bmatrix}x_1^3 & x_1^2 & x_1 & 1 \\ x_2^3 & x_2^2 & x_2 & 1 \\\cdot & \cdot & \cdot & \cdot \\\cdot & \cdot & \cdot & \cdot \\\cdot & \cdot & \cdot & \cdot \\x_n^3 & x_n^2 & x_n & 1\end{bmatrix}$$\displaystyle \begin{bmatrix}a \\ b \\ c \\ d\end{bmatrix}= \begin{bmatrix} y_1 \\ y_2 \\ \cdot\\\cdot\\\cdot\\ y_n\end{bmatrix}$.

Now A is a linear transformation from $\displaystyle R^4$ to $\displaystyle R^n$ and so maps $\displaystyle R^4$ into a four-dimensional **subspace** of $\displaystyle R^n$. That equation has a solution only if b happens to lie in that subspace (that is, if the points happen to lie exactly on the graph of a cubic). If not, then the best we can do is to find the x in $\displaystyle R^4$ for which Ax is **closest** to b. For such an x, the vector from Ax to b, namely Ax - b, must be perpendicular to that subspace and so perpendicular to every vector in it. Since every vector in that subspace can be written as Av for some v in $\displaystyle R^4$, we must have <Av, Ax - b> = 0, where "< , >" is the inner (dot) product in $\displaystyle R^n$.

Now, some theory. If A is a linear transformation from vector space U to vector space V, then its "adjoint", $\displaystyle A^*$, is the linear transformation from V back to U such that, for all u in U and v in V, <Au, v> = <u, A*v>, where the inner product on the left is taken in V and the one on the right in U. Here, <Av, Ax - b> = <v, A*(Ax - b)> = 0. That inner product on the right is in $\displaystyle R^4$, and v can be **any** vector in $\displaystyle R^4$. Since the inner product of A*(Ax - b) with any vector in $\displaystyle R^4$ (in particular with itself) is 0, we must have A*(Ax - b) = A*Ax - A*b = 0, i.e. A*Ax = A*b, and so, if the inverse exists, $\displaystyle x= (A^*A)^{-1}A^*b$.

$\displaystyle (A^*A)^{-1}A^*$ (if it exists) is the "generalized inverse" of A. It can be shown that if A has an inverse, so does $\displaystyle A^*$, and $\displaystyle (A^*A)^{-1}= A^{-1}(A^*)^{-1}$, so that $\displaystyle (A^*A)^{-1}A^*= A^{-1}(A^*)^{-1}A^*= A^{-1}$. But even if A does not have an inverse (for instance when it is not square, as in this case), $\displaystyle (A^*A)^{-1}$ may still exist.

Here, $\displaystyle A= \begin{bmatrix}x_1^3 & x_1^2 & x_1 & 1 \\ x_2^3 & x_2^2 & x_2 & 1 \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ x_n^3 & x_n^2 & x_n & 1\end{bmatrix}$ so $\displaystyle A^*= \begin{bmatrix}x_1^3 & x_2^3 & \cdots & x_n^3 \\ x_1^2 & x_2^2 & \cdots & x_n^2 \\ x_1 & x_2 & \cdots & x_n \\ 1 & 1 & \cdots & 1 \end{bmatrix}$

and that $\displaystyle A^*A= \begin{bmatrix}\sum x_i^6 & \sum x_i^5 & \sum x_i^4 & \sum x_i^3 \\ \sum x_i^5 & \sum x_i^4 & \sum x_i^3 & \sum x_i^2 \\ \sum x_i^4 & \sum x_i^3 & \sum x_i^2 & \sum x_i \\ \sum x_i^3 & \sum x_i^2 & \sum x_i & n\end{bmatrix}$, where each sum runs over the data points $\displaystyle i= 1$ to $\displaystyle n$.

Also $\displaystyle A^*b= \begin{bmatrix}\sum x_i^3y_i \\ \sum x_i^2y_i \\ \sum x_iy_i \\ \sum y_i\end{bmatrix}$
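In a program, those entries are just running sums, so instead of inverting the matrix explicitly one can solve the 4x4 system directly. A minimal sketch in Python (my own illustration; the function name is made up):

```python
# Least-squares cubic fit via the normal equations A*A x = A*b:
# accumulate the power sums, then solve the 4x4 system by plain
# Gaussian elimination (no matrix library, no explicit inverse).

def fit_cubic_lsq(xs, ys):
    """Return (a, b, c, d) minimising sum of (a*x^3 + b*x^2 + c*x + d - y)^2."""
    sx = [sum(x**k for x in xs) for k in range(7)]   # sx[k] = sum of x_i^k; sx[0] = n
    sxy = [sum(x**k * y for x, y in zip(xs, ys)) for k in range(4)]  # sum of x_i^k * y_i
    # Row i of the augmented system: entries sum x^(6-i-j), RHS sum x^(3-i) y.
    m = [[sx[6 - i - j] for j in range(4)] + [sxy[3 - i]] for i in range(4)]
    for col in range(4):                              # forward elimination, partial pivoting
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 4):
            f = m[r][col] / m[col][col]
            for k in range(col, 5):
                m[r][k] -= f * m[col][k]
    coef = [0.0] * 4                                  # back substitution
    for r in range(3, -1, -1):
        s = m[r][4] - sum(m[r][k] * coef[k] for k in range(r + 1, 4))
        coef[r] = s / m[r][r]
    return tuple(coef)  # (a, b, c, d)
```

Six points taken exactly from y = x^3 - 2x^2 + 3, say, recover the coefficients (1, -2, 0, 3). Be aware that these power-sum normal equations can be numerically ill-conditioned for large or widely spread x values.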

Now, the tedious part is to find the inverse of that 4 by 4 matrix. That's going to involve some really messy formulas that I don't want to have to calculate! But once you have done that, a, b, c, and d are given by the components of $\displaystyle [(A^*A)^{-1}][A^* b]$.

- Sep 8th 2009, 12:39 AM, J.J.: Cubic Formula
The problem with matrices is that I'm going to be writing the formulae into a computer program, which can't solve matrices, which is why I was looking for formulae like the ones above. Even a link to something that could help would be appreciated.

Thanks,

J.J.

- Sep 8th 2009, 09:29 AM, CaptainBlack
- Sep 9th 2009, 12:43 AM, J.J.: Cubic Formula
To be honest, I don't know a lot about matrices, and it's been a while since I did anything involving them.

- Sep 9th 2009, 06:36 AM, J.J.
Say there are 4 x and y values called x1, x2, x3 and x4. For the first term of the A*A matrix, is that (x1^6 + x2^6 + x3^6 + x4^6) or

(x1 + x2 + x3 + x4)^6? Similarly, for the first term of the A*b matrix, is that (x1^3 + x2^3 + x3^3 + x4^3)*(y1 + y2 + y3 + y4), or ((x1 + x2 + x3 + x4)^3)*(y1 + y2 + y3 + y4), or (x1^3*y1 + x2^3*y2 + x3^3*y3 + x4^3*y4)?

(It's been a while since I did maths using notation like this)
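For what it's worth, each sum in those matrices runs over the data points, so Σx^6 means the sum of sixth powers and Σx^3y pairs each x with its own y. A quick sketch in Python with made-up sample values:

```python
# Illustrating the sigma notation with four sample points:
# every sum is taken over the data points, term by term.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

sum_x6 = sum(x**6 for x in xs)        # sum of x_i^6: first entry of A*A
not_this = sum(xs)**6                  # (sum of x_i)^6: NOT what A*A uses
sum_x3y = sum(x**3 * y for x, y in zip(xs, ys))  # sum of x_i^3 * y_i: first entry of A*b
```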