# Gaussian Elimination

• Jun 8th 2012, 03:58 AM
Euclid
Gaussian Elimination
I would normally use Gaussian Elimination to solve a linear system. If we have more unknowns than equations we end up with an infinite number of solutions. Are there any real life applications of these infinite solutions? I can think of solving puzzles like Sudoku but are there others?
• Jun 8th 2012, 04:22 AM
Prove It
Re: Gaussian Elimination
Quote:

Originally Posted by Euclid
I would normally use Gaussian Elimination to solve a linear system. If we have more unknowns than equations we end up with an infinite number of solutions. Are there any real life applications of these infinite solutions? I can think of solving puzzles like Sudoku but are there others?

Well, suppose you collected some data and wanted to find the line of best fit, so that you could predict values of the Dependent Variable (DV) for any value of the Independent Variable (IV), provided that value of the IV lies within the range of observed IV values. The line that best fits the data is the "Least Squares line": the line for which the sum of the squared deviations (the differences between the observed values of the DV and the predicted values of the DV) is at a minimum.

It can be shown that you can use your set of observed values to create a matrix equation, which you can then use to evaluate \displaystyle \begin{align*} c_0 \end{align*} and \displaystyle \begin{align*} c_1 \end{align*} in your Least Squares line \displaystyle \begin{align*} y = c_0 + c_1x \end{align*}.

Suppose we had collected a set of data, i.e. \displaystyle \begin{align*} \left\{ (x_1, y_1), (x_2, y_2), (x_3, y_3), \dots , (x_n, y_n) \right\} \end{align*}. Obviously it is very unlikely that they will all be collinear, but we could treat them as such if we want a linear regression model of the form \displaystyle \begin{align*} y = c_0 + c_1x \end{align*}. When we substitute these observed values, we get a system of equations:

\displaystyle \begin{align*} y_1 &= c_0 + c_1x_1 \\ y_2 &= c_0 + c_1x_2 \\ y_3 &= c_0 + c_1x_3 \\ \vdots \\ y_n &= c_0 + c_1x_n \end{align*}

which we can then write in matrix form as

\displaystyle \begin{align*} \left[\begin{matrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{matrix}\right] &= \left[\begin{matrix}1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ \vdots & \vdots \\ 1 & x_n \end{matrix}\right]\left[\begin{matrix} c_0 \\ c_1 \end{matrix}\right] \\ \mathbf{Y} &= \mathbf{X}\mathbf{c} \end{align*}

With more data points than unknowns (and data that aren't perfectly collinear), this system has no exact solution, yet there are infinitely many lines that fit the data reasonably well. The least squares line is the one obtained by evaluating \displaystyle \begin{align*} \mathbf{c} \end{align*} using matrix methods. At the moment, we can't solve for \displaystyle \begin{align*} \mathbf{c} \end{align*}, since the matrix \displaystyle \begin{align*} \mathbf{X} \end{align*} is not square (and only square matrices have inverses), but we can obtain a square coefficient matrix by premultiplying both sides by the transpose first.

\displaystyle \begin{align*} \mathbf{Y} &= \mathbf{X}\mathbf{c} \\ \mathbf{X}^T\mathbf{Y} &= \mathbf{X}^T \mathbf{X}\mathbf{c} \\ \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{Y} &= \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{X}\mathbf{c} \textrm{ provided } \mathbf{X}^T\mathbf{X}\textrm{ is invertible} \\ \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{Y} &= \mathbf{I}\mathbf{c} \\ \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{Y} &= \mathbf{c} \end{align*}

which has enabled us to evaluate the values of \displaystyle \begin{align*} c_0 \end{align*} and \displaystyle \begin{align*} c_1 \end{align*} in our least squares line model \displaystyle \begin{align*} y = c_0 + c_1x \end{align*}.
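As a concrete numerical sketch of the derivation above (the data points here are made up purely for illustration), the normal equations \mathbf{X}^T\mathbf{X}\mathbf{c} = \mathbf{X}^T\mathbf{Y} can be solved directly with numpy:

```python
import numpy as np

# Hypothetical observed data (x, y) pairs -- any small data set works.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Design matrix X: a column of ones (for c0) and a column of x values (for c1).
X = np.column_stack([np.ones_like(x), x])

# Solve the normal equations (X^T X) c = X^T Y for c = [c0, c1].
c = np.linalg.solve(X.T @ X, X.T @ y)
c0, c1 = c
print(c0, c1)  # intercept and slope of the least squares line
```

In practice `np.linalg.lstsq(X, y)` does the same job more robustly (it avoids forming \mathbf{X}^T\mathbf{X} explicitly), but the code above mirrors the derivation step by step.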

There you go: a real-world place where systems like these arise is fitting regression models to real observed data.
• Jun 8th 2012, 08:07 AM
Deveno
Re: Gaussian Elimination
even more generally, we can have an underdetermined system of linear equations (we know some stuff, but we don't know everything). so we first solve the related homogeneous problem (where our "b" in the matrix equation Ax = b is 0). that is, we find the null space of the matrix A.

then, to recover every possible solution of Ax = b, we can find just ONE solution (it doesn't matter which one), say x0, and write the general solution as:

x0 + v, where v is in the null space of A.
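as a numerical sketch of this recipe (the system here is made up for illustration: 2 equations, 3 unknowns), numpy can produce one particular solution and a null-space basis, and then the general solution is the particular solution plus any null-space vector:

```python
import numpy as np

# Hypothetical underdetermined system: 2 equations in 3 unknowns.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 1.0])

# One particular solution x0 (the minimum-norm one, via the pseudoinverse).
x0 = np.linalg.pinv(A) @ b

# Null space of A via the SVD: the rows of Vt beyond the rank span it.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T            # columns of N form a basis of the null space

# The general solution is x0 + (any linear combination of null-space vectors).
t = 2.5                    # arbitrary parameter
x = x0 + t * N[:, 0]
print(np.allclose(A @ x, b))  # prints True: still solves the system
```

here the null space is one-dimensional (rank 2, three unknowns), so the solution set is a line in R^3, parametrised by t, exactly as in the polynomial example below.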

here is a specific example:

let P2[x] be the vector space of all polynomials in x with real coefficients, with degree 2 or less.

let D be the differentiation operator (so D(p(x)) = p'(x)). suppose we want to find all polynomials p(x) in P2[x] such that p'(x) = g(x), for some given g(x) of degree at most 1 (so that an antiderivative of g stays in P2[x]).

first, we find all polynomials with p'(x) = 0, which are the constant polynomials, p(x) = C.

then we find some particular polynomial with p'(x) = g(x), which we can do by integrating g(x). suppose f(x) = ∫g(t)dt (where we integrate from 0 to x, let's say).

our solution set then consists of {f(x) + C: C in R}, which is a line in P2[x] (i bet you didn't know you were actually doing some linear algebra in calculus, hmm?).
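this example can be made completely concrete by representing each polynomial a0 + a1*x + a2*x^2 by its coefficient vector [a0, a1, a2], so that D becomes a 3x3 matrix (the choice of g below is made up for illustration):

```python
import numpy as np

# Differentiation on P2[x] as a matrix acting on coefficient vectors:
# D(a0 + a1*x + a2*x^2) = a1 + 2*a2*x.
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# Solve D p = g for g(x) = 3 + 4x (degree <= 1, so a solution exists in P2[x]).
g = np.array([3.0, 4.0, 0.0])

# Particular solution: integrate g from 0 to x, giving f(x) = 3x + 2x^2.
f = np.array([0.0, 3.0, 2.0])

# Null space of D: the constant polynomials [C, 0, 0].
# General solution: f + C*[1, 0, 0] for any real C -- a line in P2[x].
C = 7.0
p = f + C * np.array([1.0, 0.0, 0.0])
print(np.allclose(D @ p, g))  # prints True for every choice of C
```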

this technique, of first finding the (usually easier to find) homogeneous solution, then finding a specific solution to get the general solution, is used extensively in solving differential equations (the example i gave is a very simple differential equation: y' = g(x), where i used polynomials just to keep it simple).

in other words, even if you can't find a "unique" solution, you can still determine the "parameters" any solution depends on. typically, this reduces the number of variables you're dealing with, and so still represents progress (for example, it tells you how many "initial conditions" you may need to specify to get uniqueness).