It can be shown that you can use your set of observed values to build a matrix equation, which you can then use to evaluate $m$ and $b$ in your least squares line $y = mx + b$.
Suppose we had collected a set of data, i.e. $(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)$. Obviously it is very unlikely that they will all be collinear, but we could treat them as such if we want a linear regression model of the form $y = mx + b$. When we substitute these observed values, we get a system of equations:
$$\begin{aligned} mx_1 + b &= y_1 \\ mx_2 + b &= y_2 \\ &\;\;\vdots \\ mx_n + b &= y_n \end{aligned}$$
which we can then write in matrix form as
$$\begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad \text{or simply } A\mathbf{v} = \mathbf{y}.
$$
Since the points are not collinear, this system is inconsistent: no single line passes through all of them, even though there are infinitely many lines that fit the data reasonably well. The least squares line is the one that minimizes the sum of squared errors, and we can find it using matrix methods. At the moment we can't solve for $\mathbf{v}$ directly, since $A$ is not square (and only square matrices can have inverses), but we can obtain a square coefficient matrix by premultiplying both sides by the transpose $A^T$ first:
$$A^T A \mathbf{v} = A^T \mathbf{y} \quad\Longrightarrow\quad \mathbf{v} = (A^T A)^{-1} A^T \mathbf{y},$$
provided $A^T A$ is invertible, which holds whenever the $x_i$ are not all equal.
which has enabled us to evaluate the values of $m$ and $b$ in our least squares line model $y = mx + b$.
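As a quick check, the derivation above can be carried out numerically with NumPy. This is a minimal sketch using made-up sample data (the values of `x` and `y` are illustrative, not from the text): it builds the matrix $A$, solves the normal equations $A^T A \mathbf{v} = A^T \mathbf{y}$, and compares the result against NumPy's built-in least squares solver.

```python
import numpy as np

# Hypothetical observed data (x_i, y_i); any non-collinear sample works.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Build A with one row [x_i, 1] per observation.
A = np.column_stack([x, np.ones_like(x)])

# Normal equations: solve (A^T A) v = A^T y for v = [m, b].
v = np.linalg.solve(A.T @ A, A.T @ y)
m, b = v
print(f"least squares line: y = {m:.4f}x + {b:.4f}")
```

In practice `np.linalg.lstsq(A, y, rcond=None)` is preferred over forming $A^T A$ explicitly, since it is numerically more stable, but both give the same line here.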
There you go: a real world example of inconsistent, overdetermined systems, which arise whenever you fit a regression model to real observed data.