I'm trying to show that a problem (a minimal L1-norm interpolation problem) is a linear programming problem. As I understand it, the objective function is something like

argmin_w sum_i |w . x_i - y_i|

where w is the vector of m parameters and (x_i, y_i) are the coordinates of a datum.

which is a (piecewise) linear function, right? But how is there an inequality or other constraint implied here? It seems it's simply a matter of minimising the "error".
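For what it's worth, here is my attempt so far. As far as I can tell, the standard trick is to introduce one auxiliary variable t_i per datum with t_i >= |w . x_i - y_i| (written as two linear inequalities) and then minimise sum_i t_i. A quick sketch of that reformulation using scipy.optimize.linprog, on made-up toy data just to check the idea:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up toy data: n data points x_i in R^m with targets y_i.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [1.0, 0.0]])
y = np.array([3.0, 3.0, 6.5, 1.0])
n, m = X.shape

# Decision variables z = (w_1..w_m, t_1..t_n); objective is sum of t_i.
c = np.concatenate([np.zeros(m), np.ones(n)])

# |X w - y| <= t split into two blocks of linear inequalities:
#   X w - t <= y   and   -X w - t <= -y
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])

# w is free; the slack variables t are nonnegative.
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)] * n)
w = res.x[:m]
print(res.status, res.fun, w)
```

At the optimum each t_i is pushed down onto |w . x_i - y_i|, so the LP value equals the L1 error. Is that the constraint structure the LP formulation is referring to?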

Any comments are much appreciated. Ta, MD