Well, one thing you could do is write this in terms of unit step functions. Define

u(x) = 1 if x > 0, and u(x) = 0 otherwise.

Then you have for your sum the following:

sum_i f(x_i, y_i) = sum_i u(x_i - y_i).
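In code, that count could be sketched like this (this is my reading of the setup; the function names are mine, not from the OP):

```python
def u(x):
    # unit step: 1 when x > 0, else 0
    return 1 if x > 0 else 0

def count_positive(a, b):
    # number of indices i with a_i - b_i > 0
    return sum(u(ai - bi) for ai, bi in zip(a, b))

print(count_positive([5, 1, 2], [3, 0, 2]))  # 2: 5>3 and 1>0, but not 2>2
```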
But this may be trading one unusable thing for another. What exactly do you mean by "linear representative"?
In your last edit of the OP, I think you meant to have a comma between the x_i and the y_i inside the argument of f: f is defined to be a function of two arguments.
I do not think your function can be represented by a linear expression. The problem is that f is itself nonlinear. Proof that f is nonlinear:

Let x = 5, and let y = 3. (Prime numbers are great for counter-examples!) For your function to be linear in x, you'd have to have f(cx, y) = c f(x, y) for all scalars c, and f(x + z, y) = f(x, y) + f(z, y) for all x and z. So, check it out: f(5, 3) = u(5 - 3) = 1, but f(2·5, 3) = u(10 - 3) = 1, not 2·f(5, 3) = 2. Homogeneity fails, so f is not linear.
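As a quick sanity check of that counter-example, here's a tiny script (assuming, as I read the thread, that f(x, y) is the unit step of x - y; that reading is mine):

```python
def u(x):
    # unit step: 1 when x > 0, else 0
    return 1 if x > 0 else 0

def f(x, y):
    # two-argument step function, my reading of the OP's f
    return u(x - y)

# homogeneity f(c*x, y) == c*f(x, y) fails for c = 2, x = 5, y = 3:
print(f(5, 3))      # 1
print(f(2 * 5, 3))  # 1, not 2 * f(5, 3) == 2
```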
For this reason, I think you're barking up the wrong tree. You said in a PM that this question is a crucial part of your current work. If you could perhaps give me more context, I might be able to help you find a different way around this problem.
The detailed context is like this:
Given two vectors of variables A and B, and linear constraints on them
(such as a3 + b4 > 0 or a1 + a2 = 1; the number of constraints can be large), what are solutions for A and B
such that the *number* of pairwise differences that are positive (a_i - b_i > 0) is maximized? In other words, what solutions maximize the number of the statements "a_i - b_i > 0" that are true?
Thank you!
The number of elements in A and B can be more than 100, and the number of constraints can be 200 or more. The solution assigns a value to each element of A and B so as to maximize the number of "a_i - b_i > 0" that hold.
For example, there are a1, a2, a3 in vector A and b1, b2, b3 in vector B.
The constraints are:
a1 - b1> a2 - b2;
a2 = 1;
a2 -b2 > a3 - b3;
b3 + a2 > 0;
One of the solutions for our problem can be: a1 = 5, b1 = 3, a2 = 1, b2 = 0, a3 = 0.5, b3 = 0,
because this makes a1 - b1 > 0, a2 - b2 > 0, and a3 - b3 > 0. If we cannot find a solution that makes all of the "ai - bi" greater than zero, the solution should be the one that makes as many of the "ai - bi" greater than zero as possible. Is there any existing theory that discusses this?
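To double-check a candidate solution, here's a quick script verifying it against the four constraints and counting the strictly positive differences (note that the inequalities are strict, so a3 must strictly exceed b3; I take a3 = 0.5 here):

```python
# Feasibility check for the example solution from the thread.
a1, b1 = 5, 3
a2, b2 = 1, 0
a3, b3 = 0.5, 0

constraints = [
    a1 - b1 > a2 - b2,  # a1 - b1 > a2 - b2
    a2 == 1,            # a2 = 1
    a2 - b2 > a3 - b3,  # a2 - b2 > a3 - b3
    b3 + a2 > 0,        # b3 + a2 > 0
]
positives = sum(d > 0 for d in (a1 - b1, a2 - b2, a3 - b3))
print(all(constraints), positives)  # True 3
```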
This problem simply isn't in the realm of linear programming: you're trying to maximize a nonlinear function subject to certain constraints. I'd recommend the book How to Solve It: Modern Heuristics, by Michalewicz and Fogel. You might be best off with an evolutionary algorithm or an annealing algorithm. With a plain hill-climbing algorithm, I can imagine you're going to run into local maxima and get stuck, so you're going to need ways of escaping local maxima.
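The annealing idea could look something like the sketch below. This is a rough illustration under my own assumptions, not a tuned implementation: handling the constraints with a soft penalty term and the particular penalty weight are my choices, and the four example constraints are the ones from this thread.

```python
import math
import random

def objective(a, b):
    # number of strictly positive differences a_i - b_i
    return sum(1 for ai, bi in zip(a, b) if ai - bi > 0)

def penalty(a, b):
    # total violation of the thread's example constraints (0 when all hold)
    v = 0.0
    v += max(0.0, (a[1] - b[1]) - (a[0] - b[0]))  # a1 - b1 > a2 - b2
    v += abs(a[1] - 1.0)                          # a2 = 1
    v += max(0.0, (a[2] - b[2]) - (a[1] - b[1]))  # a2 - b2 > a3 - b3
    v += max(0.0, -(b[2] + a[1]))                 # b3 + a2 > 0
    return v

def score(a, b):
    # penalized objective; the weight 10 is an arbitrary choice
    return objective(a, b) - 10.0 * penalty(a, b)

def anneal(n=3, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    a = [rng.uniform(-5, 5) for _ in range(n)]
    b = [rng.uniform(-5, 5) for _ in range(n)]
    best = (score(a, b), a[:], b[:])
    cur = best[0]
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9     # cooling schedule
        i = rng.randrange(2 * n)              # pick one coordinate
        vec, j = (a, i) if i < n else (b, i - n)
        old = vec[j]
        vec[j] += rng.gauss(0.0, 0.5)         # random perturbation
        s = score(a, b)
        if s >= cur or rng.random() < math.exp((s - cur) / t):
            cur = s                           # accept, possibly downhill
            if s > best[0]:
                best = (s, a[:], b[:])
        else:
            vec[j] = old                      # revert the move
    return best

best_score, best_a, best_b = anneal()
print(best_score)
```

The occasional downhill acceptance is exactly what lets the search escape local maxima that would trap a pure hill climber.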
You could also hand your optimization problem off to gnumeric. Here's a thread talking about that.