# Math Help - Optimisation of continuous function in [0,1]

1. ## Optimisation of continuous function in [0,1]

Hi everyone!

I've been struggling for a while with the following problem. I'd be very grateful if any of you have advice or pointers.

Basically there's a set of variables x1, ..., xn whose domain is [0,1] (endpoints included).

The function to maximise is a sum of products of the variables, where each variable may or may not appear in a given product and the coefficient of each product is 1 or -1.

Ex.

x1*x2 - x1*x3*x4 + x2*x4

More in general, the function is a multilinear polynomial

$f(x_1,\dots,x_n) = \sum_{S \subseteq \{1,\dots,n\}} c_S \prod_{i \in S} x_i$

with every constant $c_S \in \{-1, 0, 1\}$ (a constant of 0 means the product simply doesn't appear).

My intuition is that the maximum of this function is always attained at a corner, e.g. [1,0,0,0,1] or [1,1,1,0,0], but I haven't managed to prove it yet.

Thanks for the help.
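For what it's worth, the conjecture is easy to sanity-check numerically on the example above. This is a minimal brute-force sketch (the function `f` is the example from the post; the sampling is just an illustration, not a proof):

```python
# Brute-force check of the example f(x) = x1*x2 - x1*x3*x4 + x2*x4:
# evaluate f at every corner of [0,1]^4 and at many interior points,
# and confirm no interior point beats the best corner.
import itertools
import random

def f(x1, x2, x3, x4):
    return x1 * x2 - x1 * x3 * x4 + x2 * x4

# All 2^4 = 16 corners.
corner_max = max(f(*c) for c in itertools.product([0, 1], repeat=4))
print("best corner value:", corner_max)  # 2, attained at e.g. (1, 1, 0, 1)

# Random interior points never exceed the corner maximum.
random.seed(0)
interior_max = max(
    f(*(random.random() for _ in range(4))) for _ in range(100_000)
)
print("best of 100000 interior samples:", interior_max)
assert interior_max <= corner_max
```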

2. That's an interesting problem there. At the very least, you can say that the maximum might occur on an edge, and not just a corner. Just take the function $f(x,y)=x.$ Its maximum occurs on the edge $\{1\}\times[0,1].$

Now if a particular constant is zero, then the whole corresponding product might as well not appear in the formula, right? So with that convention, you can restrict your constants to the set $\{1,-1\}.$

If you think about taking partial derivatives, one fact that might be important is this: since every variable appears only linearly in each product, that variable will not appear in its own partial derivative. That implies that the second partial derivative with respect to that variable is zero. I wonder if you could leverage that fact, since the maximum of a continuous function on a closed domain occurs either on the boundary of the domain or at a critical point. The critical points are the points where every partial derivative of the function is zero. So if you could show that the critical points are all on the boundary of the domain, or outside it, you'd be done.

Those are the thoughts I have at present.

3. Thanks a lot for the answer, at least I know I'm not struggling with a trivial problem.

You are right that the maximum can be on an edge, but the actual claim I had in mind is that it is sufficient to consider the corners to find the maximum.

I tried to play with the first derivative, but I still haven't found anything useful. In the cases I considered, the first partial derivative could be zero for 0 < x < 1, but only when it was obvious that x should be 0 or 1 at the maximum (because the variable appears with the same sign everywhere in the function).

The second derivative, as you said, has that interesting property, but in general we cannot say anything about the eigenvalues of the Hessian: anything can happen, even though the main diagonal is all zeros.

Sorry for my imprecision, I'm not really a mathematician.

4. > You are right that the maximum can be on an edge, but the actual claim I had in mind is that it is sufficient to consider the corners to find the maximum.
Ah. So you're not concerned with finding the coordinates of the maximum - you just want to find the maximum value? And you're claiming that looking only at the corners is sufficient to find it? You might well be correct. In fact, I'm convinced you are correct.

Let's take a look at your function again, one variable at a time. If we hold all the other variables constant, where must the critical points be for this variable? There are only two cases: either the variable appears at least once in a nonzero product, or it doesn't. If it does, then by the construction of the function we can factor that variable out of every term in which it appears, and we find that its first partial derivative does not involve that variable at all - so, with the other variables held fixed, the derivative is constant. Ergo, the only candidate locations for the maximum in that variable are the endpoints 0 and 1. If the variable does not occur in any nonzero product, then its first partial derivative is identically zero, and the endpoints are again among the candidates. Now repeat this analysis for all the other variables, and logically AND the results together.

Your result follows, in my opinion, from the fact that you have an affine function (f(x) = mx + b) of each variable, considered separately. An affine function on a closed interval always attains its extrema at one or both endpoints.
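This endpoint argument can be sketched as a rounding procedure. The helper name `round_to_corner` is illustrative, and the only assumption is the one stated in the thread: the function is affine in each coordinate separately, so pushing one coordinate to its better endpoint never decreases the value.

```python
# Since f is affine in each coordinate, fixing all the other coordinates
# and moving one coordinate to whichever endpoint (0 or 1) gives the
# larger value can never decrease f. Doing this once per coordinate
# lands on a corner whose value is at least the starting value.
def round_to_corner(f, x):
    """Move each coordinate of x to 0 or 1 without decreasing f(x).

    Assumes f is affine in each coordinate separately (true for any
    multilinear polynomial such as the one in this thread).
    """
    x = list(x)
    for i in range(len(x)):
        lo = f(*(x[:i] + [0] + x[i + 1:]))
        hi = f(*(x[:i] + [1] + x[i + 1:]))
        x[i] = 1 if hi >= lo else 0
    return x

def f(x1, x2, x3, x4):
    return x1 * x2 - x1 * x3 * x4 + x2 * x4

start = [0.3, 0.8, 0.5, 0.9]
corner = round_to_corner(f, start)
assert all(v in (0, 1) for v in corner)
assert f(*corner) >= f(*start)
```

Applied to the example in the first post, this rounds the interior point to the corner (1, 1, 0, 1), which achieves the global maximum.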

[EDIT] I'm sure you probably had thought of this, but if $N$ is the number of variables that appear in nonzero products, then the number of corners is $2^{N}.$

5. Thanks again Ackbeet, really appreciated.

There's still something missing.
When you consider a variable x, it's correct to say that, for fixed values of the other variables, its first derivative is constant. The problem is that this constant depends on the other variables, so there could be a point in the space where the constant is zero.
But right now, while writing this, I think I have a good idea for an inductive proof on the number of variables!

Yes, I know the number of points to consider is huge, but the result is useful anyway.

6. Sure. I'd be curious to see your proof. Could you please post it either as you go, or when it's finished? Thanks!

7. The idea of using induction was elegant, since the partial derivative can reuse the properties of the (n-1)-variable case, but it didn't work. So actually you were much closer. I managed to write a proof (I still have to double-check it) based on the following fact: if at a maximum or minimum the partial derivative with respect to some variable x is zero, then, since that derivative doesn't involve x, the function doesn't change value with x at all, so we can always achieve the same value with x in {0,1}.
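The claim itself can also be stress-tested on random instances. A sketch, assuming the general form discussed above (coefficients drawn from {-1, 0, 1}, one per subset of variables); all the helper names here are illustrative:

```python
# Randomized sanity check of the claim: for random multilinear polynomials
# with coefficients in {-1, 0, 1}, the corner maximum is never beaten by
# any sampled interior point.
import itertools
import random

random.seed(1)
n = 5
variables = list(range(n))

def random_polynomial():
    """Return f as a list of (coefficient, subset-of-variable-indices) terms."""
    terms = []
    for subset_size in range(1, n + 1):
        for subset in itertools.combinations(variables, subset_size):
            coeff = random.choice([-1, 0, 1])
            if coeff:
                terms.append((coeff, subset))
    return terms

def evaluate(terms, x):
    total = 0.0
    for coeff, subset in terms:
        prod = coeff
        for i in subset:
            prod *= x[i]
        total += prod
    return total

for trial in range(20):
    terms = random_polynomial()
    corner_max = max(
        evaluate(terms, c) for c in itertools.product([0, 1], repeat=n)
    )
    interior_max = max(
        evaluate(terms, [random.random() for _ in range(n)])
        for _ in range(2000)
    )
    assert interior_max <= corner_max + 1e-12
```

Of course this is only evidence, not a proof - but it agrees with the endpoint argument on every instance tried.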

Thanks again for your time and the support!

8. You're welcome. Have a good one, and please be sure to post interesting problems like this again!