Inclusion-Exclusion - Maximizing Probabilities

Let $A_1, A_2, A_3$ be three independent events, where $P(A_i) = p_i$. Also, $p_1 + p_2 + p_3 = 1$. Using the inclusion-exclusion principle,

$f(p_1, p_2, p_3) = P(A_1 \cup A_2 \cup A_3) = p_1 + p_2 + p_3 - p_1 p_2 - p_1 p_3 - p_2 p_3 + p_1 p_2 p_3.$

Thus, I would like to prove that $\max f(p_1, p_2, p_3)$ subject to $p_1 + p_2 + p_3 = 1$ has solution $p_1 = p_2 = p_3 = 1/3$. In order to do this I used Lagrange multipliers, and after a few manipulations of the equations, I came up with this answer.

**THE QUESTION IS**: how can I generalize this for $n$ events? More specifically, I want to prove that $\max f(p_1, \dots, p_n)$ subject to $\sum_{i=1}^{n} p_i = 1$ has solution $p_1 = \dots = p_n = 1/n$.

**PS**: Some directions for solving this problem would be useful too (or a reference where it has already been solved).
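For concreteness, the three-event objective is easy to check numerically. A minimal sketch (the function name `f3` is mine, not from the thread), evaluating the inclusion-exclusion expansion at the candidate point $(1/3, 1/3, 1/3)$:

```python
# Inclusion-exclusion objective for three independent events:
# f(p1, p2, p3) = P(A1 u A2 u A3)
def f3(p1, p2, p3):
    return (p1 + p2 + p3
            - p1*p2 - p1*p3 - p2*p3
            + p1*p2*p3)

# At the candidate point (1/3, 1/3, 1/3) the value is 19/27,
# which agrees with the closed form 1 - (2/3)**3 = 1 - prod(1 - p_i)
assert abs(f3(1/3, 1/3, 1/3) - (1 - (2/3)**3)) < 1e-12
```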

Re: Inclusion-Exclusion - Maximizing Probabilities

Hey andre.vignatti.

Is there a way to get a factorization of this function with respect to a particular variable?

What I mean by this is basically not to get a unique global factorization where you have products of terms, but if you can have a function to generate a factorization based on a particular variable (like factorizing with respect to p1 for example)?

The reason why I suggest this is because I suspect that factorizing with respect to a specific variable should always give the same form regardless of the variable.

As an example with three probabilities you have p1 + p2 + p3 - p1p2 - p1p3 - p2p3 + p1p2p3 gives with respect to the three variables:

p1(1 - p2 - p3 + p2p3) + p2 + p3 - p2p3

p2(1 - p1 - p3 + p1p3) + p1 + p3 - p1p3

p3(1 - p1 - p2 + p1p2) + p1 + p2 - p1p2

This helps immensely in the Lagrange multiplier process when it comes to doing derivations for n-variables. If you can show that the form is the same for every kind of factorization, then when you use the Lagrange technique, you should get a bunch of these terms that are all similar.
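For what it's worth, the suspicion that every per-variable factorization has the same form can be verified numerically for independent events, where $f = 1 - \prod_l (1 - p_l)$: factorizing with respect to $p_k$ always gives $f = p_k \cdot g + h$ with $g = \prod_{l \ne k}(1 - p_l)$ and $h = 1 - g$. A sketch (function names mine):

```python
from math import prod
from random import Random

def f(ps):
    # P(union of independent events) = 1 - prod(1 - p_l)
    return 1 - prod(1 - p for p in ps)

rng = Random(0)
ps = [rng.random() for _ in range(5)]  # arbitrary test point, n = 5

for k in range(len(ps)):
    others = ps[:k] + ps[k+1:]
    g = prod(1 - p for p in others)   # coefficient of p_k
    h = 1 - g                         # remainder, free of p_k
    assert abs(f(ps) - (ps[k]*g + h)) < 1e-12
```

So the factorization has exactly the same shape regardless of which variable is pulled out, as suspected.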

Re: Inclusion-Exclusion - Maximizing Probabilities

Hello Chiro! Thanks for the answer. In fact, I already tried some factorizations like you suggest. I wonder if the factorization you proposed is really useful for solving the three-probabilities case... I could not understand how, using the proposed factorization, I could obtain that $p_1 = p_2 = p_3 = 1/3$.

So, let me explain how I obtained the three-probabilities answer, and then I will ask an easier question that can actually solve my problem.

====================================

We want to maximize

$f(p_1, p_2, p_3) = p_1 + p_2 + p_3 - p_1 p_2 - p_1 p_3 - p_2 p_3 + p_1 p_2 p_3$

subject to

$p_1 + p_2 + p_3 = 1.$

Using the Lagrange multipliers, we get $\nabla f = \lambda \nabla (p_1 + p_2 + p_3)$.

So we get the following system (*****):

$1 - p_2 - p_3 + p_2 p_3 = \lambda$
$1 - p_1 - p_3 + p_1 p_3 = \lambda$
$1 - p_1 - p_2 + p_1 p_2 = \lambda$
$p_1 + p_2 + p_3 = 1$

Using the fact that $p_1 + p_2 + p_3 = 1$ in the top three rows, we get:

$p_1 + p_2 p_3 = \lambda$
$p_2 + p_1 p_3 = \lambda$
$p_3 + p_1 p_2 = \lambda$

Multiplying by $p_i$ on the $i$-th row:

$p_1^2 + p_1 p_2 p_3 = \lambda p_1$
$p_2^2 + p_1 p_2 p_3 = \lambda p_2$
$p_3^2 + p_1 p_2 p_3 = \lambda p_3$

Isolating $p_1 p_2 p_3$, we conclude that (**) $\lambda p_1 - p_1^2 = \lambda p_2 - p_2^2 = \lambda p_3 - p_3^2$. But $\lambda p_i - p_i^2 = p_i(\lambda - p_i).$

So, for each pair $i \ne j$ of the equalities (**), we have

$p_i(\lambda - p_i) = p_j(\lambda - p_j) \iff (p_i - p_j)(\lambda - p_i - p_j) = 0.$

Combining these rows, THE ONLY SOLUTION IS $p_1 = p_2 = p_3$, and because $p_1 + p_2 + p_3 = 1$, we conclude that $p_1 = p_2 = p_3 = 1/3$.

====================================

In the first system obtained by the Lagrange multipliers (see *****), one possible solution for the system is setting $p_1 = p_2 = p_3 = 1/3$. This clearly solves the system, as each of the top three rows must be equal to the others (except for the last row, the constraint, which is satisfied). Therefore, there is no reason to carry the calculations to the end; the only reason would be to conclude that the system has a unique solution.

So, in my attempt to generalize the three-probabilities case to $n$ probabilities, I think I could easily argue that $p_1 = \dots = p_n = 1/n$ is a feasible solution to the system, although I cannot easily argue that it is the ONLY solution.

So finally, __MY QUESTION IS__: if one solution (maybe not unique) to the Lagrange multiplier system is obtained, can I assume that this solution maximizes the original objective function?
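As a numeric aside on this question: in general a solution of the Lagrange system is only a stationary point, and need not be a maximum. A quick sketch with $n = 4$ (using the product form $f = 1 - \prod_l(1 - p_l)$, valid for independent events; names mine):

```python
from math import prod

def f(ps):
    # P(union) for independent events with P(A_i) = p_i
    return 1 - prod(1 - p for p in ps)

n = 4
uniform = [1/n] * n                  # the point solving the Lagrange system
skewed = [0.97, 0.01, 0.01, 0.01]    # also sums to 1

assert f(skewed) > f(uniform)        # stationarity alone does NOT certify a max
```

The skewed point also satisfies the constraint yet gives a strictly larger $f$, so the stationary point cannot be the constrained maximum.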

Re: Inclusion-Exclusion - Maximizing Probabilities

To try and answer your question mathematically, one approach is this: if your function can be written in terms of

f = p1*g(p2,...,pn) + h(p2,...,pn)

f = p2*g(p1,p3,...,pn) + h(p1,p3,...,pn)

and so on, then when you do the differentiation with respect to that variable you get, say for d/dp1:

df/dp1 = g(p2,...,pn).

Now if these functions are of the same form then you can collect, simplify and see what happens.

I don't want to say yes to your answer even though it seems likely that it's true.
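The claim about the derivative is easy to sanity-check numerically: differentiating $f$ with respect to $p_1$ should give a value that does not depend on $p_1$ at all, namely the coefficient $g(p_2,\dots,p_n) = \prod_{l \ne 1}(1 - p_l)$ (assuming independent events, so $f = 1 - \prod_l(1 - p_l)$; names mine):

```python
from math import prod

def f(ps):
    return 1 - prod(1 - p for p in ps)

def dfdp(ps, k, h=1e-6):
    # central finite difference for the k-th partial derivative
    hi = ps[:]; hi[k] += h
    lo = ps[:]; lo[k] -= h
    return (f(hi) - f(lo)) / (2*h)

rest = [0.2, 0.3, 0.4]
g = prod(1 - p for p in rest)   # expected coefficient of p1

for p1 in (0.1, 0.5, 0.9):      # the derivative should not depend on p1
    assert abs(dfdp([p1] + rest, 0) - g) < 1e-8
```

Since $f$ is multilinear, the central difference in each coordinate is exact up to rounding, which is why such a tight tolerance works.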

Re: Inclusion-Exclusion - Maximizing Probabilities

Quote:

Originally Posted by

**andre.vignatti** Let $A_1, A_2, A_3$ be three independent events, where $P(A_i) = p_i$. Also, $p_1 + p_2 + p_3 = 1$. Using the inclusion-exclusion principle,

$f(p_1, p_2, p_3) = p_1 + p_2 + p_3 - p_1 p_2 - p_1 p_3 - p_2 p_3 + p_1 p_2 p_3.$

Thus, I would like to prove that $\max f(p_1, p_2, p_3)$ subject to $p_1 + p_2 + p_3 = 1$ has solution $p_1 = p_2 = p_3 = 1/3$. In order to do this I used Lagrange multipliers, and after a few manipulations of the equations, I came up with this answer.

**THE QUESTION IS**: how can I generalize this for $n$ events? More specifically, I want to prove that $\max f(p_1, \dots, p_n)$ subject to $\sum_{i=1}^{n} p_i = 1$ has solution $p_1 = \dots = p_n = 1/n$.

**PS**: Some directions for solving this problem would be useful too (or a reference where it has already been solved).

Ahem. Are you sure that $p_1 = \dots = p_n = 1/n$ **maximizes** f? What about $p_1 = 1$ and $p_2 = \dots = p_n = 0$? Doesn't that give you a bigger f?

Re: Inclusion-Exclusion - Maximizing Probabilities

I'm assuming that, for all practical purposes, all the probabilities are non-zero (even though zero is a valid probability).

So you would add the constraint that p1 > 0, p2 > 0, ..., pn > 0.

In most real situations, we don't consider zero probabilities especially if we have a population distribution (there's no point in doing so).

Re: Inclusion-Exclusion - Maximizing Probabilities

This is a case where it might initially be easier to understand written in English than using definitions and sigma-sum notation:

For a generic n, f(p1, p2, ..., pn) = (sum p's 1 at a time) - (sum p's 2 at a time) + (sum p's 3 at a time) - + ... + (-1)^(n+1) (sum p's n at a time).

Note (sum p's n at a time) is just the product of all the p's.

Let's declare the domain of f to be: all the p's are in [0, 1).

Now fix integer k, 1<=k<=n

(sum p's 1 at a time) = pk + (sum p's other than pk, 1 at a time)

(sum p's 2 at a time) = pk(sum p's other than pk, 1 at a time) + (sum p's other than pk, 2 at a time)

(sum p's 3 at a time) = pk(sum p's other than pk, 2 at a time) + (sum p's other than pk, 3 at a time)

(sum p's 4 at a time) = pk(sum p's other than pk, 3 at a time) + (sum p's other than pk, 4 at a time)

...

(sum p's (n-1) at a time) = pk(sum p's other than pk, (n-2) at a time) + (sum p's other than pk, (n-1) at a time)

(sum p's n at a time) = pk(sum p's other than pk, (n-1) at a time) + (0)

That 0 is because there aren't n p's there once you exclude pk, so (sum p's other than pk, n at a time) isn't defined. Look back at f and you'll see this must be zero.
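The recurrence rows above can be checked mechanically for every variable and every order. A sketch (the helper `esym`, for "elementary symmetric sum", is my name):

```python
from itertools import combinations
from math import prod

def esym(ps, j):
    # elementary symmetric sum: "p's taken j at a time"
    # conventions: esym(ps, 0) = 1, and esym(ps, j) = 0 for j > len(ps)
    if j == 0:
        return 1.0
    if j > len(ps):
        return 0.0
    return sum(prod(c) for c in combinations(ps, j))

ps = [0.1, 0.2, 0.3, 0.4]   # arbitrary test point, n = 4
n = len(ps)

# (sum p's j at a time) = pk * (others, j-1 at a time) + (others, j at a time)
for k in range(n):
    others = ps[:k] + ps[k+1:]
    for j in range(1, n + 1):
        lhs = esym(ps, j)
        rhs = ps[k] * esym(others, j - 1) + esym(others, j)
        assert abs(lhs - rhs) < 1e-12
```

The `j > len(ps)` convention is exactly the "that 0" case: once $p_k$ is excluded, there is no way to take $n$ p's at a time.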

With that in mind, let $S^{(k)}_j$ = (sum p's other than $p_k$, taken $j$ at a time) for $0 \le j \le n$.

Note that $S^{(k)}_0 = 1$. Also, will define $S^{(k)}_n = 0$. (This is just a matter of convenience for later sums.)

Note that the *value* of $S^{(k)}_1$ is $1 - p_k$ when the values of the p's are evaluated, but that's not how the function itself is defined; rather, it is the arithmetic consequence of our knowing the constraint that the sum of the p's is 1. Keeping this straight is conceptually important since we'll be taking derivatives.

I want to make clear that, by definition, $\dfrac{\partial S^{(k)}_j}{\partial p_k} = 0$, since $S^{(k)}_j$ does not involve $p_k$ at all.

Now from rows like "(sum p's 3 at a time) = pk(sum p's other than pk, 2 at a time) + (sum p's other than pk, 3 at a time)" above, and with the $j = n$ case hopefully clarified, we get for $1 \le j \le n$:

(sum p's $j$ at a time) $= p_k S^{(k)}_{j-1} + S^{(k)}_j$.

Check boundary special cases:

(sum p's 1 at a time) $= p_k S^{(k)}_0 + S^{(k)}_1 = p_k + $ (sum p's other than $p_k$, 1 at a time). Check.

(sum p's $n$ at a time) $= p_k S^{(k)}_{n-1} + S^{(k)}_n = p_k S^{(k)}_{n-1}$. Check.

All that is to get this (and "this" is to prepare to solve using Lagrange Multipliers):

$f(p_1, \dots, p_n) = \sum_{i=1}^{n} (-1)^{i+1} \cdot$ (all the p's taken $i$ at a time)

And thus:

$f = \sum_{i=1}^{n} (-1)^{i+1} \left( p_k S^{(k)}_{i-1} + S^{(k)}_i \right).$

So far, it's all formalizing/defining/structuring the problem. No meat. Here comes the meat: will now algebraically manipulate so that can write $f$ in terms of $p_k$ and $G_k := \sum_{j=0}^{n-1} (-1)^j S^{(k)}_j$ only:

$f = p_k \sum_{i=1}^{n} (-1)^{i+1} S^{(k)}_{i-1} + \sum_{i=1}^{n} (-1)^{i+1} S^{(k)}_i$ (now set $j = i-1$ in the first summation)

$= p_k \sum_{j=0}^{n-1} (-1)^{j} S^{(k)}_{j} - \sum_{i=1}^{n} (-1)^{i} S^{(k)}_i = p_k G_k - \left( G_k - S^{(k)}_0 + (-1)^n S^{(k)}_n \right) = p_k G_k - G_k + 1.$

Have just shown that: $f = 1 - (1 - p_k) G_k$.

(Here's why $p_k = 1$ had to be excluded from the domain.) Solving that for $G_k$ gives:

$G_k = \dfrac{1 - f}{1 - p_k}.$
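The identity just derived, $f = 1 - (1 - p_k)\,G_k$ with $G_k$ the alternating sum of the "other than $p_k$" symmetric sums, can be verified numerically for every $k$. A sketch (names mine):

```python
from itertools import combinations
from math import prod

def esym(ps, j):
    # "p's taken j at a time"; by convention esym(ps, 0) = 1
    return 1.0 if j == 0 else sum(prod(c) for c in combinations(ps, j))

def f(ps):
    # inclusion-exclusion: sum_{i=1}^{n} (-1)^(i+1) * (p's taken i at a time)
    n = len(ps)
    return sum((-1)**(i+1) * esym(ps, i) for i in range(1, n + 1))

ps = [0.15, 0.25, 0.35, 0.25]
n = len(ps)

for k in range(n):
    others = ps[:k] + ps[k+1:]
    G = sum((-1)**j * esym(others, j) for j in range(n))   # j = 0 .. n-1
    assert abs(f(ps) - (1 - (1 - ps[k]) * G)) < 1e-12
```

(In fact $G_k$ collapses to $\prod_{l \ne k}(1 - p_l)$, so the identity is just $f = 1 - \prod_l(1 - p_l)$ in disguise.)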

Now do Lagrange: Let $g(p_1, \dots, p_n) = p_1 + p_2 + \dots + p_n$.

Then the constraint is $g(p_1, \dots, p_n) = 1$, so find stationary points via Lagrange at $\nabla f = \lambda \nabla g$:

$\dfrac{\partial f}{\partial p_k} = \lambda \dfrac{\partial g}{\partial p_k} = \lambda$ for each $k$, which gives that:

$G_k = \lambda$ (using $f = 1 - (1 - p_k) G_k$ and $\partial G_k / \partial p_k = 0$).

Thus

$\lambda = G_k = \dfrac{1 - f}{1 - p_k}.$

(To solve, this will require that $\lambda \ne 0$, so $f \ne 1$. Check this requirement at the end.)

Have $1 - p_k = \dfrac{1 - f}{\lambda}$.

Thus $p_k = 1 - \dfrac{1 - f}{\lambda}$.

So at a stationary point, all the $p_k$ equal the same constant $1 - \dfrac{1 - f}{\lambda}$,

and so they're all equal to one another. When that's combined with the constraint $p_1 + \dots + p_n = 1$, it follows that:

If $(p_1, \dots, p_n)$ is a stationary point for $f$ subject to $g = 1$, then, if $\lambda \ne 0$,

must have that $p_1 = p_2 = \dots = p_n = \dfrac{1}{n}$.

I'm not going to do the check that $\lambda \ne 0$, though it is technically necessary.
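A finite-difference check of the stationarity at the uniform point: all $n$ partial derivatives of $f$ should agree there, and their common value (playing the role of $\lambda$) works out to $(1 - 1/n)^{n-1}$, which is indeed non-zero. Sketch (names mine):

```python
from math import prod

def f(ps):
    return 1 - prod(1 - p for p in ps)

def dfdp(ps, k, h=1e-6):
    # central finite difference for the k-th partial derivative
    hi = ps[:]; hi[k] += h
    lo = ps[:]; lo[k] -= h
    return (f(hi) - f(lo)) / (2*h)

n = 5
uniform = [1/n] * n
expected = (1 - 1/n)**(n - 1)    # the common value of all partials there

for k in range(n):
    assert abs(dfdp(uniform, k) - expected) < 1e-8
```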

--------------------------------------

Putting it all together: If none of the p's are 1, then the only stationary point for f is at (1/n, 1/n, 1/n, ... 1/n).

--------------------------------------

Aside: To decide whether it's a local maximum, compute the bordered Hessian at the point $(1/n, 1/n, \dots, 1/n)$. You can get all the 2nd partials easily:

$\dfrac{\partial^2 f}{\partial p_k^2} = 0$ (each $p_k$ appears only to the first power in $f$), and $\dfrac{\partial^2 f}{\partial p_i \partial p_k} = -\prod_{l \ne i,k} (1 - p_l)$ for $i \ne k$.

Then when $p_1 = \dots = p_n = 1/n$, $\dfrac{\partial^2 f}{\partial p_k^2} = 0$, and

$\dfrac{\partial^2 f}{\partial p_i \partial p_k} = -\left(1 - \dfrac{1}{n}\right)^{n-2}$ for $i \ne k$.

Therefore the point is a constrained local maximum if the leading principal minors of the bordered Hessian of orders $3, \dots, n+1$ alternate in sign ending with the sign of $(-1)^n$, and a constrained local minimum if they all have the sign of $(-1)^1$.

Re: Inclusion-Exclusion - Maximizing Probabilities

Quote:

Originally Posted by

**chiro** I'm assuming for all practical purposes, all the probabilities are non-zero ones (even though zero is a valid probability).

So you would add the constraint that p1 > 0, p2 > 0, ..., pn > 0.

In most real situations, we don't consider zero probabilities especially if we have a population distribution (there's no point in doing so).

OK, since my first hint appears to be too subtle:

You have not maximized f by choosing all the probabilities equal. You have minimized f.
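This is easy to confirm numerically: by AM-GM, $\prod(1 - p_i)$ subject to $\sum p_i = 1$ is largest when all the $p_i$ are equal, so $f = 1 - \prod(1 - p_i)$ is smallest there. A sketch sampling random points on the simplex (names mine):

```python
from math import prod
from random import Random

def f(ps):
    # P(union) for independent events with P(A_i) = p_i
    return 1 - prod(1 - p for p in ps)

n = 5
f_uniform = f([1/n] * n)

rng = Random(42)
for _ in range(1000):
    # random point on the probability simplex (normalized positives)
    xs = [rng.random() for _ in range(n)]
    s = sum(xs)
    ps = [x / s for x in xs]
    assert f(ps) >= f_uniform - 1e-12   # the uniform point gives the SMALLEST f
```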

Re: Inclusion-Exclusion - Maximizing Probabilities

Hello! Does someone know where I can find the proof from comment #7 in the literature?

Thanks in advance