Originally Posted by **cryptic26**

Question: Let Xi, i = 1, 2, 3, ..., denote random variables that are independent and Bernoulli distributed with P(Xi = 1) = pi and P(Xi = 0) = 1 - pi. Clearly, 0 <= pi <= 1. Choose the pi to maximize the mean of

Sn = X1 + X2 + ... + Xn.

Here is my approach.

S2 has probability distribution given as

P(S2=0) = (1-p1)*(1-p2)

P(S2=1) = (1-p1)*p2 + p1*(1-p2)

P(S2=2) = p1*p2.

Hence, the mean of S2 is E[S2] = 0*P(S2=0) + 1*P(S2=1) + 2*P(S2=2)

= (1-p1)*p2 + p1*(1-p2) + 2*p1*p2

= p1 + p2

(which is maximized at p1 = p2 = 1, giving a mean of 2).
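As a quick sanity check of the S2 case, the mean can be computed by enumerating all four outcomes directly (a sketch in Python; the function name `s2_mean` is mine):

```python
from itertools import product

def s2_mean(p1, p2):
    """Mean of S2 = X1 + X2 by direct enumeration of the four outcomes."""
    mean = 0.0
    for x1, x2 in product([0, 1], repeat=2):
        prob = (p1 if x1 else 1 - p1) * (p2 if x2 else 1 - p2)
        mean += (x1 + x2) * prob
    return mean

# The enumerated mean matches the closed form p1 + p2:
assert abs(s2_mean(0.3, 0.7) - 1.0) < 1e-12
# and it is maximized (value 2) at p1 = p2 = 1:
assert s2_mean(1.0, 1.0) == 2.0
```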

The same argument generalizes to any Sn. I am skipping the steps, but the answer is the same: the maximum possible mean of Sn is n, and it is achieved at pi = 1 for i = 1, 2, 3, ...
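The generalization can also be checked numerically for small n by summing over all 2^n outcomes; by linearity of expectation E[Sn] = p1 + ... + pn <= n, with equality when every pi = 1. A sketch (the name `sn_mean` is mine):

```python
from itertools import product

def sn_mean(ps):
    """Mean of Sn = X1 + ... + Xn by enumerating all 2^n outcomes."""
    mean = 0.0
    for xs in product([0, 1], repeat=len(ps)):
        prob = 1.0
        for x, p in zip(xs, ps):
            prob *= p if x else 1 - p
        mean += sum(xs) * prob
    return mean

ps = [0.2, 0.5, 0.9, 0.4]
assert abs(sn_mean(ps) - sum(ps)) < 1e-12  # E[Sn] = p1 + ... + pn
assert sn_mean([1.0] * 4) == 4.0           # maximum value n at all pi = 1
```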

Is there a better proof, or is my proof correct?

I also tried partial differentiation with respect to each of the pi, which gives n equations, but even for n = 2 I get absurd values for the pi's (essentially less than zero). Any help would be appreciated.
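One possible reason the stationary-point approach misbehaves: since E[Sn] = p1 + ... + pn, each partial derivative is identically 1, so the gradient never vanishes and the maximum has to sit on the boundary of [0,1]^n rather than at an interior critical point. A finite-difference sketch of this (helper names are mine):

```python
def mean_sn(ps):
    # E[Sn] = p1 + ... + pn by linearity of expectation
    return sum(ps)

def partial(f, ps, i, h=1e-6):
    """Central finite-difference estimate of d f / d p_i."""
    up = list(ps); up[i] += h
    dn = list(ps); dn[i] -= h
    return (f(up) - f(dn)) / (2 * h)

ps = [0.3, 0.6]
for i in range(len(ps)):
    g = partial(mean_sn, ps, i)
    assert abs(g - 1.0) < 1e-6  # gradient is constant 1: never zero in the interior
```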