Suppose y is a random variable taking on one of n known values: a_1, a_2, ..., a_n.

Either y has the distribution p, given by P(y = a_j) = p_j, or it has the distribution q, given by P(y = a_j) = q_j.

The p_j are nonnegative and sum to 1, and the same is true of the q_j.

Based on a single observation of y, we wish to guess whether it has distribution p or distribution q. That is, for each possible outcome a_j, we will assert with probability x_j that the distribution is p and with probability 1 - x_j that it is q.

We wish to determine the probabilities x_1, x_2, ..., x_n such that the probability of saying the distribution is p, when it is in fact q, is no larger than b, where b is some small positive value (such as 0.05).
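Trying to write that constraint out explicitly (I am not sure this is the right reading): if the true distribution is q, then the outcome is a_j with probability q_j, and in that case we say "p" with probability x_j, so the constraint should be

$$
P(\text{say } p \mid \text{true distribution is } q) \;=\; \sum_{j=1}^{n} q_j x_j \;\le\; b.
$$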

Further, given this constraint, we wish to maximize the probability that we say the distribution is p when it is in fact p.
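By the same reasoning, I think the quantity we want to maximize would be

$$
P(\text{say } p \mid \text{true distribution is } p) \;=\; \sum_{j=1}^{n} p_j x_j.
$$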

Formulate this maximization problem as a linear programming problem.
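My best guess at the linear program, with the x_j as the decision variables and the p_j, q_j, and b as given data (I am not sure this is right):

$$
\begin{aligned}
\text{maximize}\quad & \sum_{j=1}^{n} p_j x_j \\
\text{subject to}\quad & \sum_{j=1}^{n} q_j x_j \le b, \\
& 0 \le x_j \le 1, \qquad j = 1, \dots, n.
\end{aligned}
$$

Both the objective and the constraints are linear in x_1, ..., x_n, which is why I think this would count as a linear program.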
I am baffled by this. Any suggestions would be greatly appreciated.
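In case it helps show what I am after, here is a small numerical check of that guess using scipy.optimize.linprog; the values of p, q, and b below are made up purely for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Made-up example data, just for illustration.
    p = np.array([0.5, 0.3, 0.2])   # distribution p over a_1, a_2, a_3
    q = np.array([0.1, 0.3, 0.6])   # distribution q over the same values
    b = 0.05                        # bound on P(say "p" | true distribution is q)

    # linprog minimizes, so minimize -p . x in order to maximize p . x.
    res = linprog(
        c=-p,                       # objective: maximize sum_j p_j x_j
        A_ub=q.reshape(1, -1),      # constraint: sum_j q_j x_j <= b
        b_ub=[b],
        bounds=[(0, 1)] * len(p),   # each x_j is a probability
    )

    print("x =", res.x)
    print("P(say p | p) =", p @ res.x)   # the maximized detection probability
    print("P(say p | q) =", q @ res.x)   # should come out <= b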