Suppose that items belong to one of two possible classes, $C \in \{0, 1\}$ ($0$ = "negative", $1$ = "positive"). Suppose that some feature $X$ is known (measured) for each item, but the class membership is unknown, and that $X \mid C = 0 \sim f_0(x)$ and $X \mid C = 1 \sim f_1(x)$ (that is, $f_i(x)$ is the conditional density of $X$ given $C = i$). Consider the following classification function:

$$\hat{C}_k(x) = 1 \iff f_1(x) \ge k\,f_0(x),$$

where $k$ is chosen such that
$$\text{Specificity} = P(\hat{C}_k = 0 \mid C = 0) = 1 - \alpha, \qquad 0 < \alpha < 1.$$

Let $\hat{C}(x)$ be any other classifier with the same specificity, that is,

$$\int \bigl[1 - \hat{C}(x)\bigr] f_0(x)\,dx = 1 - \alpha, \quad \text{and so} \quad \int \hat{C}(x)\,f_0(x)\,dx = \alpha.$$

Then $\text{Sensitivity}(\hat{C}_k) \ge \text{Sensitivity}(\hat{C})$, where $\text{Sensitivity}(\hat{C}) = P(\hat{C} = 1 \mid C = 1) = \int \hat{C}(x)\,f_1(x)\,dx$.


"Proof":
int (neg infin to infin) (hat Ck (x) - C(x)) f1(x) dx > 0

1. I can't manage to finish the argument after multiplying the integrand by $f_0(x)/f_0(x)$ to bring in the likelihood ratio $f_1(x)/f_0(x)$.

2. Breaking it into two integrals over the regions where $\hat{C}_k = 1$ and $\hat{C}_k = 0$:
$$\int_{\{\hat{C}_k = 1\}} \bigl(1 - \hat{C}(x)\bigr) f_1(x)\,dx \;-\; \int_{\{\hat{C}_k = 0\}} \hat{C}(x)\,f_1(x)\,dx$$
(the first integral is nonnegative and the second term, with its minus sign, is nonpositive; but bounding each separately doesn't finish the proof, see the sketch after this list).

3. Suppose that $f_0 = N(0, 1)$ and $f_1 = N(1, 1)$. What value of $k$ minimizes the misclassification error? I believe drawing this would make it easier to see, but I don't know where to start; see the numerical sketch below.
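
For questions 1 and 2, the classical Neyman–Pearson argument avoids dividing by $f_0(x)$ (which is problematic wherever $f_0(x) = 0$) and instead rests on one pointwise inequality. A sketch:

$$\bigl(\hat{C}_k(x) - \hat{C}(x)\bigr)\bigl(f_1(x) - k\,f_0(x)\bigr) \ge 0 \quad \text{for all } x,$$

because wherever $\hat{C}_k(x) = 1$ both factors are $\ge 0$, and wherever $\hat{C}_k(x) = 0$ both are $\le 0$. Integrating over the whole line and using the equal-specificity condition gives

$$\int \bigl(\hat{C}_k(x) - \hat{C}(x)\bigr) f_1(x)\,dx \;\ge\; k \int \bigl(\hat{C}_k(x) - \hat{C}(x)\bigr) f_0(x)\,dx \;=\; k(\alpha - \alpha) \;=\; 0,$$

which is the target inequality. This also explains the split in question 2: the regions $\{\hat{C}_k = 1\}$ and $\{\hat{C}_k = 0\}$ are exactly where the pointwise product keeps a constant sign.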
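
For question 3, a minimal numerical sketch, assuming equal priors $P(C = 0) = P(C = 1) = 1/2$ (the question does not state the priors, so this is an assumption). Since $f_1(x)/f_0(x) = e^{x - 1/2}$ is increasing in $x$, the rule $\hat{C}_k(x) = 1 \iff f_1(x) \ge k f_0(x)$ is the same as thresholding at $x \ge \ln k + \tfrac{1}{2}$, so the error can be written with normal CDFs and scanned over $k$:

```python
import numpy as np
from scipy.stats import norm

# Assumed setup (not stated in the question): equal priors P(C=0) = P(C=1) = 1/2,
# with f0 = N(0,1) and f1 = N(1,1).  The likelihood ratio f1(x)/f0(x) = exp(x - 1/2)
# is increasing in x, so hat{C}_k(x) = 1  <=>  x >= t  with  t = log(k) + 1/2.

def misclassification_error(k, prior1=0.5):
    """Total error P(hat{C}_k != C) of the likelihood-ratio rule at cutoff k."""
    t = np.log(k) + 0.5                        # equivalent threshold on x
    fp = 1.0 - norm.cdf(t, loc=0, scale=1)     # P(hat{C}_k = 1 | C = 0)
    fn = norm.cdf(t, loc=1, scale=1)           # P(hat{C}_k = 0 | C = 1)
    return (1 - prior1) * fp + prior1 * fn

ks = np.exp(np.linspace(-3, 3, 601))           # scan k on a log-spaced grid
errors = [misclassification_error(k) for k in ks]
k_best = ks[int(np.argmin(errors))]
print(f"error-minimizing k ~ {k_best:.3f}")    # ~ 1, i.e. threshold x = 1/2
```

Under these assumptions the minimum lands at $k = 1$, i.e. the cutoff $x = 1/2$ midway between the two means; for the drawing, plot $f_0$ and $f_1$ on one axis and shade the two error tails on either side of $x = 1/2$.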