a) Show that P(W = k) = q^k p (k = 0, 1, 2, ...) is the distribution of the number of failures before the first success in Bernoulli(p) trials.
b) Find P(W > k) (k = 0, 1, ...).
c) Find E(W).
d) Find Var(W).
A) Say we have an infinite sequence of independent, identically distributed Bernoulli(p) random variables $\displaystyle X_1, X_2, \dots$ that we sample sequentially, and let W be the number of failures before the first success. Then $\displaystyle P(W = w) = P(X_1 = 0, X_2 = 0, \dots , X_w = 0, X_{w + 1} = 1)$, and you can factor that due to the independence of the Bernoullis, noting that $\displaystyle P(X_i = 0) = 1 - p = q$ and $\displaystyle P(X_i = 1) = p$. This gives $\displaystyle P(W = w) = q^w p$, the desired result.
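As a sanity check on that factorization, here is a quick Monte Carlo sketch: simulate W directly from Bernoulli trials and compare the empirical frequencies to q^w p. (The choice p = 0.3 and the sample size are arbitrary, just for illustration.)

```python
import random

def sample_W(p, rng):
    """Count failures before the first success in Bernoulli(p) trials."""
    failures = 0
    while rng.random() >= p:  # failure occurs with probability q = 1 - p
        failures += 1
    return failures

p, q = 0.3, 0.7  # arbitrary example parameters
rng = random.Random(0)
n = 200_000

counts = {}
for _ in range(n):
    w = sample_W(p, rng)
    counts[w] = counts.get(w, 0) + 1

# Empirical pmf should track the claimed pmf q^w p
for w in range(5):
    empirical = counts.get(w, 0) / n
    exact = q**w * p
    print(f"P(W={w}): empirical {empirical:.4f}, exact {exact:.4f}")
```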
B) Doing this in terms of the Bernoullis: the event $\displaystyle \{W > k\}$ means the first $\displaystyle k + 1$ trials are all failures, so $\displaystyle P(W > k) = P(X_1 = 0, X_2 = 0, \dots , X_{k + 1} = 0) = q^{k + 1}$ by independence. (Watch the off-by-one: $\displaystyle P(X_1 = 0, \dots , X_k = 0) = q^k$ is $\displaystyle P(W \geq k)$, not $\displaystyle P(W > k)$.)
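The tail identity can also be checked exactly, without simulation, by summing the pmf up to k and subtracting from 1. (Again p = 0.3 is just an example value.)

```python
p, q = 0.3, 0.7  # arbitrary example parameters

def tail_from_pmf(k):
    """P(W > k) computed as 1 minus the cumulative pmf sum_{w=0}^{k} q^w p."""
    return 1 - sum(q**w * p for w in range(k + 1))

# Both columns should agree: the partial geometric sum telescopes to 1 - q^(k+1)
for k in range(6):
    print(k, tail_from_pmf(k), q ** (k + 1))
```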
C) Calculating the expected value and the variance is a little trickier. $\displaystyle EW = \sum_{w = 0}^{\infty} w q^w p$, which requires a small calculus trick to evaluate (at least, the way I've always done it). Pull the $\displaystyle p$ out of the summation and write $\displaystyle w q^w = q \cdot w q^{w - 1} = q \frac{d}{dq} q^w$; then switch the differentiation and the summation, which leaves a geometric sum you can evaluate in closed form:

$\displaystyle EW = pq \frac{d}{dq} \sum_{w = 0}^{\infty} q^w = pq \frac{d}{dq} \frac{1}{1 - q} = \frac{pq}{(1 - q)^2} = \frac{q}{p}.$

D) Once you have that, you can calculate the variance using the formula $\displaystyle Var\, W = E[W(W - 1)] - EW \cdot (EW - 1)$, using the same trick (now with a second derivative) to get the first term:

$\displaystyle E[W(W - 1)] = pq^2 \frac{d^2}{dq^2} \sum_{w = 0}^{\infty} q^w = \frac{2pq^2}{(1 - q)^3} = \frac{2q^2}{p^2},$

so $\displaystyle Var\, W = \frac{2q^2}{p^2} - \frac{q}{p}\left(\frac{q}{p} - 1\right) = \frac{q^2}{p^2} + \frac{q}{p} - \frac{q^2}{p^2} \cdot 0 = \frac{q}{p^2}.$
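The closed forms E(W) = q/p and Var(W) = q/p^2 can be confirmed numerically by truncating the infinite sums, since the geometric tail decays fast. (p = 0.3 and the truncation point N are arbitrary choices for this sketch.)

```python
p, q = 0.3, 0.7  # arbitrary example parameters
N = 2000         # truncation point; the tail of q^w beyond this is negligible

pmf = [q**w * p for w in range(N)]
mean = sum(w * pw for w, pw in enumerate(pmf))          # E(W) by direct summation
second = sum(w * w * pw for w, pw in enumerate(pmf))    # E(W^2)
var = second - mean**2                                  # Var(W) = E(W^2) - E(W)^2

print(mean, q / p)      # both should be ≈ 2.3333
print(var, q / p**2)    # both should be ≈ 7.7778
```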