It is the correct idea. Say we have $Z = h(X,Y)$, hence $E[Z \mid X] = E[h(X,Y) \mid X]$. And to compute this expectation, you consider $X$ as a constant and integrate with respect to the distribution of $Y$ (given $X$), which is just the unconditional law of $Y$, here uniform (because $X$ and $Y$ are independent). Thus you can write $E[h(X,Y) \mid X] = \int h(X,y)\,f_Y(y)\,dy$, etc. Perhaps you would prefer writing $E[h(X,Y) \mid X = x]$ and use little $x$ afterward; sometimes this avoids confusion.
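To see this numerically, here is a small sketch with a made-up instance (the function $h$ and the value $x_0$ are my own illustrative choices, assuming $X$ and $Y$ independent with $Y$ uniform on $[0,1]$): freeze $X$ at $x_0$ and integrate $h(x_0, y)$ over the law of $Y$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: X and Y independent, Y ~ Uniform(0, 1).
def h(x, y):
    return (x + y) ** 2   # any integrable function of (x, y)

x0 = 0.7   # the value we condition on: X = x0

# Treat X as the constant x0 and integrate over the law of Y:
#   E[h(X, Y) | X = x0] = integral_0^1 h(x0, y) dy   (Y is uniform).
N = 1_000_000
ys = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
cond_exp = h(x0, ys).mean()            # approximates the integral above

# Monte Carlo check: average h(x0, Y) over fresh draws of Y alone.
mc = h(x0, rng.uniform(0.0, 1.0, N)).mean()
print(cond_exp, mc)
```

Both numbers approximate $\int_0^1 (x_0 + y)^2\,dy$, which is what "treat $X$ as a constant" means in practice.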

It depends, just like for ordinary expectations. Sometimes it is much shorter to find the conditional expectation directly. Sometimes (as in cases with densities) it is simpler to give the conditional distribution instead, because the conditional expectation requires the conditional distribution and then an extra integration.

> Also, is it more logical to first find the conditional distribution before the conditional expectation?

By the way, it is possible to deduce the definition of the conditional expectation from that of the conditional distribution (but the existence of conditional distributions is a delicate matter), and it is also possible to define the conditional expectation alone, for which there are several simpler proofs of existence.

If $(X,Y)$ has a density $f_{X,Y}$ and you need the law of $Y$ given $X$, you know the formula for that: $f_{Y\mid X}(y \mid x) = f_{X,Y}(x,y)/f_X(x)$, wherever $f_X(x) > 0$.
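As a sanity check of that formula on a made-up joint density (the density $f(x,y) = x + y$ on $[0,1]^2$ is my own example, not from the question), the conditional density should integrate to $1$ in $y$ for each fixed $x$:

```python
import numpy as np

# Hypothetical joint density on [0, 1]^2: f(x, y) = x + y (integrates to 1).
def f_joint(x, y):
    return x + y

# Marginal of X: f_X(x) = integral_0^1 (x + y) dy = x + 1/2.
def f_X(x):
    return x + 0.5

# Conditional density of Y given X = x: f_{Y|X}(y|x) = f(x, y) / f_X(x).
def f_cond(y, x):
    return f_joint(x, y) / f_X(x)

x0 = 0.3
N = 1_000_000
ys = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
total_mass = f_cond(ys, x0).mean()     # approximates integral of f_{Y|X}
print(total_mass)  # should be 1.0: a genuine probability density in y
```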

If $X$ is discrete, you may condition on the event $\{X = x\}$ in the elementary sense, hence no specific problem.
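In the discrete case the elementary formula $P(Y = y \mid X = x) = P(X = x, Y = y)/P(X = x)$ is all you need. A tiny sketch on a made-up joint pmf (the table below is hypothetical):

```python
from fractions import Fraction

# Hypothetical joint pmf p(x, y) on a small grid; the values sum to 1.
p = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 8),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 2),
}

def p_X(x):
    # Marginal: P(X = x) = sum over y of p(x, y).
    return sum(v for (xx, _), v in p.items() if xx == x)

def p_cond(y, x):
    # Conditioning on the event {X = x}: P(Y = y | X = x) = p(x, y) / P(X = x).
    return p[(x, y)] / p_X(x)

print(p_cond(0, 1), p_cond(1, 1))  # the conditional pmf of Y given X = 1
```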

In other cases, you can proceed along the definition: if, for all bounded measurable $f$, $E[f(Z) \mid X] = g_f(X)$ with $g_f(x) = E[f(Z_x)]$ (for some family of random variables $Z_x$, $x \in \mathbb{R}$), then the conditional law of $Z$ given $X = x$ is the law of $Z_x$. This may get messy.
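A numerical illustration of this definition, on a hypothetical example of my own (take $Z = XU$ with $X$, $U$ independent, $U$ uniform on $[0,1]$, and candidate family $Z_x = xU$): the defining property of conditional expectation says $E[f(Z)\,\mathbf 1_A] = E[g_f(X)\,\mathbf 1_A]$ for events $A$ determined by $X$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

# Hypothetical example: X ~ Uniform(1, 2) and U ~ Uniform(0, 1) independent,
# Z = X * U.  Candidate family: Z_x = x * U, i.e. g_f(x) = E[f(x * U)].
X = rng.uniform(1.0, 2.0, n)
U = rng.uniform(0.0, 1.0, n)
Z = X * U

f = lambda z: np.exp(-z)              # a bounded measurable test function

# g_f(x) = E[f(x U)] = integral_0^1 e^{-x u} du = (1 - e^{-x}) / x.
g = lambda x: (1.0 - np.exp(-x)) / x

# Defining property of conditional expectation: for any event A = {X in B},
# E[f(Z) 1_A] = E[g_f(X) 1_A].  Check it on A = {X <= 1.5}.
A = X <= 1.5
lhs = (f(Z) * A).mean()
rhs = (g(X) * A).mean()
print(lhs, rhs)  # should agree up to Monte Carlo error
```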

In the present case, the easiest way is definitely to compute the conditional distribution function: for all real $t$, compute $P(Z \le t \mid X) = E[\mathbf 1_{\{Z \le t\}} \mid X]$. Compute the expectations depending on $X$ by considering $X$ as a constant (because of the independence).
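To make this concrete on a hypothetical instance (taking $Z = X + U$ with $X$, $U$ independent and $U$ uniform, which is my own illustrative choice): treating $X$ as a constant gives $P(Z \le t \mid X) = P(U \le t - X \mid X) = F_U(t - X)$ by independence, and this can be checked empirically by restricting samples to a thin band around a fixed value of $X$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4_000_000

# Hypothetical example: X ~ Uniform(0, 1), U ~ Uniform(0, 1) independent,
# Z = X + U.  Treating X as a constant,
#   P(Z <= t | X) = P(U <= t - X | X) = F_U(t - X)   (by independence).
X = rng.uniform(0.0, 1.0, n)
U = rng.uniform(0.0, 1.0, n)
Z = X + U

F_U = lambda u: np.clip(u, 0.0, 1.0)   # CDF of Uniform(0, 1)

x0, t = 0.4, 1.1
formula = F_U(t - x0)                  # conditional CDF evaluated at X = x0

# Empirical check: restrict to samples with X close to x0.
band = np.abs(X - x0) < 0.005
empirical = (Z[band] <= t).mean()
print(formula, empirical)
```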

In other situations, a conditional characteristic function could be used as well, or a conditional moment generating function, or whatever suits better: any tool for identifying a distribution, as long as the computations are manageable.
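For instance, sticking with the hypothetical $Z = X + U$ example from above ($X$, $U$ independent, $U$ uniform on $[0,1]$), the conditional MGF is $E[e^{sZ} \mid X] = e^{sX}\,E[e^{sU}] = e^{sX}(e^s - 1)/s$, which identifies the conditional law of $Z$ given $X = x$ as the law of $x + U$. A quick check via the defining property:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000

# Hypothetical example: Z = X + U, with X and U independent,
# X ~ Uniform(0, 1), U ~ Uniform(0, 1).  Treating X as a constant,
#   E[e^{sZ} | X] = e^{sX} * E[e^{sU}] = e^{sX} * (e^s - 1) / s.
X = rng.uniform(0.0, 1.0, n)
U = rng.uniform(0.0, 1.0, n)
Z = X + U

s = 0.5
mgf_U = (np.exp(s) - 1.0) / s          # MGF of Uniform(0, 1) at s

# Defining property: E[e^{sZ} 1_A] = E[e^{sX} mgf_U 1_A] for A = {X > 0.25}.
A = X > 0.25
lhs = (np.exp(s * Z) * A).mean()
rhs = (np.exp(s * X) * mgf_U * A).mean()
print(lhs, rhs)  # should agree up to Monte Carlo error
```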

Finally, you can check your work by computing the conditional expectation from the conditional distribution and comparing it with a direct computation.
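For the hypothetical running example $Z = X + U$ above, the conditional law of $Z$ given $X$ is the law of $X + U$ with $X$ frozen, so the conditional expectation read off from it is $E[Z \mid X] = X + E[U] = X + 1/2$; the tower property $E[Z] = E[E[Z \mid X]]$ then provides a cheap consistency check.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Hypothetical running example: Z = X + U with X ~ Uniform(0, 1) and
# U ~ Uniform(0, 1) independent.  From the conditional law of Z given X
# (the law of X + U with X frozen), the conditional expectation is
#   E[Z | X] = X + E[U] = X + 1/2.
X = rng.uniform(0.0, 1.0, n)
U = rng.uniform(0.0, 1.0, n)
Z = X + U

cond_exp = X + 0.5                 # E[Z | X] read off the conditional law

# Consistency check via the tower property: E[Z] = E[E[Z | X]].
print(Z.mean(), cond_exp.mean())   # both should be close to 1.0
```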