So, I'm taking an inference course. Working through some (optional) homework, the theme of the current problems seems to be (1) find a Bayes estimator of theta and then (2) show that "blah" is an empirical Bayes estimator of theta. The issue is that, unless I completely zoned out in class, we never really gave a good definition of empirical Bayes. The idea seems to be that you come up with some estimate of the hyperparameters using the marginal distribution of your data and then plug those into your Bayes estimate, but I feel like I'm doing something wrong because everything I'm doing feels incredibly ad hoc. Sparing the specifics of the problem, an example is

(a) Show that the Bayes estimator of \theta is mX/(m + 1).
(b) Show that an empirical Bayes estimator of \theta is \left[1 - \frac{p\sigma^2}{\|X\|^2}\right]X.

and my solution to part (b) is showing that E\|X\|^2 = p\sigma^2 (m + 1), so that we can use \frac{p \sigma^2}{\|X\|^2} to estimate 1/(m + 1).
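For what it's worth, the moment identity driving the plug-in step is easy to sanity-check by simulation. This is a sketch under an assumed model that the post doesn't state explicitly (but which makes (a) and (b) come out right): \theta \sim N(0, m\sigma^2 I_p) and X \mid \theta \sim N(\theta, \sigma^2 I_p), so marginally X \sim N(0, (m+1)\sigma^2 I_p) and E\|X\|^2 = p\sigma^2(m+1):

```python
import numpy as np

# Assumed model (not stated in the post): theta ~ N(0, m*sigma^2 * I_p),
# X | theta ~ N(theta, sigma^2 * I_p).  Marginally X ~ N(0, (m+1)*sigma^2 * I_p),
# so E||X||^2 = p * sigma^2 * (m + 1).
rng = np.random.default_rng(0)
p, m, sigma2, n_sims = 5, 3.0, 2.0, 200_000

theta = rng.normal(0.0, np.sqrt(m * sigma2), size=(n_sims, p))
X = theta + rng.normal(0.0, np.sqrt(sigma2), size=(n_sims, p))

norm2 = (X ** 2).sum(axis=1)           # ||X||^2 for each simulated draw
print(norm2.mean())                    # ~ p * sigma2 * (m + 1) = 40
```

The method-of-moments flavor is exactly the "ad hoc" part: you match the observed \|X\|^2 to its marginal expectation to back out the unknown 1/(m+1), rather than estimating the hyperparameter by, say, marginal maximum likelihood.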

This just seems so ad hoc that it feels wrong. I guess my question is whether this really is what empirical Bayes estimation entails in these sorts of problems.