I am having trouble with this question:

Unbiasedness and small variance are desirable properties of estimators. However, you can imagine situations where a trade-off exists between the two: one estimator may have a small bias but a much smaller variance than another, unbiased estimator. The concept of "mean square error" combines the two. Let $\hat\mu$ be an estimator of $\mu$. Then the mean square error (MSE) is defined as follows:

$$\operatorname{MSE}(\hat\mu) = E\left[(\hat\mu - \mu)^2\right].$$

Prove that

$$\operatorname{MSE}(\hat\mu) = (\text{bias})^2 + \operatorname{Var}(\hat\mu),$$ where $\text{bias} = E(\hat\mu) - \mu$.

I emailed my professor, and he replied with this:

Hint: subtract and add $E(\hat\mu)$ inside $E\left[(\hat\mu - \mu)^2\right]$.

But I'm not really sure what he means.
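If I take the hint literally, I think the first step is to insert $-E(\hat\mu) + E(\hat\mu)$ inside the square and expand (this is just my own attempt, so I may be off track):

```latex
\begin{align*}
\operatorname{MSE}(\hat\mu)
  &= E\left[(\hat\mu - \mu)^2\right] \\
  &= E\left[\bigl((\hat\mu - E(\hat\mu)) + (E(\hat\mu) - \mu)\bigr)^2\right] \\
  &= E\left[(\hat\mu - E(\hat\mu))^2\right]
     + 2\,E\left[(\hat\mu - E(\hat\mu))(E(\hat\mu) - \mu)\right]
     + E\left[(E(\hat\mu) - \mu)^2\right].
\end{align*}
```

The first term looks like $\operatorname{Var}(\hat\mu)$ and the last term looks like $(\text{bias})^2$, but I don't see how to show that the middle cross term goes away.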

I'd really appreciate any help with this. Thank you.