Parameter Estimation Proof

Hi there

I'm trying to do a proof from a past paper at uni, but I'm not sure how to go about it. The problem is stated below:

Prove, for any smooth probability density function f(x|\theta), that the maximum likelihood estimator \hat{\theta} satisfies:

\hat{\theta} \xrightarrow{D} N\left({\theta}_{0}, \frac{1}{nI({\theta}_{0})}\right)

(the D on the arrow denotes convergence in distribution)

as n \to \infty

where {\theta}_{0} is the true value of the parameter \theta .

Hint: You may use the fact that the expectation of the score function is zero.
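(Just to fix notation, since I may have this wrong: by the score function I mean the derivative of the log-density, and I'm reading the hint as the standard facts

U(\theta) = \frac{\partial}{\partial \theta} \log f(X|\theta), \qquad E_{\theta_0}[U(\theta_0)] = 0, \qquad Var_{\theta_0}[U(\theta_0)] = I(\theta_0)

— please correct me if the paper means something else.)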

It's very similar to a proof from our textbook and I sort of understand the logic, but I'm not sure how to set it out for a normal distribution. I can prove that, under smoothness, the MLE \hat{\theta} is consistent, which seems quite similar.. except that consistency is \hat{\theta} \xrightarrow{P} {\theta}_{0} (convergence in probability) and I can't quite figure out how the two modes of convergence are related :/

I think I have to take the MLE for the normal, use the weak law of large numbers to get its expected value, and then do some manipulation, but I can't figure out where the Fisher information comes into play.
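In case it helps show what I'm aiming for, here's a quick simulation I ran to convince myself the statement is true. It's my own toy example, not from the paper: I use N(\theta, 1) with known variance, where the MLE is the sample mean and I(\theta_0) = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0       # true mean; variance fixed at 1, so I(theta0) = 1
n, reps = 500, 20000

# The MLE of the mean of N(theta, 1) is the sample mean
samples = rng.normal(theta0, 1.0, size=(reps, n))
mle = samples.mean(axis=1)

# Standardize: sqrt(n * I(theta0)) * (theta_hat - theta0) should be ~ N(0, 1)
z = np.sqrt(n * 1.0) * (mle - theta0)
print(round(z.mean(), 2), round(z.var(), 2))
```

The standardized values come out with mean close to 0 and variance close to 1, which matches the claimed limit, so at least I believe the result — I just can't prove it.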

I'm pretty hopeless at stats :/ Help would be much appreciated. Sorry if my post seems confusing, I'm quite new here; happy to clarify anything.