Originally Posted by

**0-)** (n is the size of the matrix and we are dealing with random matrices with entries taken from the normal distribution).

'As n tends to infinity, the kth moment of the mean eigenvalue distribution of the matrix tends to the kth moment of the standard semicircle law. By the method of moments, this shows that the mean eigenvalue distribution tends to the standard semicircle law as n gets larger.'

Firstly, is this correct, and secondly, how is the method of moments used here? I thought it only dealt with sequences of distributions...?

I haven't done any probability theory for years so I apologise if this is a simple question.

I think you don't fully understand what the "mean eigenvalue distribution" is. Given a random $\displaystyle n\times n$ Hermitian matrix $\displaystyle M$ with (random) (real) eigenvalues $\displaystyle \lambda_1,\ldots,\lambda_n$ (with multiplicity, in any order), the mean eigenvalue distribution is the distribution $\displaystyle \mu$ on $\displaystyle \mathbb{R}$ defined by: for all bounded measurable $\displaystyle f:\mathbb{R}\to\mathbb{R}$,

$\displaystyle \int f\,d\mu=E\left[\frac{1}{n}\sum_{i=1}^n f(\lambda_i)\right]$.

or equivalently, for any measurable subset $\displaystyle A$ of $\displaystyle \mathbb{R}$,

$\displaystyle \mu(A)=\frac{1}{n}\sum_{i=1}^n P(\lambda_i\in A)$.

(for this one to make full sense, we would need to choose an order, for instance $\displaystyle \lambda_1\leq\cdots\leq\lambda_n$)

All I want to highlight is that $\displaystyle \mu$ is just a probability distribution on $\displaystyle \mathbb{R}$. Therefore if you have a sequence $\displaystyle (\mu_n)_n$ of distributions defined similarly with respect to other matrices (of any size), this is just a sequence of distributions on $\displaystyle \mathbb{R}$, and the method of moments applies as usual (or "as Wikipedia says").
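To get a concrete feel for this, here is a short Python sketch (my own illustration, not from any reference) that estimates $\displaystyle \int x^k\,d\mu_n$ by Monte Carlo: it samples symmetric matrices with Gaussian entries, normalized by $\displaystyle \sqrt{n}$ (one common Wigner normalization; the matrix size and number of trials are arbitrary choices of mine), and averages $\displaystyle \frac{1}{n}\sum_i f(\lambda_i)$ with $\displaystyle f(x)=x^k$.

```python
import numpy as np

def mean_eigenvalue_moment(n, k, trials=50, seed=0):
    """Monte Carlo estimate of the k-th moment of the mean
    eigenvalue distribution: E[(1/n) * sum_i lambda_i^k]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        # Symmetrize so off-diagonal entries have variance 1,
        # then rescale by sqrt(n) (Wigner normalization).
        m = (a + a.T) / np.sqrt(2 * n)
        lam = np.linalg.eigvalsh(m)
        total += np.mean(lam ** k)
    return total / trials

# Even moments should approach the semicircle moments 1, 2, 5, ...
# as n grows; odd moments should be near 0.
print(mean_eigenvalue_moment(200, 2))  # close to 1
print(mean_eigenvalue_moment(200, 4))  # close to 2
```

With this normalization the second moment comes out near 1 and the fourth near 2, matching the standard semicircle law on $\displaystyle [-2,2]$.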

In the case of Wigner's theorem (in the way you state it, which is a weak form), the proof that $\displaystyle \int x^k d\mu_n(x) \to_n \int x^k d\sigma(x)$ for all $\displaystyle k$, where $\displaystyle \sigma$ is the semicircle law, goes through combinatorial arguments (I'll give a reference). This holds under a suitable normalization of the matrices, or a suitable choice of the variances of the Gaussian entries (in that sense, your statement is incomplete).

And you can justify that the moments of the semicircle law characterize it (i.e. that the condition given on Wikipedia holds) by showing that they grow at most exponentially: $\displaystyle \int x^{2k} d\sigma(x)\leq C^k$ for some $\displaystyle C$ (in fact $\displaystyle C=4$). The odd moments are 0, for $\displaystyle \mu_n$ as well, so they don't matter. This bound ensures that the moment generating function (or Laplace transform) $\displaystyle \int e^{\lambda x} d\sigma(x)=\sum_k \frac{\lambda^k}{k!}\int x^k d\sigma(x)$ exists; this function depends only on the moments, and it characterizes the distribution (a classic fact), hence the moments characterize the distribution. You'll find the computation of the moments in the reference. There is probably an elementary proof of the method of moments in this specific case (using the above bound and Laplace transforms), rather than the theorem from Wikipedia, which is "optimal" in a sense but probably not easy (actually I don't know a proof, so I may be wrong).
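To make the bound concrete: the even moments of the standard semicircle law (density $\displaystyle \frac{1}{2\pi}\sqrt{4-x^2}$ on $\displaystyle [-2,2]$) are the Catalan numbers $\displaystyle C_k=\frac{1}{k+1}\binom{2k}{k}$, which indeed satisfy $\displaystyle C_k\leq 4^k$. Here is a small numerical check (my own illustration; the midpoint quadrature and grid size are arbitrary choices):

```python
import math
import numpy as np

def semicircle_moment(k, points=1_000_000):
    """Midpoint-rule quadrature of int x^k * sqrt(4 - x^2)/(2*pi) dx
    over [-2, 2]: the k-th moment of the standard semicircle law."""
    width = 4.0 / points
    x = np.linspace(-2, 2, points, endpoint=False) + width / 2  # cell midpoints
    density = np.sqrt(4 - x ** 2) / (2 * np.pi)
    return np.sum(x ** k * density) * width

for k in range(1, 5):
    catalan = math.comb(2 * k, k) // (k + 1)
    # Even moments are Catalan numbers 1, 2, 5, 14, ..., all <= 4^k.
    print(2 * k, semicircle_moment(2 * k), catalan, catalan <= 4 ** k)
```

The quadrature agrees with the Catalan numbers to several decimal places, and the odd moments vanish by symmetry.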

You'll find every detail I skipped in the first pages of this (massive) introduction to random matrices by A. Guionnet and O. Zeitouni. They actually prove a stronger result. The sequence of mean eigenvalue distributions is denoted $\displaystyle \overline{L}_N$, not to be confused with $\displaystyle L_N$, which is the (random) uniform distribution on the set of eigenvalues.

When we are dealing with $\displaystyle (L_N)_N$, we are actually dealing with a sequence of random probability distributions (namely, the uniform distribution on the set of the eigenvalues of the matrices in the sequence). Therefore it makes little sense to say that $\displaystyle L_N$ converges in distribution to something. It could however converge in distribution almost surely (i.e. for almost all matrices, the corresponding sequence of "uniform distributions on eigenvalues" converges in distribution), or in probability (which takes more care to define). For Wigner's theorem in a stronger form, we have a statement like, loosely speaking: "In probability, the sequence of "uniform distributions on eigenvalues" converges in distribution"... This is what the reference spells out and proves, using the convergence of the moments of the mean eigenvalue distribution (yours) and an additional property quantifying how much the eigenvalue distribution fluctuates around the mean eigenvalue distribution.
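To see this concentration informally: for a single large matrix (no averaging over samples), the moments of $\displaystyle L_N$ are already very close to the semicircle moments, because linear eigenvalue statistics fluctuate much less than the usual $\displaystyle 1/\sqrt{n}$ scale. A quick sketch (my own illustration, with an arbitrary size and the same normalization as before):

```python
import numpy as np

# One sample of a normalized n x n symmetric Gaussian matrix;
# no expectation is taken, so these are moments of the random
# measure L_N itself, not of the mean eigenvalue distribution.
rng = np.random.default_rng(1)
n = 1000
a = rng.standard_normal((n, n))
m = (a + a.T) / np.sqrt(2 * n)
lam = np.linalg.eigvalsh(m)

# Even for a single draw these land close to the semicircle
# moments 1 and 2, and the spectrum sits near [-2, 2].
print(np.mean(lam ** 2), np.mean(lam ** 4))
print(lam.min(), lam.max())
```

This is only an empirical hint at the stronger statement; the reference quantifies the fluctuations precisely.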

I hope this clarifies a few things. Tell me if not.