Math Help - Method of moments

  1. #1
    0-)

    Method of moments

    (n is the size of the matrix and we are dealing with random matrices with entries taken from the normal distribution).
    'As n tends to infinity, the kth moment of the mean eigenvalue distribution of the matrix tends to the kth moment of the standard semicircle law. By the method of moments, this shows that the mean eigenvalue distribution tends to the standard semicircle law as n gets larger.'

    Firstly, is this correct, and secondly, how is the method of moments used here? I thought it only dealt with a sequence of distributions...?

    I haven't done any probability theory for years so I apologise if this is a simple question.

  2. #2
    Laurent (MHF Contributor)
    Quote Originally Posted by 0-)
    (n is the size of the matrix and we are dealing with random matrices with entries taken from the normal distribution).
    'As n tends to infinity, the kth moment of the mean eigenvalue distribution of the matrix tends to the kth moment of the standard semicircle law. By the method of moments, this shows that the mean eigenvalue distribution tends to the standard semicircle law as n gets larger.'

    Firstly, is this correct, and secondly, how is the method of moments used here? I thought it only dealt with a sequence of distributions...?

    I haven't done any probability theory for years so I apologise if this is a simple question.
    I think you don't fully understand what the "mean eigenvalue distribution" is. Given a random n\times n Hermitian matrix M with (random) (real) eigenvalues \lambda_1,\ldots,\lambda_n (with multiplicity, in any order), the mean eigenvalue distribution is the distribution \mu on \mathbb{R} defined by: for all bounded measurable f:\mathbb{R}\to\mathbb{R},

    \int f\,d\mu=E\left[\frac{1}{n}\sum_{i=1}^n f(\lambda_i)\right],
    or equivalently, for any measurable subset A of \mathbb{R},

    \mu(A)=\frac{1}{n}\sum_{i=1}^n P(\lambda_i\in A).
    (for this one to make full sense, we would need to choose an order, for instance \lambda_1\leq\cdots\leq\lambda_n)

    All I want to highlight is that \mu is just a probability distribution on \mathbb{R}. Therefore, if you have a sequence (\mu_n)_n of distributions defined similarly with respect to other matrices (of any size), this is just a sequence of distributions on \mathbb{R}, and the method of moments applies as usual (or "as Wikipedia says").
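
    For a concrete picture, here is a minimal numerical sketch of \mu_n (assuming NumPy, and real symmetric matrices with i.i.d. Gaussian entries under one common normalization; both are my own choice, not part of your statement). It just estimates \int f\,d\mu_n=E\left[\frac{1}{n}\sum_i f(\lambda_i)\right] by Monte Carlo, for a couple of test functions f:

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_eig_integral(f, n, n_samples=500):
            # Monte Carlo estimate of  int f dmu_n = E[(1/n) sum_i f(lambda_i)]
            # for real symmetric matrices (G + G^T)/sqrt(2n), G having i.i.d. N(0,1) entries.
            total = 0.0
            for _ in range(n_samples):
                g = rng.standard_normal((n, n))
                a = (g + g.T) / np.sqrt(2 * n)   # normalization chosen so the spectrum stays bounded
                lam = np.linalg.eigvalsh(a)      # real eigenvalues of the symmetric matrix
                total += np.mean(f(lam))
            return total / n_samples

        # mu_n is just a probability distribution on R: we can ask for a moment of it,
        # or for the mass it gives to a set, exactly as in the two definitions above.
        print(mean_eig_integral(lambda x: x**2, n=100))                  # int x^2 dmu_n
        print(mean_eig_integral(lambda x: (x >= 0) & (x <= 1), n=100))   # mu_n([0,1])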

    In the case of Wigner's theorem (in the way you state it, which is a weak form), the proof that \int x^k d\mu_n(x) \to_n \int x^k d\sigma(x) for all k, where \sigma is the semicircle law, goes through combinatorial arguments (I'll give a reference); this holds under a suitable normalization of the matrices, or a suitable choice of the variances of the Gaussian entries (in that sense, your statement is not complete).
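
    To make that convergence of moments concrete, here is a small check under the same normalization as in the sketch above (again my own choice of model, since your statement leaves it open): the empirical kth moments approach the semicircle moments, which are 0 for odd k and the Catalan numbers for even k.

        import numpy as np
        from math import comb

        rng = np.random.default_rng(1)

        def empirical_moments(ks, n, n_samples=200):
            # E[(1/n) sum_i lambda_i^k] for each k in ks, estimated by Monte Carlo.
            acc = np.zeros(len(ks))
            for _ in range(n_samples):
                g = rng.standard_normal((n, n))
                lam = np.linalg.eigvalsh((g + g.T) / np.sqrt(2 * n))
                acc += np.array([np.mean(lam ** k) for k in ks])
            return acc / n_samples

        def semicircle_moment(k):
            # Moments of the semicircle law on [-2, 2]: 0 for odd k, Catalan number C_{k/2} for even k.
            return 0.0 if k % 2 else comb(k, k // 2) / (k // 2 + 1)

        ks = list(range(1, 7))
        print("semicircle:", [semicircle_moment(k) for k in ks])
        for n in (20, 100, 400):
            print(n, np.round(empirical_moments(ks, n), 3))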

    And you can justify that the moments of the semicircle law characterize it (i.e. that the condition given in the Wikipedia article holds) by showing that they grow at most exponentially: \int x^{2k} d\sigma(x)\leq C^k for some C (=4); the odd moments are 0, and the same is true for \mu_n, so they don't matter. The moment generating function (or Laplace transform) \int e^{\lambda x} d\sigma(x)=\sum_k \frac{\lambda^k}{k!}\int x^k d\sigma(x) is therefore finite; this function depends only on the moments, and it characterizes the distribution (a classical fact), hence the moments characterize the distribution. You'll find the computation of the moments in the reference. There is probably an elementary proof of the method-of-moments theorem in this specific case (using the above bound and Laplace transforms), rather than the theorem from Wikipedia, which is "optimal" in a sense but probably not easy (actually I don't know a proof, so I may be wrong).
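
    For completeness, and assuming the "standard semicircle law" means the law \sigma on [-2,2] with density \frac{1}{2\pi}\sqrt{4-x^2} (the normalization is only implicit in this thread), the even moments are the Catalan numbers, which gives exactly the bound above:

    \int x^{2k}\,d\sigma(x)=\frac{1}{2\pi}\int_{-2}^{2}x^{2k}\sqrt{4-x^2}\,dx=\frac{1}{k+1}\binom{2k}{k}\leq 4^k,

    so \sum_k \frac{|\lambda|^{2k}}{(2k)!}\int x^{2k}\,d\sigma(x)\leq\sum_k \frac{(2|\lambda|)^{2k}}{(2k)!}<\infty, and the Laplace transform \int e^{\lambda x} d\sigma(x) is indeed finite for every \lambda.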

    You'll find every detail I skipped in the first pages of this (massive) introduction to random matrices by A. Guionnet and O. Zeitouni. They actually prove a stronger result. The sequence of mean eigenvalue distributions is denoted \overline{L}_N, not to be confused with L_N, which is the (random) uniform distribution on the set of eigenvalues.

    When we are dealing with (L_N)_N, we are actually dealing with a sequence of random probability distributions (namely, the uniform distribution on the set of eigenvalues of each matrix in the sequence). Therefore it makes little sense to say that L_N converges in distribution to something. It could however converge in distribution almost surely (i.e. for almost all matrices, the corresponding sequence of "uniform distributions on eigenvalues" converges in distribution), or in probability (which takes more care to define). For Wigner's theorem in its stronger form, the statement is, loosely speaking: "In probability, the sequence of uniform distributions on eigenvalues converges in distribution"... This is what the reference makes explicit and proves, using the convergence of the moments of the mean eigenvalue distribution (yours) and an additional property quantifying how much the eigenvalue distribution fluctuates around the mean eigenvalue distribution.
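
    To see this concentration numerically, here is one more sketch (same assumed Gaussian model and normalization as above): the moments of L_N computed from a single sampled matrix, with no expectation taken, already land close to the semicircle moments 1, 2, 5 once N is large; this is the small fluctuation around the mean eigenvalue distribution at work.

        import numpy as np

        rng = np.random.default_rng(2)

        def single_realization_moments(n, ks=(2, 4, 6)):
            # Moments of L_N for ONE sampled matrix (no expectation over the randomness).
            g = rng.standard_normal((n, n))
            lam = np.linalg.eigvalsh((g + g.T) / np.sqrt(2 * n))
            return [round(float(np.mean(lam ** k)), 3) for k in ks]

        for n in (50, 500, 2000):
            print(n, single_realization_moments(n))   # should approach [1, 2, 5]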

    I hope this clarifies a few things. Tell me if not.

  3. #3
    0-)
    Thank you for that. I read through your post but there were some things I didn't understand. I'm currently doing a mini-project on Wigner's law, focusing on the combinatorial proof and avoiding the probability/linear algebra, so a lot of this is new to me.

    You're right that I didn't understand the 'mean eigenvalue distribution'. What are the measurable functions f that you mention when defining this distribution? I don't really see why/how they are part of this definition.

  4. #4
    Laurent (MHF Contributor)
    Quote Originally Posted by 0-)
    You're right that I didn't understand the 'mean eigenvalue distribution'. What are the measurable functions f that you mention when defining this distribution? I don't really see why/how they are part of this definition.
    There are several ways to define a (probability) measure \mu. The most direct way (the one closest to the definition) is simply to give the measure \mu(A) of measurable subsets (either all of them, or a family that generates the \sigma-algebra, like the intervals on \mathbb{R}). In some situations, it is more natural to give the integral \int f d\mu of measurable functions; letting f={\bf 1}_A recovers the usual definition. Both definitions I gave are equivalent. It is however more apparent from the definition through measurable functions that the measure depends only on the set of eigenvalues, not on an ordering of them; that's why I gave both.
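
    Spelled out for the mean eigenvalue distribution itself: taking f={\bf 1}_A in the first definition gives

    \int {\bf 1}_A\,d\mu=E\left[\frac{1}{n}\sum_{i=1}^n {\bf 1}_A(\lambda_i)\right]=\frac{1}{n}\sum_{i=1}^n P(\lambda_i\in A)=\mu(A),

    which is exactly the second definition.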

  5. #5
    0-)
    Quote Originally Posted by Laurent
    There are several ways to define a (probability) measure \mu. The most direct way (the one closest to the definition) is simply to give the measure \mu(A) of measurable subsets (either all of them, or a family that generates the \sigma-algebra, like the intervals on \mathbb{R}). In some situations, it is more natural to give the integral \int f d\mu of measurable functions; letting f={\bf 1}_A recovers the usual definition. Both definitions I gave are equivalent. It is however more apparent from the definition through measurable functions that the measure depends only on the set of eigenvalues, not on an ordering of them; that's why I gave both.
    Oh OK. Thank you.

    In a book I have, the kth moment of the mean eigenvalue distribution of a matrix \mathbf{T} is given by

    \displaystyle \tau_n(\mathbf{T}^k),

    where \tau_n is the tracial state. Where does this come from?

  6. #6
    Laurent (MHF Contributor)
    Quote Originally Posted by 0-)
    Oh OK. Thank you.

    In a book I have, the kth moment of the mean eigenvalue distribution of a matrix \mathbf{T} is given by

    \displaystyle \tau_n(\mathbf{T}^k),

    where \tau_n is the tracial state. Where does this come from?
    If you refer to the definition of the mean eigenvalue distribution with f(x)=x^k, you get:

    \int x^k d\mu(x)=\frac{1}{n}E\left[\sum_{i=1}^n (\lambda_i)^k\right],

    and you know (this is not probability) that {\rm Tr}(A^k)=\sum_{i=1}^n (\lambda_i)^k if \lambda_1,\ldots,\lambda_n are the eigenvalues of A and {\rm Tr} is the trace. Thus, the kth moment of the mean eigenvalue distribution is

    \int x^k d\mu(x)=E\left[\frac{1}{n}{\rm Tr}(A^k)\right].

    This must correspond to your statement (I don't know what the tracial state is).
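
    A quick numerical sanity check of that identity (a minimal sketch assuming NumPy; the symmetric Gaussian model and its normalization are my own choice, not taken from your book):

        import numpy as np

        rng = np.random.default_rng(3)
        n, k = 6, 4

        # Tr(A^k) = sum_i lambda_i^k: pure linear algebra, no probability involved.
        g = rng.standard_normal((n, n))
        a = (g + g.T) / np.sqrt(2 * n)
        lam = np.linalg.eigvalsh(a)
        print(np.trace(np.linalg.matrix_power(a, k)), np.sum(lam ** k))

        # Averaging (1/n) Tr(A^k) over many samples then estimates int x^k dmu(x) = E[(1/n) Tr(A^k)].
        acc = 0.0
        for _ in range(2000):
            g = rng.standard_normal((n, n))
            a = (g + g.T) / np.sqrt(2 * n)
            acc += np.trace(np.linalg.matrix_power(a, k)) / n
        print(acc / 2000)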
