# Thread: Rayleigh estimator and confidence intervals for small N

1. ## Rayleigh estimator and confidence intervals for small N

I have a small number of samples from a Rayleigh process, and I am trying to estimate the Rayleigh parameter sigma.

This popular exercise suggests that $\displaystyle \widehat{ \sigma} = \frac{\sum r^{2}_{i} }{2n}$ is an unbiased estimator for sigma. Except that it doesn't come close to the underlying sigma unless you take the square root of the result, and even then it is biased for small n! (I verified this via Monte Carlo simulation.)
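For anyone who wants to reproduce this, here's a minimal Monte Carlo sketch (assuming numpy; the seed, n, and trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 1.0, 3, 200_000

# Draw `trials` independent samples of size n from a Rayleigh(sigma) process
r = rng.rayleigh(scale=sigma, size=(trials, n))

s2_hat = (r**2).sum(axis=1) / (2 * n)  # estimator of sigma^2
s_hat = np.sqrt(s2_hat)                # square-root estimator of sigma

print(s2_hat.mean())  # close to sigma^2 = 1
print(s_hat.mean())   # noticeably below sigma = 1 for small n
```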

So for small Rayleigh populations is there a BLUE, or a correction term to the MLE $\displaystyle \widehat{ \sigma} = \sqrt{\frac{\sum r^{2}_{i} }{2n}}$?

Furthermore, what can I say about confidence in the parameter estimate for a given population of n samples?

2. ## Re: Rayleigh estimator and confidence intervals for small N

Hey dbooksta.

Did you try using the MLE to get an estimator of sigma? Once you do that, you can compare it to the MOM estimator and see how the value of n affects the bias of the estimator, which will let you add a correction factor that is a function of n.

3. ## Re: Rayleigh estimator and confidence intervals for small N

Please have some patience with me -- I managed to get a B.S. in Math without covering any formal statistics:

The MLE I give is an estimator of the Rayleigh parameter sigma. I assume the source of the bias is analogous to the bias in standard deviation estimators resulting from the presence of an exponent in the sample sum. (In fact, the Rayleigh distribution has relationships to other common distributions, so I am hoping no new ground needs to be broken to answer my question!)

Checking MOM: The distribution only has one parameter, so MOM only has us looking at the first moment, $\displaystyle m_1 = \sigma \sqrt{\frac{\pi}{2}}$, or $\displaystyle \sigma = m_1 \sqrt{\frac{2}{\pi}}$: a constant times the sample mean, which I guess tells us once again that the bias is a result of the concavity of the square root in the estimator for sigma. Have I used MOM correctly?
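As a quick numerical sanity check of that MOM formula (a sketch, assuming numpy; the true scale and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_sigma = 2.0
r = rng.rayleigh(scale=true_sigma, size=1000)

# MOM: equate the sample mean to E[R] = sigma * sqrt(pi/2)
sigma_mom = r.mean() * np.sqrt(2 / np.pi)
print(sigma_mom)  # near true_sigma = 2.0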

Again, given the similarities, I wouldn't be surprised to see $\displaystyle c_4$ show up here, but I can't make the connection.

P.S. Any hints on why the two Latex expressions in the third paragraph aren't rendering? [Edit: Fixed them -- had used \TEX instead of /TEX to terminate!]

4. ## Re: Rayleigh estimator and confidence intervals for small N

I don't know what is wrong with your latex but if anyone knows please reply so we can fix it up.

5. ## Re: Rayleigh estimator and confidence intervals for small N

And further confusion: the MLE I am frequently finding as I research this is for the Rayleigh parameter squared -- i.e., $\displaystyle \sigma^2 = \vartheta = \frac{\sum r^{2}_{i} }{2n}$. As I mentioned initially, that appears to be unbiased, and both the Rayleigh pdf and CDF only refer to $\displaystyle \sigma^2$. So how do we introduce bias when we use the square root of that unbiased parameter estimation in the formulas for moments?

6. ## Re: Rayleigh estimator and confidence intervals for small N

Thats OK since you can use the invariance principle for the MLE estimator.

You will get bias and it may be complicated, but in terms of the point estimate the square root should be OK.

Just out of curiosity, do you need to measure sigma as opposed to sigma^2? If so what is the reason and the context of your decision?

7. ## Re: Rayleigh estimator and confidence intervals for small N

Originally Posted by chiro
Just out of curiosity, do you need to measure sigma as opposed to sigma^2? If so what is the reason and the context of your decision?
I'll be using both, but I'm more likely to need confidence intervals on sigma since I care most about the expected mean of the sampled process. But then I'll also be calculating probabilities based on the sample parameter, and those use sigma^2.

BTW, here's a thread on this very forum with the common MLE and proof that the sigma^2 estimator is unbiased. But it I'm coming at this convinced that it is biased for sigma and small N -- do I need to post Monte Carlo simulations to demonstrate that, or is "unbiased" used loosely in statistics to exclude small samples? Or could I have a confidence interval problem? I wish I could make sense of this....

8. ## Re: Rayleigh estimator and confidence intervals for small N

Originally Posted by dbooksta
Please have some patience with me -- I managed to get a B.S. in Math without covering any formal statistics:

The MLE I give is an estimator of the Rayleigh parameter sigma. I assume the source of the bias is analogous to the bias in standard deviation estimators resulting from the presence of an exponent in the sample sum. (In fact, the Rayleigh distribution has relationships to other common distributions, so I am hoping no new ground needs to be broken to answer my question!)

Checking MOM: The distribution only has one parameter, so MOM only has us looking at the first moment, [TEX]m_1 = \sigma \sqrt{\frac{\pi}{2}}[\TEX], or [TEX]\sigma = m_1 \sqrt{\frac{2}{\pi}}[\TEX]: a constant times the sample mean, which I guess tells us once again that the bias is a result of the concavity of the $\displaystyle \sqrt{n}$ in the estimator for sigma. Have I used MOM correctly?

Again, given the similarities, I wouldn't be surprised to see $\displaystyle c_4$ show up here, but I can't make the connection.

P.S. Any hints on why the two Latex expressions in the third paragraph aren't rendering?
Tex correctly rendered should be: $\displaystyle m_{1}=\sigma \sqrt{\frac{\pi}{2}}$, or $\displaystyle \sigma = m_1 \sqrt{\frac{2}{\pi}}$

As a FYI, I have no idea what you are talking about

9. ## Re: Rayleigh estimator and confidence intervals for small N

I don't know about the Rayleigh, but for the Normal distribution there are un-biased estimators and they are super complicated (using a tonne of Gamma functions).

I might suggest you look into that and see if you can adapt the solution in terms of the Rayleigh distributions sigma.

You could also use the Monte-Carlo distribution as well with regards to verifying any analytical solution you may get.

10. ## Re: Rayleigh estimator and confidence intervals for small N

Based on Monte-Carlo I can say that $\displaystyle c_4^2$ corrects sample parameters to less than 1% error for N > 10, but starting with N = 2 it has the following errors: 5.3%, 2.5%, 1.4%, 0.9%.

Am I correct in assuming that the "correct" correction factor would have only rounding error for small N?

Obviously I can use the Monte-Carlo correction factors in practice, but I'm curious to see the analytic estimator correction (even if it's as difficult to use as $\displaystyle c_4$). I don't think I have the capacity to derive that myself, but I'd put up a small "bounty" for the satisfaction of seeing one. Is there anyplace I might post such a challenge and reward for the consideration of "real" statisticians?

11. ## Re: Rayleigh estimator and confidence intervals for small N

I would try talkstats forums or stackexchange forums: you get researchers on there so you might have luck with it there.