# Estimate for population variance lognormal


• Jan 11th 2011, 01:20 AM
sharpe
Estimate for population variance lognormal
If I have a small set of data - say 10 points. I am assuming the data has a lognormal distribution.

How can I estimate the population variance?

If I take an estimate from the method of moments or a maximum likelihood approach, is it reasonable to say that we are underestimating the population variance, given so few data points?

Could I ask for a practical approach that might be used to get a reasonable estimate of the population variance that is on the prudent side?
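For concreteness, the two estimates I have in mind look roughly like this (a minimal sketch in Python, assuming numpy; the sample here is simulated as a stand-in for my 10 data points):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10)   # stand-in for the 10 observations

# Method of moments: match the sample mean and (biased) sample variance directly
mom_mean, mom_var = x.mean(), x.var(ddof=0)

# Maximum likelihood: fit the normal on the log scale, then convert back
# to the mean/variance of the lognormal itself
logs = np.log(x)
mu_hat, sigma2_hat = logs.mean(), logs.var(ddof=0)   # MLE uses 1/n, not 1/(n-1)
mle_mean = np.exp(mu_hat + sigma2_hat / 2)
mle_var = (np.exp(sigma2_hat) - 1.0) * np.exp(2.0 * mu_hat + sigma2_hat)

print(mom_var, mle_var)
```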

I did see some things here: Unbiased estimation of standard deviation - Wikipedia, the free encyclopedia, which looked a little complex for a lognormal.

Many thanks for any comments
• Jan 11th 2011, 05:09 AM
CaptainBlack
Quote:

Originally Posted by sharpe
If I have a small set of data - say 10 points. I am assuming the data has a lognormal distribution.

How can I estimate the population variance?

If I take an estimate from the method of moments or a maximum likelihood approach, is it reasonable to say that we are underestimating the population variance, given so few data points?

Could I ask for a practical approach that might be used to get a reasonable estimate of the population variance that is on the prudent side?

I did see some things here: Unbiased estimation of standard deviation - Wikipedia, the free encyclopedia, which looked a little complex for a lognormal.

Many thanks for any comments

This may be of little or no use to you, and will probably require some work to make usable, but what I would consider doing is:

1. Obtain rough estimates of the mean and variance from the sample by any reasonable method.

2. Write a Monte-Carlo routine that, given the mean and variance of a log-normal, generates a large number of samples of size 10 and computes the same rough estimates of the mean and variance for each size-10 sample. Averaging these gives an estimate of the biases, and so a new estimate of the mean and variance. Now loop around the MC routine (starting with the initial rough estimates), adjusting until the estimates of the mean and variance stop changing appreciably.

The resulting estimates (of the mean and variance) will be (approximately) unbiased.

Handwaving explanation: we assume some guess at the mean and variance of the log-normal, use Monte-Carlo to estimate the bias of our (any old) estimator under that guess, use this to correct the bias in our estimate and obtain a new guess at the mean and variance of the log-normal, and repeat until we have acceptable convergence (convergence will never be perfect because of the MC element of this process). A rough sketch of the loop is given below.
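Something along these lines, as a minimal sketch in Python (numpy assumed; the choice of rough estimators, the Monte-Carlo sample count and the stopping tolerance are placeholders to adjust):

```python
import numpy as np

rng = np.random.default_rng(0)

def rough_estimates(x):
    # step 1: any reasonable rough estimators of the lognormal mean and variance;
    # here simply the sample mean and the (biased) sample variance
    return x.mean(), x.var(ddof=0)

def normal_params(mean, var):
    # convert a target lognormal mean/variance to the underlying normal (mu, sigma)
    sigma2 = np.log(1.0 + var / mean**2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

def bias_corrected(data, n_mc=20000, n_iter=50, tol=1e-3):
    n = len(data)
    m_obs, v_obs = rough_estimates(np.asarray(data, dtype=float))
    m_cur, v_cur = m_obs, v_obs
    for _ in range(n_iter):
        mu, sigma = normal_params(m_cur, v_cur)
        # step 2: many simulated samples of size n from the assumed lognormal
        sims = rng.lognormal(mu, sigma, size=(n_mc, n))
        # average the rough estimates over the simulated samples -> estimated biases
        bias_m = sims.mean(axis=1).mean() - m_cur
        bias_v = sims.var(axis=1, ddof=0).mean() - v_cur
        # correct the observed rough estimates and use them as the new guess
        m_new, v_new = m_obs - bias_m, v_obs - bias_v
        done = abs(m_new - m_cur) < tol * m_cur and abs(v_new - v_cur) < tol * v_cur
        m_cur, v_cur = m_new, v_new
        if done:
            break
    return m_cur, v_cur

# demo: a simulated "observed" sample of size 10
sample = rng.lognormal(0.0, 1.0, size=10)
print(bias_corrected(sample))
```

Since the plain sample variance of a lognormal is biased downward in small samples, the corrected variance should typically come out larger than the raw estimate, which is the "prudent side" asked for above.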

CB
• Jan 11th 2011, 06:11 AM
sharpe
I like the sound of this idea - it sounds practical. Do you know if any research has been done along these lines? I would have thought you could probably get some kind of factor to raise the s.d. by, given the sample size and assumed underlying distribution.
• Jan 11th 2011, 06:37 AM
CaptainBlack
Quote:

Originally Posted by sharpe
I like the sound of this idea - it sounds practical. Do you know if any research has been done along these lines? I would have thought you could probably get some kind of factor to raise the s.d. by, given the sample size and assumed underlying distribution.

This is a semi-bootstrap bias correction technique (semi-bootstrap because we are going to assume a log-normal distribution). I have not seen any literature on this exactly; I tend to generate these as needed, but IIRC this may be the jackknife. You could look at the literature on resampling techniques, the bootstrap and the jackknife.
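For comparison, the jackknife recomputes the statistic leaving one observation out at a time and corrects the bias from those leave-one-out values; a minimal sketch (Python with numpy assumed, plain sample variance as the statistic):

```python
import numpy as np

def jackknife_variance(x):
    # jackknife bias-corrected estimate of the population variance
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_full = x.var(ddof=0)                           # biased plug-in estimate
    theta_loo = np.array([np.delete(x, i).var(ddof=0)    # leave-one-out estimates
                          for i in range(n)])
    # standard jackknife correction: n*theta_full - (n-1)*mean(theta_loo)
    return n * theta_full - (n - 1) * theta_loo.mean()
```

For the plain sample variance this reproduces the usual n/(n-1) correction; it is distribution-free, whereas the Monte-Carlo loop above exploits the assumed log-normal shape.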

CB
• Jan 11th 2011, 10:54 AM
sharpe
I will also look at the confidence interval on sigma. Thanks a lot for your pointers; that is very helpful.