So I have distributions of nanoparticle sizes (lognormal) from a particle-sizing program, and I was wondering how I would go about constructing a confidence interval for where the true mean size should be, based on my samples.

The problem is that the program only gives me the following sample statistics (and not the parameters of the fitted distribution):

Mean Volume Diameter – the center of gravity of the distribution

SD – Standard Deviation, in microns; describes the width of the measured particle size distribution

Mz – Graphic Mean = (16th + 50th + 84th percentiles) / 3

SDg – Graphic Standard Deviation = ((84th – 16th) / 4) + ((95th – 5th) / 6.6)

Kg – Graphic Kurtosis = (95th – 5th) / (2.44 * (75th – 25th))
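To make sure I'm reading those formulas correctly, here's a quick sketch of how I understand the graphic statistics (the percentile values are made up, just for illustration):

```python
# "Graphic" statistics as I understand the program's formulas.
# The percentile values below are hypothetical diameters in microns.
p = {5: 80.0, 16: 95.0, 25: 100.0, 50: 110.0, 75: 120.0, 84: 125.0, 95: 140.0}

Mz = (p[16] + p[50] + p[84]) / 3                    # Graphic Mean
SDg = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6    # Graphic Standard Deviation
Kg = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))      # Graphic Kurtosis

print(Mz, round(SDg, 2), round(Kg, 2))
```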

Basically, I synthesize the particles 3 times and get 3 different distributions, and I'm trying to get a 90% confidence interval for that particular type of particle.

Originally I was planning to just use the three means and SDs with a t-distribution (df = 2) to generate a 90% confidence interval for the particles, but my stats teacher said that wasn't the right way of doing it, yet she doesn't know how to do it either. (I'm in HS, so my background in stats is just AP Stats and multivariable calc; I don't know much else.)
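Concretely, here is what I was planning to do, with made-up mean volume diameters from the three syntheses (the numbers are hypothetical):

```python
import math
from statistics import mean, stdev

# Hypothetical mean volume diameters (microns) from the 3 syntheses
sample_means = [112.0, 118.5, 109.3]

n = len(sample_means)
xbar = mean(sample_means)
s = stdev(sample_means)   # sample SD of the 3 batch means
t_crit = 2.920            # t* for a 90% CI with df = n - 1 = 2

half_width = t_crit * s / math.sqrt(n)
print(f"90% CI: {xbar - half_width:.1f} to {xbar + half_width:.1f} microns")
```

This treats each batch's mean diameter as one observation, which is the part my teacher objected to.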

So can anyone take a crack at this problem, or just point me in the right direction?

How can I go about generating a confidence interval for the particle sizes? (Or is that just not possible given my lack of other information?)