My school requires me to work through a half-year final project. I've chosen the nature of random numbers and random number generators as the loose topic of the project. Now, I've realized that I need a clear main question or a clear idea of what I want to find out in my project.
I can think of:
* writing a paper that brings order to the jungle of random number generators: surveying what's available, and showing which generators are good, which aren't, and why.
* testing which requirements the parameters of simpler generators must meet, and why.
* finding out why checksums produce outputs that look perfectly random.
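For the second bullet, the only concrete starting point I have is the linear congruential generator, x_{n+1} = (a*x_n + c) mod m, where (as far as I understand) the Hull-Dobell theorem gives exact conditions on a, c, m for the generator to reach its full period. A rough Python sketch of the kind of experiment I mean (the parameter values are just small illustrative examples):

```python
# Minimal linear congruential generator (LCG) sketch.
# The Hull-Dobell theorem: an LCG with c != 0 has full period m iff
#   1) c and m are coprime,
#   2) a - 1 is divisible by every prime factor of m,
#   3) a - 1 is divisible by 4 whenever m is divisible by 4.
def lcg(a, c, m, seed):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(a, c, m, seed=0):
    """Count distinct states visited before the sequence repeats."""
    seen = set()
    for x in lcg(a, c, m, seed):
        if x in seen:
            return len(seen)
        seen.add(x)

# a=5, c=3, m=16 satisfies all three conditions -> full period 16.
print(period(5, 3, 16))
# a=4, c=3, m=16 violates condition 2 (a-1=3 isn't divisible by 2)
# -> a much shorter cycle.
print(period(4, 3, 16))
```

Experiments like this could show *why* the conditions matter, by breaking them one at a time and watching the period collapse.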
You see, I just don't have a lot of ideas of what experiments I could perform concerning random numbers and random number generators.
I just feel like I need results that are new to the world. I know that this isn't the goal of this project, but I'm still not happy with any of my ideas.
My school level is comparable to high school and thus my mathematical level isn't too high, but the project isn't such a big deal either. So please don't tell me to choose another topic.
I'd be very happy if you could suggest some ideas for my project.
Thanks a lot
I have one I made my undergraduates do last year.
It was in Wackerly's book but it's been removed.
It was an excellent example of what the confidence coefficient means in a confidence interval.
It shows a lot of interesting stuff.
Have you covered confidence intervals yet???
The point is to show that the coverage probability for the parameter is approximately 1 - alpha, say .95.
I can walk you through this if you're interested.
It covers generating uniform rvs, then transforming them.
Then applying the central limit theorem, then the strong law of large numbers.
It really is neat, and it isn't that complicated.
We don't even need to transform the data.
So, have you covered confidence intervals?
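Here's roughly what the simulation looks like — a Python sketch using U(0,1) data (true mean .5) and the usual z-interval for the mean; the sample size and number of trials are just illustrative choices:

```python
import math
import random

# Simulate the coverage of a 95% z-interval for the mean of U(0,1) data.
# The true mean is .5, so we can count how often the interval covers it.
random.seed(42)
z = 1.96        # critical value for alpha = .05
n = 50          # observations per interval (CLT makes the mean ~normal)
trials = 2000   # number of intervals to generate
covered = 0
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    half = z * sd / math.sqrt(n)
    if mean - half <= 0.5 <= mean + half:
        covered += 1
print(covered / trials)  # should come out near .95
```

The neat part is that the printed fraction is an *estimate* of the confidence coefficient itself, so the experiment makes "95% confidence" a thing you can watch happen rather than a definition.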
Here's one you can do, if you haven't covered confidence intervals.
You can show how the sample proportion converges to the success probability of a Bernoulli trial.
Just count all of the U(0,1) rvs that fall between 0 and .5, then divide by the sample size.
It should (with probability 1) converge to .5, the mean.
So do a running count.
p-hat_n (p with a hat over it, subscript n) is the proportion of numbers between 0 and .5 among the first n terms.
Keep doing that.
Watch as the sequence approaches .5.
Likewise, you can count all the draws that fall in (0, p) and see how the proportion converges to p, for any 0 < p < 1.
Take a look at the first 20, 50, 100, 200... terms and see how we are approaching p as n increases.
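In code, the running count is only a few lines — a Python sketch, with p = .5 as the example and checkpoints at the sample sizes mentioned above:

```python
import random

# Running proportion p_hat_n of U(0,1) draws falling below p.
# By the strong law of large numbers it converges to p with probability 1.
random.seed(1)
p = 0.5
count = 0
draws = 10000
checkpoints = (20, 50, 100, 200, 1000, 10000)
for n in range(1, draws + 1):
    if random.random() < p:
        count += 1
    if n in checkpoints:
        print(n, count / n)  # watch the proportion approach p
```

Plotting the running proportion against n (instead of printing it) makes the convergence even more vivid, and changing p or the seed costs nothing.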
First, let me thank you for your answers. I haven't covered statistics at all; our curriculum seems to cover only algebra and analysis. However, I think I'm going to have to look at confidence intervals anyway in the course of my project.
Right now, I'm concerned with the following questions:
* Why do people only take the least significant bit when converting random integers into a binary sequence? If the integers are truly random, their bit representation should be truly random as well. There are as many numbers divisible by 2 as there are numbers larger than half the maximum possible value (that is, the least and most significant bits should each be 0 or 1 equally often). Am I completely wrong here?
* Suppose I have a random number generator that produces integers, and the integers are what I want to use in the end (not the bits). Can there be flaws in the generated numbers that can't be detected by running statistical tests on the binary sequence derived from them (assuming I take all bits of each number and append them to the end of the sequence I already have)?
* Assuming the answer to 2 is no and I only need a test suite for binary inputs, which would you recommend: NIST's Statistical Test Suite, or the Rabbit or Alphabit batteries from L'Ecuyer's TestU01 package?
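To make question 1 concrete, here's the kind of comparison I have in mind — a rough Python sketch using Python's own generator and a simple frequency (monobit) count; the 8-bit integer range is just an example:

```python
import random

# Compare two ways of turning random 8-bit integers into a bit sequence:
#  (a) keep only the least significant bit of each integer,
#  (b) keep all 8 bits of each integer.
# A crude monobit check: the fraction of 1s should be near .5 in both
# cases if the integers really are uniform.
random.seed(7)
ints = [random.randrange(256) for _ in range(10000)]

lsb_bits = [x & 1 for x in ints]
all_bits = [(x >> k) & 1 for x in ints for k in range(7, -1, -1)]

print(sum(lsb_bits) / len(lsb_bits))
print(sum(all_bits) / len(all_bits))
```

With a good generator both fractions should look the same; if only the LSB stream passes, that would tell me something about *where* the generator's structure hides.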
Thanks for reading this bunch of text and in advance for your answers.