It's hard to know what went wrong without some specific details of what you did (including your steps in the Minitab environment).
So, I have a Minitab assignment due this Friday. I have successfully generated 100 columns of 500 rows of sample data and run them through a "One-Sample t" analysis. I was asked to find the expected value of a random number from 0 to 12 (which is 6). I was then asked to find how many of the intervals contained my "expected mu". I received 91 intervals in my random data containing this value. I am now asked to calculate how many intervals I would expect to contain this "mu". I used a Sample Size Determination test and received 100, but I am doubting that this answer is correct. Any help would be much appreciated, thanks!
According to the assignment, you should have 100 individual sample distributions, each with its own mean and standard error of the mean.
Each of these has its own sample mean, and the corresponding 90% confidence interval will either contain the true mean or not.
According to frequentist statistical theory, we expect 90% of the intervals to contain the true mean.
You need to use the software to check, for each of the 100 sample distributions, whether its 90% confidence interval contains the true mean.
Again, under standard frequentist assumptions, we expect 90% of the intervals to contain the true mean.
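The whole exercise can be sketched outside Minitab. This is a hypothetical re-creation in Python, with the parameters taken from the question as described (100 columns of 500 uniform(0, 12) values, true mean mu = 6, 90% t-intervals); the t critical value for 499 degrees of freedom is an approximation.

```python
import random
import math

random.seed(1)

MU = 6.0          # true mean of Uniform(0, 12)
N_SAMPLES = 100   # number of columns in the worksheet
N = 500           # rows per column
T_CRIT = 1.6479   # approx. two-sided 90% t critical value, 499 df

covered = 0
for _ in range(N_SAMPLES):
    data = [random.uniform(0, 12) for _ in range(N)]
    mean = sum(data) / N
    var = sum((x - mean) ** 2 for x in data) / (N - 1)
    half_width = T_CRIT * math.sqrt(var / N)  # t * s / sqrt(n)
    if mean - half_width <= MU <= mean + half_width:
        covered += 1

print(covered)  # typically close to 90 out of 100
```

Rerunning with different seeds gives counts scattered around 90, which is exactly the frequentist claim above: each 90% interval has a 90% chance of capturing the true mean.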
Okay, so I found that a confidence level is the probability that the confidence interval actually contains the population parameter (mu = 6 in this case). So, this means that, in theory, according to the question, I can expect 90% of my total intervals to contain my parameter mu?
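Yes, and the expected count follows directly from that. A minimal sketch of the arithmetic, assuming the 90% level and 100 intervals from the assignment; the number of covering intervals behaves like a Binomial(100, 0.9) count:

```python
confidence = 0.90   # confidence level from the assignment
n_intervals = 100   # number of intervals constructed

# Expected number of intervals containing mu
expected = confidence * n_intervals

# Binomial standard deviation of the actual count
sd = (n_intervals * confidence * (1 - confidence)) ** 0.5

print(expected, sd)  # 90.0 3.0
```

So the expected count is 90, not 100, and an observed count of 91 is well within one standard deviation of that expectation.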
And for number 1, the parameter is a probability p = 0.6. So the answer to part c would be that you could expect 92% of the total number of intervals to contain this value of p?