Confidence intervals and assumption of normality
I am analysing some data and calculating the 95% confidence intervals with the 'usual' approach of Xbar +/- t * s/sqrt(n).
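To make the formula concrete, here is a minimal sketch of that interval in Python (my own illustration, not part of the original question). The t critical value is passed in explicitly so the snippet needs only the standard library; in practice you would look it up in a t table or with a stats package for n-1 degrees of freedom.

```python
import math
from statistics import mean, stdev

def t_confidence_interval(data, tcrit):
    """Xbar +/- tcrit * s / sqrt(n).

    tcrit is the two-sided t critical value for n-1 degrees of
    freedom (e.g. about 2.045 for n = 30 at the 95% level).
    """
    n = len(data)
    xbar = mean(data)
    s = stdev(data)                      # sample SD (n-1 denominator)
    half_width = tcrit * s / math.sqrt(n)
    return xbar - half_width, xbar + half_width
```

For example, `t_confidence_interval(sample, 2.045)` for a sample of 30 observations returns the lower and upper 95% limits.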
However, I have read several times that one of the assumptions behind this method is that the distribution has to be normal (or close to normal). My original data is far from normal and is noticeably skewed to the right. HOWEVER, my understanding is that provided my sample size is >30 (in most cases it is), the sampling distribution of the mean will be approximately normal by the central limit theorem. Since it is the sampling distribution of the mean, not the raw data, that the confidence interval is based on, the original sample distribution does not have to be normal.
Does anyone know if this is correct? That is, as long as n > 30, does the shape of the original distribution not matter, since the sampling distribution of the mean will be normal (or close to normal), so that I can use the above method to calculate the 95% CIs?
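One way to check this for your own situation is a quick coverage simulation. The sketch below (my own, with an exponential distribution standing in for a right-skewed population, and the t critical value 2.045 for 29 df hard-coded from a table) draws many samples of size 30 and counts how often the t interval contains the true mean. For skewed data at n = 30 the coverage tends to come out somewhat below the nominal 95%, which is the practical cost of the skewness.

```python
import math
import random
from statistics import mean, stdev

random.seed(0)
n, reps = 30, 5000
true_mean = 1.0            # mean of an exponential(rate=1), strongly right-skewed
tcrit = 2.045              # two-sided 95% t critical value for 29 df (from a t table)

covered = 0
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]
    xbar, s = mean(sample), stdev(sample)
    half_width = tcrit * s / math.sqrt(n)
    if xbar - half_width <= true_mean <= xbar + half_width:
        covered += 1

coverage = covered / reps
print(coverage)            # empirical coverage of the nominal 95% interval
```

If the printed coverage is close enough to 0.95 for your purposes, the n > 30 rule of thumb is serving you well; rerunning with a distribution shaped like your actual data is more informative than the rule itself.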