# Math Help - What the heck is the standard error?

1. ## What the heck is the standard error?

I want to understand better what a standard error really is (I've read conflicting things). Can anyone help me understand it?

Is it correct to say a standard error is the standard deviation of the sampling distribution of a statistic? By sampling distribution I mean: a statistic (a mean or a regression coefficient, for example) is computed from a sample and is thus a random variable. That random variable has a distribution. That distribution is called the sampling distribution, and the standard deviation of that distribution is the standard error. Am I correct?

Is a standard error always used as a component in a confidence interval?

If you are using statistical software and get a "standard error" listed with the point estimate for a regression coefficient, let's say, can you multiply it by 1.96 and construct a confidence interval, or does it depend on whether there are better formulas for the CI?

2. 1.96 is an N(0,1) percentile that gives you 95 percent coverage in your intervals, i.e., P(-1.96 < Z < 1.96) = .95. So that's for a two-sided interval with probability .95.
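Just to show where that 1.96 comes from, here is a quick check using Python's standard library: a two-sided 95% interval leaves 2.5% in each tail, so 1.96 is the 97.5th percentile of the standard normal.

```python
from statistics import NormalDist

# Two-sided 95% interval: 2.5% in each tail of N(0,1),
# so we need the 97.5th percentile of the standard normal.
z = NormalDist(mu=0, sigma=1).inv_cdf(0.975)
print(round(z, 2))  # 1.96
```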

Now, the standard error is an estimate of the standard deviation of the statistic.
For example, the variance of the sample mean for i.i.d. rvs is $V(\bar X)={\sigma^2\over n}$.
Hence the standard deviation of $\bar X$ is ${\sigma\over \sqrt n}$.
BUT we usually don't know $\sigma$, so we estimate it with the sample standard deviation $S$.
The resulting estimate, ${S\over \sqrt n}$, is the standard error of $\bar X$.
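Putting the pieces together, here is a minimal sketch (the data values are made up for illustration) that computes $S/\sqrt n$ and the approximate 95% interval $\bar X \pm 1.96 \cdot SE$:

```python
import math
from statistics import mean, stdev

# Hypothetical sample data, made up purely for illustration
data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.3]
n = len(data)

xbar = mean(data)       # point estimate of the population mean
s = stdev(data)         # S: sample standard deviation (estimate of sigma)
se = s / math.sqrt(n)   # standard error of the sample mean: S / sqrt(n)

# Approximate two-sided 95% CI: xbar +/- 1.96 * SE
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print(f"mean={xbar:.3f}, SE={se:.3f}, CI=({lo:.3f}, {hi:.3f})")
```

Note that with a small sample like this and $\sigma$ unknown, a t quantile is usually used in place of 1.96, which speaks to your last question: software CIs often use the t distribution rather than the plain 1.96 multiplier.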
