Originally Posted by **B_Miner**

I want to understand better what a standard error really is (I have read conflicting things). Can anyone help me understand it?

Is it correct to say that a standard error is the standard deviation of the sampling distribution of a statistic? By sampling distribution I mean: a statistic (a mean or a regression coefficient, for example) is computed from a sample and is therefore itself a random variable. That random variable has a distribution, which is called the sampling distribution. The standard deviation of that distribution is the standard error. Am I correct?
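That definition can be checked with a quick simulation: draw many samples, compute the mean of each, and look at the spread of those means. A minimal sketch (the population parameters and sample size here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
pop_mean, pop_sd, n = 10.0, 2.0, 50

# 100,000 replications: each row is one sample of size n
samples = rng.normal(pop_mean, pop_sd, size=(100_000, n))
sample_means = samples.mean(axis=1)

# The SD of the simulated sampling distribution of the mean...
empirical_se = sample_means.std(ddof=1)
# ...should match the textbook formula sigma / sqrt(n)
theoretical_se = pop_sd / np.sqrt(n)

print(empirical_se)    # close to 0.283
print(theoretical_se)  # 0.2828...
```

The two numbers agree to a couple of decimal places, which is exactly the "standard error = SD of the sampling distribution" reading of the definition.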

Is a standard error always used as a component in a confidence interval?

If you are using statistical software and it reports a "standard error" alongside the point estimate for, let's say, a regression coefficient, can you simply multiply it by 1.96 to construct a confidence interval, or does that depend on whether there are better formulas for the CI?
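For a concrete comparison of the two intervals the question contrasts, here is a sketch using made-up numbers for the coefficient, its standard error, and the residual degrees of freedom (none of these come from a real fit):

```python
import scipy.stats as st

# Hypothetical regression output (numbers invented for illustration)
beta_hat, se = 1.50, 0.40
df_resid = 25   # assumed residual degrees of freedom

# Large-sample (normal) interval: estimate +/- 1.96 * SE
z_lo, z_hi = beta_hat - 1.96 * se, beta_hat + 1.96 * se

# Small-sample OLS interval: use the t critical value instead
t_crit = st.t.ppf(0.975, df_resid)   # ~2.06 for 25 df
t_lo, t_hi = beta_hat - t_crit * se, beta_hat + t_crit * se

print(round(z_lo, 3), round(z_hi, 3))   # 0.716 2.284
print(round(t_lo, 3), round(t_hi, 3))   # slightly wider than the z interval
```

The t-based interval is a bit wider at small degrees of freedom; as the degrees of freedom grow, the t critical value approaches 1.96 and the two intervals coincide.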