Question asked in Live Chat (and possibly of interest to a wider audience):

Why did the .05 significance level emerge as the de facto standard for social science research?

---------------------------------------------------------------------------

An answer is found in Sterne and Davey Smith's "Sifting the evidence—what's wrong with significance tests?" (BMJ, 2001):

"Fisher saw the P value as an index measuring the strength of evidence against the null hypothesis (in our example, the hypothesis that the drug does not affect survival rates). He advocated P<0.05 (5% significance) as a standard level for concluding that there is evidence against the hypothesis tested, though not as an absolute rule. “If P is between 0.1 and 0.9 there is certainly no reason to suspect the hypothesis tested. If it is below 0.02 it is strongly indicated that the hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at 0.05. . . .”9 Importantly, Fisher argued strongly that interpretation of the P value was ultimately for the researcher. For example, a P value of around 0.05 might lead to neither belief nor disbelief in the null hypothesis but to a decision to perform another experiment."

9. Fisher RA. Statistical Methods for Research Workers. London: Oliver and Boyd; 1950. p. 80.
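
To make the quoted thresholds concrete, here is a minimal sketch in Python applying Fisher's exact test to an invented 2x2 table in the spirit of the article's drug/survival example. The counts are hypothetical and the threshold labels simply paraphrase the quoted passage; this is an illustration, not anything from the article itself.

```python
# A minimal sketch: Fisher's exact test on hypothetical drug/survival counts,
# with the result read against the thresholds Fisher describes in the quote.
from scipy.stats import fisher_exact

# Rows: drug group, control group; columns: survived, died (invented counts)
table = [[30, 10],
         [21, 19]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")

# Paraphrasing the quoted thresholds (Fisher treated these as guides, not rules):
if p_value < 0.02:
    print("strongly indicated that the hypothesis fails to account for the facts")
elif p_value < 0.05:
    print("evidence against the hypothesis at the conventional 0.05 line")
elif p_value <= 0.1:
    print("borderline: neither belief nor disbelief; perhaps run another experiment")
else:
    print("no reason to suspect the hypothesis tested")
```

Note that the final branch in the sketch collapses everything above 0.1 into "no reason to suspect," whereas the quote only speaks of P between 0.1 and 0.9; the point is the spirit of Fisher's reading of P as a graded index of evidence, not a mechanical decision rule.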