The reporting of statistical significance in scientific journals


Abstract

Scientific journals in most empirical disciplines have regulations about how authors should report the precision of their estimates of model parameters and other model elements. Some journals that overlap fully or partly with the field of demography demand, as a strict prerequisite for publication, that a p-value, a confidence interval, or a standard deviation accompany any parameter estimate. I feel that this rule is sometimes applied in an overly mechanical manner. Standard deviations and p-values produced routinely by general-purpose software are too often taken at face value and included without questioning, and features that have too high a p-value or too large a standard deviation are too easily disregarded as being without interest because they appear not to be statistically significant. In my opinion, authors should be discouraged from adhering to this practice, and flexibility rather than rigidity should be encouraged in the reporting of statistical significance. One should also encourage thoughtful rather than mechanical use of p-values, standard deviations, confidence intervals, and the like. Here is why:

"When writing for Epidemiology, you can ... enhance your prospects if you omit tests of statistical significance.... [E]very worthwhile journal will accept papers that omit them entirely. In Epidemiology, we do not publish them at all. Not only do we eschew publishing claims of the presence or absence of statistical significance, we discourage the use of this type of thinking in the data analysis.... We also would like to see the interpretation of a study based not on statistical significance, or lack of it, for one or more study variables, but rather on careful quantitative consideration of the data in light of competing explanations for the findings. For example, we prefer a researcher to consider whether the magnitude of an estimated effect could be readily explained by uncontrolled confounding or selection biases, rather than simply to offer the uninspired interpretation that the estimated effect is significant, as if neither chance nor bias could then account for the findings.... Many data analysts appear to remain oblivious to the qualitative nature of significance testing. Although calculations based on mountains of valuable quantitative information may go into it, statistical significance is itself only a dichotomous indicator. As it has only two values, 'significant' or 'not significant', it cannot convey much useful information. Even worse, those two values often signal just the wrong interpretation. These misleading signals occur when a trivial effect is found to be 'significant', as often happens in large studies, or when a strong relation is found 'nonsignificant', as often happens in small studies."

© 2008 Hoem.
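The closing point — that a trivial effect can be "significant" in a large study while a strong effect is "nonsignificant" in a small one — can be checked directly. The sketch below is not from Hoem's article; it is a minimal illustration, assuming a one-sample z-test of a standardized mean difference with known unit variance, and the effect sizes and sample sizes are chosen purely for illustration.

```python
import math

def two_sided_p(effect_size: float, n: int) -> float:
    """Two-sided p-value for a one-sample z-test of a standardized
    mean difference `effect_size` at sample size n (population
    standard deviation assumed known and equal to 1)."""
    z = effect_size * math.sqrt(n)
    # Normal survival function computed via the error function
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# A trivial effect (d = 0.05) in a very large study (n = 10,000):
p_trivial = two_sided_p(0.05, 10_000)   # well below 0.05 -> "significant"

# A strong effect (d = 0.80) in a tiny study (n = 5):
p_strong = two_sided_p(0.80, 5)         # above 0.05 -> "nonsignificant"

print(f"trivial effect, large n: p = {p_trivial:.2e}")
print(f"strong effect, small n: p = {p_strong:.3f}")
```

The dichotomous verdict inverts the substantive picture in both cases, which is exactly the failure mode the quoted editorial describes: the p-value tracks sample size as much as it tracks the magnitude of the effect.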

Citation (APA):

Hoem, J. (2008). The reporting of statistical significance in scientific journals. Demographic Research, 18, 437–442. https://doi.org/10.4054/DemRes.2008.18.15
