Objectives: Previous research has shown clear biases in the distribution of published p values, with an excess just below the 0.05 threshold due to a combination of p-hacking and publication bias. We aimed to examine the bias towards statistical significance using published confidence intervals.

Design: Observational study.

Setting: Papers published in Medline since 1976.

Participants: Over 968 000 confidence intervals extracted from abstracts and over 350 000 intervals extracted from full texts.

Outcome measures: Cumulative distributions of lower and upper confidence interval limits for ratio estimates.

Results: We found an excess of statistically significant results, with a glut of lower interval limits just above one and upper limits just below one. These excesses have not diminished in recent years. They did not appear in a comparison set of over 100 000 confidence intervals that was not subject to p-hacking or publication bias.

Conclusions: The huge excesses of published confidence intervals that only just achieve statistical significance are not statistically plausible. Large improvements in research practice are needed to provide more results that better reflect the truth.
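The link between the interval limits and significance can be made concrete. This is a minimal illustrative sketch (not the authors' code, and the intervals below are made up): for a ratio estimate such as an odds or hazard ratio, a 95% confidence interval excludes one exactly when the result is statistically significant at the 0.05 level, so a pile-up of lower limits just above one, or upper limits just below one, signals an excess of results that only just reach significance.

```python
def classify_ratio_ci(lower, upper):
    """Classify a ratio-scale 95% confidence interval by significance.

    A ratio CI is significant at the 0.05 level iff it excludes 1.
    """
    if lower > 1:
        return "significant (estimate above 1)"
    if upper < 1:
        return "significant (estimate below 1)"
    return "not significant (interval includes 1)"

# Hypothetical extracted intervals, for illustration only.
intervals = [(1.02, 1.85), (0.55, 0.97), (0.80, 1.20)]
for lo, hi in intervals:
    print((lo, hi), "->", classify_ratio_ci(lo, hi))
```

A lower limit of 1.02 is the kind of "just above one" value whose over-representation in the cumulative distributions points to p-hacking and publication bias.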
Barnett, A. G., & Wren, J. D. (2019). Examination of CIs in health and medical journals from 1976 to 2019: An observational study. BMJ Open, 9(11). https://doi.org/10.1136/bmjopen-2019-032506