Quantifying, and correcting for, the impact of questionable research practices on false discovery rates in psychological science

  • Kravitz, D. J.
  • Mitroff, S. R.

Abstract

Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about its causes and solutions. Using “big data,” the current project demonstrates that the unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of the replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three questionable research practices were applied. They produced false discovery rates far greater than 5% (the generally accepted rate), and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. This approach also produces empirically based statistical corrections to account for these practices when they are unavoidable, providing a viable path forward.
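One of the three practices, adding data after checking for significance (often called optional stopping), is easy to demonstrate in simulation. The sketch below is not the authors' analysis or dataset; it is a minimal illustration, assuming two groups drawn from the same normal distribution (a true null), a two-sided test at alpha = 0.05, and hypothetical sample-size parameters (start at n = 20 per group, add 10 at a time up to n = 50). Repeating the experiment many times shows the false discovery rate climbing well above the nominal 5% once researchers are allowed to peek and collect more data.

```python
import math
import random

def z_test_p(sample_a, sample_b):
    """Two-sided p-value for a two-sample z-test (normal approximation)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_experiment(optional_stopping, start_n=20, step=10, max_n=50, alpha=0.05):
    """One experiment under a TRUE NULL: both groups come from N(0, 1).

    Returns True if the experiment ends "significant" (a false discovery).
    With optional_stopping, data are added whenever the test misses alpha,
    up to max_n per group -- the questionable practice being simulated.
    """
    a = [random.gauss(0, 1) for _ in range(start_n)]
    b = [random.gauss(0, 1) for _ in range(start_n)]
    while True:
        if z_test_p(a, b) < alpha:
            return True
        if not optional_stopping or len(a) >= max_n:
            return False
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]

random.seed(1)
sims = 2000
fixed = sum(run_experiment(False) for _ in range(sims)) / sims
peeking = sum(run_experiment(True) for _ in range(sims)) / sims
print(f"fixed-n false discovery rate:  {fixed:.3f}")
print(f"with optional stopping:        {peeking:.3f}")
```

The fixed-n run stays near the nominal 5%, while peeking at the data after every batch roughly doubles the false discovery rate, even with only four looks. The published study applies the same logic to a large real dataset and to the other two practices as well.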

Citation (APA)

Kravitz, D. J., & Mitroff, S. R. (2023). Quantifying, and correcting for, the impact of questionable research practices on false discovery rates in psychological science. Journal for Reproducibility in Neuroscience. https://doi.org/10.36850/jrn.2023.e44
