Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

  • Fritz A
  • Scherndl T
  • Kühberger A

Abstract

Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
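The analysis the abstract describes, correlating extracted effect sizes with sample sizes and checking for a pile-up of p values just below .05, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: the file name, column names, the use of a Pearson correlation, and the Fisher-z confidence interval are all assumptions made for the example; the paper's exact estimator and data format may differ.

```python
# Minimal sketch (assumed data layout, not the authors' code): estimate the
# correlation between effect size and sample size with a Fisher-z 95% CI,
# then tally p values that fall just below vs. just above the .05 boundary.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("extracted_studies.csv")   # hypothetical file: one row per study
effect = df["effect_size_r"].to_numpy()     # assumed column: effect size as r
n = df["sample_size"].to_numpy()            # assumed column: total sample size

# Correlation between effect size and sample size
# (the paper reports a negative correlation of about -.45).
r, p = stats.pearsonr(effect, n)

# 95% CI via Fisher z transformation: z = atanh(r), SE = 1/sqrt(k - 3).
k = len(effect)
z = np.arctanh(r)
se = 1.0 / np.sqrt(k - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}; {hi:.2f}]")

# Crude check for an excess of p values just under the significance boundary.
p_vals = df["p_value"].dropna().to_numpy()
just_below = np.mean((p_vals >= 0.04) & (p_vals < 0.05))
just_above = np.mean((p_vals >= 0.05) & (p_vals < 0.06))
print(f"share in [.04, .05): {just_below:.3f}  vs. [.05, .06): {just_above:.3f}")
```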

CITATION STYLE

APA

Fritz, A., Scherndl, T., & Kühberger, A. (2014). Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size. PLoS ONE, 9(9), 1–8. https://doi.org/10.1371/journal.pone.0105825
