Validity of content-based techniques for credibility assessment—How telling is an extended meta-analysis taking research bias into account?

Abstract

Content-based techniques for credibility assessment (Criteria-Based Content Analysis [CBCA], Reality Monitoring [RM]) have been shown in previous meta-analyses to distinguish between experience-based and fabricated statements. New simulations have called these results into question by revealing that applying meta-analytic methods to biased data sets can lead to false-positive rates of up to 100%. By assessing the performance of different bias-correcting meta-analytic methods and applying them to a set of 71 studies, we aimed for more precise effect size estimates. According to the sole bias-correcting meta-analytic method that performed well under a priori specified boundary conditions, CBCA and RM distinguished between experience-based and fabricated statements. However, great heterogeneity limited precise point estimation (effect sizes ranged from moderate to large). In contrast, Scientific Content Analysis (SCAN), another content-based technique tested, failed to discriminate between truth and lies. We discuss how the gap between research on and forensic application of content-based credibility assessment may be narrowed.
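
For readers unfamiliar with bias-correcting meta-analysis, the sketch below illustrates one such estimator, the precision-effect test (PET), on simulated data. The abstract does not specify which methods the authors evaluated, so PET is chosen here purely as a familiar example; the data, the bias model, and all variable names are hypothetical and do not come from the 71-study data set analyzed in the article.

```python
import numpy as np

# Hypothetical illustration of PET (precision-effect test), one simple
# bias-correcting meta-analytic estimator. All numbers are simulated;
# they are NOT data from the meta-analysis described above.
rng = np.random.default_rng(42)

k = 20                                   # number of simulated studies
se = rng.uniform(0.05, 0.40, size=k)     # per-study standard errors
true_effect = 0.5
# Simulate small-study (publication) bias: observed effects drift
# upward as standard errors grow, plus ordinary sampling noise.
d = true_effect + 0.8 * se + rng.normal(0.0, se)

# PET: inverse-variance-weighted regression of effect size on standard
# error. The intercept estimates the effect that a hypothetical,
# infinitely precise study (se = 0) would show, i.e., the
# bias-corrected effect.
w = 1.0 / se**2
X = np.column_stack([np.ones(k), se])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)

print(f"Naive inverse-variance mean:  {np.average(d, weights=w):.3f}")
print(f"PET bias-corrected intercept: {beta[0]:.3f}")
```

On data biased in this way, the naive weighted mean overshoots the true effect, while the PET intercept lands closer to it, which is the intuition behind comparing naive and bias-correcting estimates in a meta-analysis.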

Citation (APA)

Oberlader, V. A., Quinten, L., Banse, R., Volbert, R., Schmidt, A. F., & Schönbrodt, F. D. (2021). Validity of content-based techniques for credibility assessment—How telling is an extended meta-analysis taking research bias into account? Applied Cognitive Psychology, 35(2), 393–410. https://doi.org/10.1002/acp.3776
