Over the past few decades, psychology and its cognate disciplines have undergone substantial scientific reform, ranging from advances in statistical methodology to significant changes in academic norms. One aspect of experimental design that has received comparatively little attention is incentivization, i.e., the way that participants are rewarded monetarily for their participation in experiments and surveys. While incentive-compatible designs are the norm in disciplines like economics, the majority of studies in psychology and experimental philosophy are constructed such that individuals' incentives to maximize their payoffs often stand opposed to their incentive to state their true preferences honestly. This is in part because the subject matter is often self-report data about subjective topics, and the sample is drawn from online platforms like Prolific or MTurk, where many participants are motivated primarily by payment. One mechanism that allows for the introduction of an incentive-compatible design in such circumstances is the Bayesian Truth Serum (BTS; Prelec, 2004), which rewards participants based on how surprisingly common their answers are. Recently, Schoenegger (2021) applied this mechanism in the context of Likert-scale self-reports, finding that its introduction significantly altered response behavior. In this registered report, we further investigate this mechanism by (1) attempting to directly replicate the previous result and (2) analyzing whether the Bayesian Truth Serum's effect is distinct from the effects of its constituent parts (an increase in expected earnings and the addition of prediction tasks). We fail to find significant differences in response behavior between participants who were simply paid for completing the study and participants who were incentivized with the BTS. Per our pre-registration, we regard this as evidence in favor of a null effect of up to V = .1 and as a failure to replicate, but we reserve judgment as to whether the BTS mechanism should be adopted in social science fields that rely heavily on Likert-scale items reporting subjective data, seeing that smaller effect sizes might still be of practical interest and results may differ for items different from the ones we studied. Further, we provide weak evidence that the prediction task itself influences response distributions and that this task's effect is distinct from that of an increase in expected earnings, suggesting a complex interaction between the BTS's constituent parts and its truth-telling instructions.
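Concretely, the BTS asks each participant both to answer an item and to predict the distribution of others' answers. A respondent's score is then the sum of an information score, the log ratio of their chosen answer's empirical frequency to its geometric-mean predicted frequency (so "surprisingly common" answers score highest), and an alpha-weighted prediction score that rewards accurate predictions of the empirical distribution. The following is a minimal Python sketch of Prelec's (2004) scoring rule as commonly stated, not code from this study; the function name bts_scores and all variable names are illustrative.

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Score R respondents on a K-option item with the BTS rule.

    answers:     length-R integer array; answers[r] is the option
                 (0 .. K-1) endorsed by respondent r.
    predictions: (R, K) array; predictions[r, k] is respondent r's
                 predicted population frequency of option k (rows
                 sum to 1, entries strictly positive).
    alpha:       weight of the prediction score (Prelec, 2004).
    """
    n_respondents, n_options = predictions.shape
    # Empirical endorsement frequency x_bar_k of each option.
    x_bar = np.bincount(answers, minlength=n_options) / n_respondents
    # Log geometric mean of the predicted frequencies, log y_bar_k.
    log_y_bar = np.log(predictions).mean(axis=0)
    # Information score: log(x_bar_k / y_bar_k) for the chosen option;
    # answers more common than collectively predicted score positively.
    info = np.log(x_bar[answers]) - log_y_bar[answers]
    # Prediction score: a KL-divergence-type penalty for misestimating
    # the empirical distribution (0 * log 0 treated as 0 by convention).
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = x_bar * (np.log(predictions) - np.log(x_bar))
    pred = np.where(x_bar > 0, terms, 0.0).sum(axis=1)
    return info + alpha * pred

# Hypothetical two-option item answered by four respondents:
answers = np.array([0, 0, 1, 0])
predictions = np.array([[0.7, 0.3],
                        [0.6, 0.4],
                        [0.5, 0.5],
                        [0.8, 0.2]])
print(bts_scores(answers, predictions))
```

Because the information score compares actual frequencies with predicted ones, truthful answering is the payoff-maximizing strategy in expectation, which is what makes the design incentive-compatible; in the studies discussed above, participants' bonuses would be tied to such scores.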