Practical Peer Prediction for Peer Assessment

Abstract

We provide an empirical analysis of peer prediction mechanisms, which reward participants for information in settings where there is no ground truth against which to score reports. We simulate the mechanisms on a dataset of three million peer assessments from the edX MOOC platform. We evaluate different mechanisms on score variability, which is connected to fairness, risk aversion, and participant learning. We also assess the magnitude of the incentives to invest effort, and study the effect of participant coordination on low-information signals. We find that the correlated agreement mechanism has lower variation in reward than other mechanisms. A concern is that the gain from exerting effort is relatively low across all mechanisms, due to frequent disagreement between peers. Our conclusions are relevant for crowdsourcing in education as well as other domains.
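To make the correlated agreement (CA) mechanism mentioned above concrete, the following is a minimal sketch of its scoring rule: an agent's report on a "bonus" task is scored against a peer's report on the same task, minus a penalty term comparing their reports on two distinct tasks, with the score matrix taken as the sign of the Delta matrix (joint signal distribution minus the product of marginals). The function name `ca_scores`, the cyclic choice of penalty tasks, and the example `delta` matrix are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ca_scores(reports_i, reports_j, delta):
    """Sketch of correlated agreement scoring for one agent/peer pair.

    reports_i, reports_j: integer signal indices on n >= 3 shared tasks.
    delta: estimated Delta matrix (joint distribution of the two agents'
           signals minus the product of their marginals).
    Returns one CA score per bonus task:
        S(x_i^b, x_j^b) - S(x_i^p, x_j^q),  with S = sign(Delta)
    where p and q are distinct tasks different from b (here chosen
    cyclically, an illustrative convention).
    """
    s = np.sign(delta)  # score matrix S = Sgn(Delta)
    n = len(reports_i)
    scores = np.empty(n)
    for b in range(n):
        p = (b + 1) % n  # penalty task for agent i (assumed convention)
        q = (b + 2) % n  # penalty task for agent j (assumed convention)
        scores[b] = s[reports_i[b], reports_j[b]] - s[reports_i[p], reports_j[q]]
    return scores
```

Under a Delta matrix with positive diagonal (signals positively correlated), truthful agreeing reports earn a positive expected score, while uninformed coordination on a single signal drives the bonus and penalty terms to cancel, which is the incentive property the paper evaluates empirically.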

Citation (APA)
Shnayder, V., & Parkes, D. C. (2016). Practical Peer Prediction for Peer Assessment. In Proceedings of the 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016 (pp. 199–208). AAAI Press. https://doi.org/10.1609/hcomp.v4i1.13285
