This paper describes an approach to improving the reliability of a crowdsourced labeling task for which there is no objective right answer. Our approach focuses on three contingent elements of the labeling task: data quality, worker reliability, and task design. We describe how we developed and applied this framework to the task of labeling tweets according to their interestingness. We use in-task CAPTCHAs to identify unreliable workers, and measure inter-rater agreement to decide whether subtasks have objective or merely subjective answers.
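The abstract names two concrete mechanisms without showing implementations: in-task CAPTCHA-style checks to screen out unreliable workers, and inter-rater agreement to flag subtasks whose answers are merely subjective. Below is a minimal, illustrative Python sketch of both, not the authors' code: `reliable_workers` drops workers who fail too many embedded verification questions, and `fleiss_kappa` computes a standard chance-corrected agreement statistic, where persistently low agreement is one signal that a subtask is subjective. All function names, thresholds, and data here are hypothetical.

```python
import numpy as np

def reliable_workers(captcha_answers, gold, min_accuracy=0.75):
    """Keep workers whose accuracy on in-task verification questions
    meets a threshold (threshold is illustrative, not from the paper).

    captcha_answers: {worker_id: {captcha_id: answer}}
    gold:            {captcha_id: correct_answer}
    """
    kept = set()
    for worker, answers in captcha_answers.items():
        correct = sum(answers.get(c) == a for c, a in gold.items())
        if correct / len(gold) >= min_accuracy:
            kept.add(worker)
    return kept

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of label counts.

    counts[i, j] = number of reliable workers who put item i in category j.
    Assumes every item received the same number of labels.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Overall prevalence of each category across all assignments.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    # Per-item agreement: fraction of rater pairs that agree on the item.
    P_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    # Expected agreement by chance.
    P_e = (p_j ** 2).sum()
    return (P_i.mean() - P_e) / (1 - P_e)

# Toy example: 5 tweets labeled on a 3-point interestingness scale by 4 workers.
table = [[4, 0, 0],
         [2, 2, 0],
         [1, 2, 1],
         [0, 4, 0],
         [0, 1, 3]]
print(f"kappa = {fleiss_kappa(table):.3f}")  # ~0.37: modest agreement
```

A kappa near 1 suggests the subtask has an objective answer that workers converge on; a kappa near 0 suggests answers are subjective, which is the distinction the paper's framework uses to interpret disagreement.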
Alonso, O., Marshall, C. C., & Najork, M. (2013). A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks. In Proceedings of the 1st AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2013 (pp. 2–3). AAAI Press. https://doi.org/10.1609/hcomp.v1i1.13097