A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks

Abstract

This paper describes an approach to improving the reliability of a crowdsourced labeling task for which there is no objective right answer. Our approach focuses on three contingent elements of the labeling task: data quality, worker reliability, and task design. We describe how we developed and applied this framework to the task of labeling tweets according to their interestingness. We use in-task CAPTCHAs to identify unreliable workers, and measure inter-rater agreement to decide whether subtasks have objective or merely subjective answers.
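
The abstract does not say which agreement statistic the authors use, so purely as an illustration, here is a minimal Python sketch of Fleiss' kappa, a common chance-corrected measure of inter-rater agreement for categorical labels; the function name and the example data are hypothetical, not taken from the paper.

from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for an items-by-categories matrix of rating counts.

    counts[i][j] is the number of raters who assigned item i to
    category j; every row must sum to the same number of raters n.
    """
    N = len(counts)        # number of labeled items
    n = sum(counts[0])     # raters per item (assumed constant)
    k = len(counts[0])     # number of label categories

    # Observed agreement: fraction of rater pairs agreeing on each item.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Chance agreement from the marginal label distribution.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 5 tweets, 4 raters each,
# categories = (interesting, not interesting).
ratings = [[4, 0], [3, 1], [2, 2], [0, 4], [1, 3]]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")  # ~0.333

Under the paper's criterion, low agreement on a subtask would indicate that it has a merely subjective answer, while high agreement would suggest an objective one.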

Citation (APA)

Alonso, O., Marshall, C. C., & Najork, M. (2013). A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks. In Proceedings of the 1st AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2013 (pp. 2–3). AAAI Press. https://doi.org/10.1609/hcomp.v1i1.13097
