Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk


Abstract

Task instruction quality is widely presumed to affect outcomes such as accuracy, throughput, trust, and worker satisfaction. Best-practices guides written by experienced requesters offer advice on how to craft task interfaces, yet there is little evidence of how specific task design attributes affect actual outcomes. This paper presents a set of studies that expose the relationships among three sets of measures: (a) workers' perceptions of task quality, (b) adherence to popular best practices, and (c) actual outcomes when tasks are posted (including accuracy, throughput, trust, and worker satisfaction). We investigated these relationships using collected task interfaces, along with a model task that we systematically mutated to test the effects of specific task design guidelines.

Citation (APA)

Wu, M. H., & Quinn, A. J. (2017). Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk. In Proceedings of the 5th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2017 (pp. 206–215). AAAI Press. https://doi.org/10.1609/hcomp.v5i1.13317
