High-throughput crowdsourcing mechanisms for complex tasks


Abstract

Crowdsourcing is popular for large-scale data-processing endeavors that require human input. However, working with a large community of users raises new challenges. In particular, both possible misjudgment and dishonesty threaten the quality of the results. Common countermeasures are based on redundancy, giving way to a tradeoff between result quality and throughput. Ideally, measures should (1) maintain high throughput and (2) ensure high result quality at the same time. Existing work on crowdsourcing mostly focuses on result quality, paying little attention to throughput or even to that tradeoff. One reason is that the number of tasks (individual atomic units of work) is usually small. A further problem is that the tasks users work on are small as well. Consequently, existing result-improvement mechanisms do not scale to the number or complexity of tasks that arise, for instance, in the proofreading and processing of digitized legacy literature. This paper proposes novel result-improvement mechanisms that (1) are independent of the size and complexity of tasks and (2) allow trading result quality for throughput to a significant extent. Both mathematical analyses and extensive simulations show the effectiveness of the proposed mechanisms. © 2011 Springer-Verlag.
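The quality/throughput tradeoff of the redundancy-based baseline mentioned in the abstract can be illustrated with a small simulation: assigning each task to several workers and taking a majority vote raises accuracy but divides throughput by the redundancy factor. This is a hedged sketch of the generic baseline only, not the paper's proposed mechanisms; the worker-accuracy parameter and function name are hypothetical.

```python
import random

def simulate_redundancy(num_tasks, redundancy, worker_accuracy, seed=0):
    """Simulate majority voting over `redundancy` independent answers per task.

    Each worker answers a binary task correctly with probability
    `worker_accuracy` (a hypothetical, illustrative parameter).
    Returns (quality, throughput): the fraction of tasks decided correctly,
    and the number of tasks completed per submitted answer.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(num_tasks):
        votes = sum(1 for _ in range(redundancy)
                    if rng.random() < worker_accuracy)
        if votes > redundancy / 2:  # strict majority is correct
            correct += 1
    quality = correct / num_tasks
    throughput = 1 / redundancy  # each task consumes `redundancy` answers
    return quality, throughput
```

For example, with 70%-accurate workers, moving from no redundancy to 5-fold redundancy pushes expected quality from about 0.70 toward 0.84, but cuts throughput to one fifth — the tradeoff the proposed mechanisms aim to relax.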

Citation (APA)

Sautter, G., & Böhm, K. (2011). High-throughput crowdsourcing mechanisms for complex tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6984 LNCS, pp. 240–254). https://doi.org/10.1007/978-3-642-24704-0_27
