Development and evaluation of quality control methods in a microtask crowdsourcing platform

Citations: 1 | Mendeley readers: 13

Abstract

Open crowdsourcing platforms such as Amazon Mechanical Turk offer an attractive way to process high-volume tasks at low cost. However, quality control remains a major concern. In this paper, we design a private crowdsourcing system in which we can devise methods for quality control. For quality control, we introduce four worker selection methods, which we call preprocessing filtering, real-time filtering, post-processing filtering, and guess processing filtering. These methods include a novel approach that applies a collaborative filtering technique in addition to a basic approach based on initial training or gold-standard data. As a use case, we built a very large dictionary, which is necessary for Large Vocabulary Continuous Speech Recognition and Text-to-Speech. We show how the system yields high-quality results on difficult tasks of word extraction, part-of-speech tagging, and pronunciation prediction for building such a large dictionary.
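To illustrate the kind of worker selection the abstract describes, the following is a minimal, hypothetical sketch of two ideas it mentions: filtering workers by their accuracy on gold-standard items, and estimating worker reliability from pairwise agreement on shared tasks, in the spirit of a collaborative filtering approach. The data, function names, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: worker selection via gold-standard accuracy and
# peer agreement. Data and thresholds are illustrative only.
from collections import defaultdict
from itertools import combinations

# answers[worker][task] = label submitted by the worker (made-up example data)
answers = {
    "w1": {"t1": "NOUN", "t2": "VERB", "g1": "NOUN", "g2": "VERB"},
    "w2": {"t1": "NOUN", "t2": "NOUN", "g1": "NOUN", "g2": "NOUN"},
    "w3": {"t1": "VERB", "t2": "VERB", "g1": "VERB", "g2": "VERB"},
}
gold = {"g1": "NOUN", "g2": "VERB"}  # gold-standard items with known labels


def gold_accuracy(worker_answers, gold_labels):
    """Fraction of gold-standard items the worker answered correctly."""
    hits = sum(worker_answers.get(t) == y for t, y in gold_labels.items())
    return hits / len(gold_labels)


def pairwise_agreement(answers_by_worker):
    """Mean agreement of each worker with every other worker on shared tasks."""
    agreement = defaultdict(list)
    for (wa, a), (wb, b) in combinations(answers_by_worker.items(), 2):
        shared = set(a) & set(b)
        if not shared:
            continue
        rate = sum(a[t] == b[t] for t in shared) / len(shared)
        agreement[wa].append(rate)
        agreement[wb].append(rate)
    return {w: sum(r) / len(r) for w, r in agreement.items()}


if __name__ == "__main__":
    agree = pairwise_agreement(answers)
    for w, ans in answers.items():
        acc = gold_accuracy(ans, gold)
        # Illustrative acceptance rule: keep workers who are accurate on the
        # gold items and broadly consistent with their peers.
        keep = acc >= 0.5 and agree.get(w, 0.0) >= 0.5
        print(f"{w}: gold accuracy={acc:.2f}, "
              f"peer agreement={agree.get(w, 0.0):.2f}, keep={keep}")
```

In this toy example, the rule accepts only the worker who is both accurate on the gold items and in reasonable agreement with peers; the paper's filtering methods (preprocessing, real-time, post-processing, and guess processing) presumably apply such criteria at different stages of task execution.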

Citation (APA)
Ashikawa, M., Kawamura, T., & Ohsuga, A. (2014). Development and evaluation of quality control methods in a microtask crowdsourcing platform. Transactions of the Japanese Society for Artificial Intelligence, 29(6), 503–515. https://doi.org/10.1527/tjsai.29.503
