Obtaining rephrased microtask questions from crowds

Abstract

We present a novel method for obtaining and ranking rephrased questions from crowds, to be used as part of the instructions in microtask-based crowdsourcing. Using our method, we are able to obtain questions that differ in expression yet have the same semantics with respect to the crowdsourcing task. This is done by generating tasks that give workers a hint and elicit instructions from them. We conduct experiments with data used for a real set of gold-standard questions submitted to a commercial crowdsourcing platform and compare the results with those of a direct-rewrite method. The results show that the extracted questions are semantically ranked with high precision, and we identify the cases in which each method is effective.
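The abstract only outlines the approach, but the two steps it names (eliciting rephrased questions via hint-giving tasks, then ranking the candidates) can be illustrated with a minimal sketch. The Python below is an assumption-laden illustration, not the paper's actual procedure: the task wording, the data structures, and the agreement-based ranking score are all hypothetical stand-ins for whatever elicitation and ranking the authors use.

```python
from dataclasses import dataclass
from typing import Dict

# Illustrative sketch only: the task design and the ranking criterion here are
# assumptions for demonstration, not the method described in the paper.

@dataclass
class HintTask:
    """A microtask that shows a worked example (the 'hint') and asks the worker
    to write, in their own words, the question the requester intended."""
    example_item: str
    example_answer: str

    def render(self) -> str:
        return (
            "Below is one item and the answer the requester expects.\n"
            f"Item: {self.example_item}\n"
            f"Expected answer: {self.example_answer}\n"
            "In your own words, write the question that would lead to this answer."
        )


def agreement_score(candidate: str,
                    answers_by_question: Dict[str, Dict[str, str]],
                    gold: Dict[str, str]) -> float:
    """Score a candidate rephrasing by how often crowd answers collected under
    it match gold-standard labels (a simple proxy for 'same semantics')."""
    answers = answers_by_question.get(candidate, {})
    if not answers:
        return 0.0
    hits = sum(1 for item, label in answers.items() if gold.get(item) == label)
    return hits / len(answers)


if __name__ == "__main__":
    task = HintTask(example_item="Photo of a cat on a sofa", example_answer="yes")
    print(task.render())

    gold = {"img1": "yes", "img2": "no", "img3": "yes"}
    collected = {
        "Does the photo show a cat?": {"img1": "yes", "img2": "no", "img3": "yes"},
        "Is this picture cute?": {"img1": "yes", "img2": "yes", "img3": "yes"},
    }
    for q in sorted(collected, key=lambda q: agreement_score(q, collected, gold),
                    reverse=True):
        print(f"{agreement_score(q, collected, gold):.2f}  {q}")
```

Under this toy scoring, a rephrasing whose collected answers reproduce the gold labels ranks above one that drifts in meaning; any resemblance to the paper's ranking criterion is assumed, not stated in the abstract.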

Citation (APA)

Hayashi, R., Shimizu, N., & Morishima, A. (2016). Obtaining rephrased microtask questions from crowds. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10047 LNCS, pp. 323–336). Springer Verlag. https://doi.org/10.1007/978-3-319-47874-6_23
