Comparing crowd-based, game-based, and machine-based approaches in initial query and query refinement tasks

5 citations · 13 Mendeley readers

Abstract

Human computation techniques have demonstrated their ability to accomplish portions of tasks that machine-based techniques find difficult. Query refinement is one task that may benefit from human involvement. We conduct an experiment that evaluates the contributions of two user types: student participants and crowdworkers hired from an online labor market. Human participants are assigned to one of two query interfaces: a traditional web-based interface or a game-based interface. We ask each group to manually construct queries in response to TREC information needs and calculate the resulting recall and precision. Traditional web interface users are given feedback on their initial queries and asked to use this information to reformulate their original queries. Game interface users are given instant scoring and asked to refine their queries based on their scores. We measure the resulting feedback-based improvement for each group and compare the results from human computation techniques to those of machine-based algorithms. © 2013 Springer-Verlag.
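As an illustrative aside (not code from the paper), the recall and precision referred to above are the standard set-based retrieval measures: precision is the fraction of retrieved documents that are relevant, and recall is the fraction of relevant documents that are retrieved. A minimal Python sketch, assuming TREC-style relevance judgments represented as a set of document IDs; all function and variable names here are hypothetical:

    def precision_recall(retrieved, relevant):
        # retrieved: set of document IDs returned by a query
        # relevant:  set of document IDs judged relevant (e.g., from TREC qrels)
        hits = retrieved & relevant  # relevant documents actually retrieved
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    # Example: 3 of 5 retrieved documents are relevant, out of 6 judged relevant.
    p, r = precision_recall({"d1", "d2", "d3", "d4", "d5"},
                            {"d1", "d3", "d5", "d7", "d8", "d9"})
    print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.60, recall=0.50

Under this reading, the feedback-based improvement the experiment measures would be the change in these scores between a participant's initial and refined queries.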

Citation (APA)

Harris, C. G., & Srinivasan, P. (2013). Comparing crowd-based, game-based, and machine-based approaches in initial query and query refinement tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7814 LNCS, pp. 495–506). https://doi.org/10.1007/978-3-642-36973-5_42
