So who won? Dynamic max discovery with the crowd

Citations: 144 · Readers (Mendeley): 85

Abstract

We consider a crowdsourcing database system that may cleanse, populate, or filter its data by using human workers. Just like a conventional DB system, such a crowdsourcing DB system requires data manipulation functions such as select, aggregate, maximum, average, and so on, except that now it must rely on human operators (that, for example, compare two objects) with very different latency, cost, and accuracy characteristics. In this paper, we focus on one such function, maximum, that finds the highest ranked object or tuple in a set. In particular, we study two problems: given a set of votes (pairwise comparisons among objects), how do we select the maximum? And how do we improve our estimate by requesting additional votes? We show that in a crowdsourcing DB system, the optimal solution to both problems is NP-Hard. We then provide heuristic functions to select the maximum given evidence, and to select additional votes. We experimentally evaluate our functions to highlight their strengths and weaknesses. © 2012 ACM.
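To make the first problem concrete: given noisy pairwise comparisons from workers, one simple baseline (a sketch only, not the paper's heuristics) is to score each object by its wins minus losses across all votes and return the top scorer. The `predict_max` function and the sample vote list below are illustrative assumptions, not from the paper.

```python
from collections import Counter

def predict_max(votes):
    """Pick the object with the most pairwise wins minus losses
    (a Copeland-style score). Each vote is a (winner, loser) pair.
    This is a simple baseline, not the paper's proposed heuristics."""
    wins = Counter(w for w, _ in votes)
    losses = Counter(l for _, l in votes)
    objects = set(wins) | set(losses)
    return max(objects, key=lambda o: wins[o] - losses[o])

# Example: noisy votes over four objects; A wins 3 of its 4 comparisons.
votes = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("A", "D")]
print(predict_max(votes))  # prints "A"
```

Such a score ignores which objects were compared (and how often), which is exactly why the paper's evidence-based heuristics and its vote-selection strategy matter when the vote budget is limited.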

Citation (APA)

Guo, S., Parameswaran, A., & Garcia-Molina, H. (2012). So who won? Dynamic max discovery with the crowd. In Proceedings of the ACM SIGMOD International Conference on Management of Data (pp. 385–396). https://doi.org/10.1145/2213836.2213880
