On the Impact of Predicate Complexity in Crowdsourced Classification Tasks

4 citations · 8 readers on Mendeley
Abstract

This paper explores and offers guidance on a specific and relevant problem in task design for crowdsourcing: how to formulate a complex question used to classify a set of items. In micro-task markets, classification remains among the most popular tasks. We situate our work in the context of information retrieval and multi-predicate classification, i.e., classifying a set of items based on a set of conditions. Our experiments cover a wide range of tasks and domains, and consider crowd workers both alone and in tandem with machine learning classifiers. We provide empirical evidence on how the resulting classification performance is affected by different predicate formulation strategies, emphasizing the importance of predicate formulation as a task design dimension in crowdsourcing.

Citation (APA)

Ramírez, J., Baez, M., Casati, F., Cernuzzi, L., Benatallah, B., Taran, E. A., & Malanina, V. A. (2021). On the Impact of Predicate Complexity in Crowdsourced Classification Tasks. In WSDM 2021 - Proceedings of the 14th ACM International Conference on Web Search and Data Mining (pp. 67–75). Association for Computing Machinery, Inc. https://doi.org/10.1145/3437963.3441831
