Aggregating Reliable Submissions in Crowdsourcing Systems


This article is free to access.

Abstract

Crowdsourcing is a cost-effective method that gathers crowd wisdom to solve problems that are hard for machines. In crowdsourcing systems, requesters post tasks to obtain reliable solutions. However, since workers vary in expertise and knowledge background, they may deliver low-quality or ambiguous submissions. To deal with this problem, crowdsourcing systems generally employ a task aggregation scheme. Existing methods focus mainly on structured submissions and do not consider the cost incurred in completing a task. We exploit features of submissions to improve task aggregation, proposing a method applicable to both structured and unstructured tasks. Moreover, existing probabilistic methods for answer aggregation are sensitive to sparsity. Our approach uses a generative probabilistic model that incorporates similarity between answers along with worker and task features. We then present a method for minimizing the cost of tasks, which ultimately improves the quality of answers. Experiments on empirical data demonstrate the effectiveness of our method compared with state-of-the-art approaches.
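The abstract describes aggregation that combines answer similarity with worker reliability. As a rough illustration of that general idea (not the authors' generative model), the sketch below iterates between choosing a consensus answer per task and re-estimating each worker's reliability from agreement with the consensus; the token-overlap `similarity` function is a hypothetical stand-in that lets the same loop handle unstructured text answers as well as exact-match structured ones.

```python
# Illustrative sketch only: similarity-aware answer aggregation with
# iteratively re-estimated worker reliabilities. The paper's actual
# method is a generative probabilistic model; this is a simpler analogue.

def similarity(a, b):
    """Jaccard similarity over whitespace tokens; 1.0 for identical answers."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def aggregate(submissions, iterations=10):
    """submissions: {task: {worker: answer}} -> (consensus, reliability).

    Each round: (1) per task, pick the submitted answer with the highest
    reliability-weighted similarity to all submissions for that task;
    (2) re-estimate each worker's reliability as their mean similarity
    to the current consensus answers.
    """
    workers = {w for answers in submissions.values() for w in answers}
    reliability = {w: 1.0 for w in workers}   # start by trusting everyone
    consensus = {}
    for _ in range(iterations):
        for task, answers in submissions.items():
            consensus[task] = max(
                answers.values(),
                key=lambda cand: sum(
                    reliability[w] * similarity(cand, ans)
                    for w, ans in answers.items()),
            )
        for w in workers:
            scores = [similarity(ans[w], consensus[t])
                      for t, ans in submissions.items() if w in ans]
            reliability[w] = sum(scores) / len(scores)
    return consensus, reliability
```

With two reliable workers and one noisy one, the noisy worker's weight drops after the first round, so their submissions stop influencing the consensus.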


CITATION STYLE

APA

Kurup, A. R., Sajeev, G. P., & Swaminathan, J. (2021). Aggregating Reliable Submissions in Crowdsourcing Systems. IEEE Access, 9, 153058–153071. https://doi.org/10.1109/ACCESS.2021.3127994
