ACRyLIQ: Leveraging DBpedia for adaptive crowdsourcing in linked data quality assessment

Abstract

Crowdsourcing has emerged as a powerful paradigm for the quality assessment and improvement of Linked Data. A major challenge in employing crowdsourcing for quality assessment of Linked Data is the cold-start problem: how can the reliability of crowd workers be estimated so that the most reliable workers are assigned to tasks? We address this challenge by proposing a novel approach that generates test questions from DBpedia based on the topics associated with the quality assessment tasks. These test questions are used to estimate the reliability of new workers, and tasks are then dynamically assigned to reliable workers to improve the accuracy of the collected responses. Our proposed approach, ACRyLIQ, is evaluated on two real-world Linked Data datasets using workers hired from Amazon Mechanical Turk. We validate the approach in terms of accuracy and compare it against a baseline that estimates reliability using gold-standard tasks. The results demonstrate that our approach achieves high accuracy without using gold-standard tasks.
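To make the described workflow concrete, here is a minimal sketch, not the authors' implementation, of the two steps the abstract outlines: generating test questions from DBpedia facts about a topic, and scoring a new worker's reliability from their answers. The public DBpedia SPARQL endpoint is real; the true/false question format, function names, and the accuracy-fraction reliability estimate are illustrative assumptions.

```python
# Sketch of the abstract's workflow under stated assumptions:
# DBpedia facts stand in for gold-standard tasks.
import random
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = SPARQLWrapper("https://dbpedia.org/sparql")

def facts_for_topic(topic_uri, limit=10):
    """Fetch English literal facts about a topic from DBpedia."""
    ENDPOINT.setQuery(f"""
        SELECT ?p ?o WHERE {{
          <{topic_uri}> ?p ?o .
          FILTER(isLiteral(?o) && lang(?o) = "en")
        }} LIMIT {limit}
    """)
    ENDPOINT.setReturnFormat(JSON)
    rows = ENDPOINT.query().convert()["results"]["bindings"]
    return [(topic_uri, r["p"]["value"], r["o"]["value"]) for r in rows]

def test_questions(facts):
    """Turn facts into true/false questions: genuine triples are 'true';
    triples with a swapped-in object serve as 'false' distractors."""
    questions = [(fact, True) for fact in facts]
    objects = [o for (_, _, o) in facts]
    for (s, p, o) in facts:
        wrong = random.choice([x for x in objects if x != o] or [o])
        questions.append(((s, p, wrong), wrong == o))
    random.shuffle(questions)
    return questions

def reliability(worker_answers, questions):
    """Estimate a worker's reliability as the fraction of correct answers."""
    correct = sum(1 for ans, (_, truth) in zip(worker_answers, questions)
                  if ans == truth)
    return correct / len(questions) if questions else 0.0
```

Under this sketch, a new worker's first few answers yield a reliability estimate without any manually curated gold-standard tasks; quality assessment tasks would then be routed only to workers whose estimate exceeds a chosen threshold.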

Cite

APA

ul Hassan, U., Zaveri, A., Marx, E., Curry, E., & Lehmann, J. (2016). ACRyLIQ: Leveraging DBpedia for adaptive crowdsourcing in linked data quality assessment. In Lecture Notes in Computer Science (Vol. 10024, pp. 681–696). Springer. https://doi.org/10.1007/978-3-319-49004-5_44
