Knowledge Enhanced Quality Estimation for Crowdsourcing


Abstract

Estimating the quality of answers is one of the key challenges in crowdsourcing. Previous methods focus on quality estimation for objective tasks, whereas subjective tasks, a common type of crowdsourcing task, have not been well studied. In this paper, we focus on quality estimation for subjective crowdsourcing tasks. Given the high uncertainty of answers to subjective tasks, we propose a background-knowledge-enhanced quality estimation method. More specifically, we first learn distributed knowledge representations from knowledge graphs and text corpora using a multi-task learning framework. Then, we construct a pseudo-gold answer set for each task. Next, by comparing each provided answer with the derived pseudo-gold answer set, we calculate two scores for the answer: 1) a symbolic score, which measures symbolic similarity, and 2) an embedding score, which measures embedding similarity. Finally, we obtain the final score for each answer by combining these two scores. Extensive experiments on both universal and domain-specific crowdsourcing tasks show that our method outperforms the baselines.
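To make the scoring step concrete, below is a minimal sketch of how the two similarity scores might be computed and combined. The abstract does not specify the exact formulas, so the Jaccard-based symbolic score, the cosine-based embedding score, and the combination weight alpha are all illustrative assumptions, not the authors' actual method.

import numpy as np

def symbolic_score(answer: str, pseudo_gold: list[str]) -> float:
    """Symbolic similarity: max Jaccard token overlap between the answer
    and the pseudo-gold answer set (an assumed stand-in measure)."""
    a = set(answer.lower().split())
    best = 0.0
    for gold in pseudo_gold:
        g = set(gold.lower().split())
        if a or g:  # guard against empty sets
            best = max(best, len(a & g) / len(a | g))
    return best

def embedding_score(answer_vec: np.ndarray, gold_vecs: np.ndarray) -> float:
    """Embedding similarity: max cosine similarity between the answer's
    embedding and the pseudo-gold embeddings (assumed to come from the
    knowledge-enhanced representations learned via multi-task learning)."""
    a = answer_vec / np.linalg.norm(answer_vec)
    g = gold_vecs / np.linalg.norm(gold_vecs, axis=1, keepdims=True)
    return float(np.max(g @ a))

def final_score(sym: float, emb: float, alpha: float = 0.5) -> float:
    """Combine the two scores; a convex combination with tunable weight
    alpha is one plausible choice, assumed here for illustration."""
    return alpha * sym + (1.0 - alpha) * emb

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gold_vecs = rng.normal(size=(2, 8))          # stand-in pseudo-gold embeddings
    ans_vec = gold_vecs[0] + 0.1 * rng.normal(size=8)
    sym = symbolic_score("paris", ["Paris", "the city of Paris"])
    emb = embedding_score(ans_vec, gold_vecs)
    print(final_score(sym, emb))

Taking the maximum over the pseudo-gold set (rather than, say, the mean) is itself a design assumption: it rewards an answer for matching any acceptable variant, which suits subjective tasks with multiple valid answers.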

Citation (APA)

Wang, S., Dang, D., Guo, Z., Chen, C., & Yu, W. (2019). Knowledge Enhanced Quality Estimation for Crowdsourcing. IEEE Access, 7, 106694–106704. https://doi.org/10.1109/ACCESS.2019.2932149
