Crowdsourcing has emerged as an effective paradigm for accomplishing various intelligent tasks at low cost. However, the labels provided by non-expert crowdsourcing labelers often vary in quality, because labelers possess wide-ranging levels of competence. This raises the significant challenges of estimating the true answers for tasks and the reliability of the labelers. Among the numerous approaches to estimating labeler quality, expectation-maximization (EM) is widely used: it derives maximum-likelihood estimates of labeler quality from the observed multiple labels. However, EM-based approaches are easily trapped in local optima. In this paper we use a weight vector to represent the quality (reliability) of the corresponding labelers and then use differential evolution (DE) to search for optimal weights for different labelers. The experimental results validate the effectiveness of the proposed approach.
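The abstract does not give implementation details, but the core idea — one weight per labeler, optimized by DE, with aggregated answers obtained by weighted voting — can be sketched roughly as follows. The toy data, the classic DE/rand/1/bin variant, and the fitness function (accuracy of weighted majority voting on tasks with known answers) are illustrative assumptions here, not the paper's actual formulation.

```python
import random

# Hypothetical toy setup: 3 labelers, 5 binary tasks.
# Each row is one task; column j holds labeler j's label (0/1).
LABELS = [
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
]
# Assumed ground truth, used only to score candidate weight vectors
# in this sketch (a stand-in for the paper's fitness function).
TRUE = [1, 0, 1, 0, 1]

def aggregate(weights, labels_row):
    """Weighted vote: return the label receiving the largest total weight."""
    score = {0: 0.0, 1: 0.0}
    for w, label in zip(weights, labels_row):
        score[label] += w
    return max(score, key=score.get)

def fitness(weights):
    """Accuracy of the weighted-vote answers against the assumed truth."""
    correct = sum(aggregate(weights, row) == t for row, t in zip(LABELS, TRUE))
    return correct / len(LABELS)

def differential_evolution(dim, pop_size=20, F=0.5, CR=0.9, gens=100, seed=0):
    """DE/rand/1/bin over weight vectors in [0, 1]^dim (simplified:
    no forced mutant dimension in the crossover)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct individuals other than the current one.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation a + F*(b - c), clipped to [0, 1], with binomial crossover.
            trial = [
                min(1.0, max(0.0, a[d] + F * (b[d] - c[d])))
                if rng.random() < CR else pop[i][d]
                for d in range(dim)
            ]
            # Greedy selection: keep the trial if it is at least as fit.
            if fitness(trial) >= fitness(pop[i]):
                pop[i] = trial
    return max(pop, key=fitness)

best = differential_evolution(dim=3)
```

In this toy data, labeler 0 is perfectly accurate, so any weight vector dominated by that labeler scores 1.0 (e.g. `fitness([1.0, 0.0, 0.0])`); DE's role in the real setting is to find such a weighting without these answers being hand-designed.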
Citation: Qiu, C., Jiang, L., & Cai, Z. (2018). Using differential evolution to estimate labeler quality for crowdsourcing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11013 LNAI, pp. 165–173). Springer Verlag. https://doi.org/10.1007/978-3-319-97310-4_19