Quality assurance is one of the most important problems in crowdsourcing and human computation, and it has been extensively studied from various perspectives. Typical approaches to quality assurance include unsupervised approaches, such as introducing task redundancy (i.e., asking the same question to multiple workers and aggregating their answers), and supervised approaches, such as using worker performance on past tasks or injecting qualification questions into tasks to estimate worker performance. In this paper, we propose to utilize worker performance as a global constraint for inferring the true answers. Existing semi-supervised approaches do not exploit qualification questions in this way. We also propose to utilize the constraint as a regularizer combined with existing statistical aggregation methods. Experiments using heterogeneous multiple-choice questions demonstrate that the performance constraint not only can estimate the ground truths on its own, but also boosts existing aggregation methods when used as a regularizer.
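To make the supervised setting concrete, the following is a minimal sketch (not the paper's method) of how per-worker accuracies estimated from qualification questions can steer answer aggregation. It uses log-odds weighted voting under a simple symmetric-noise model for multiple-choice questions; all function and variable names here are illustrative assumptions.

```python
import math
from collections import defaultdict

def weighted_vote(answers, accuracy, n_choices):
    """Aggregate multiple-choice answers using known worker accuracies.

    answers:   dict mapping question id -> list of (worker, choice) pairs
    accuracy:  dict mapping worker -> accuracy in (0, 1), e.g. measured
               on injected qualification questions
    n_choices: number of options per question

    Under a symmetric-noise model (a worker is correct with probability p
    and otherwise picks uniformly among the K-1 wrong options), the
    Bayes-optimal vote weight is log(p * (K - 1) / (1 - p)).
    """
    result = {}
    for q, votes in answers.items():
        score = defaultdict(float)
        for worker, choice in votes:
            p = min(max(accuracy[worker], 1e-6), 1 - 1e-6)  # clamp away from 0/1
            score[choice] += math.log(p * (n_choices - 1) / (1 - p))
        result[q] = max(score, key=score.get)  # highest-weighted choice wins
    return result

# One highly accurate worker can outweigh two mediocre ones:
answers = {"q1": [("a", 0), ("b", 1), ("c", 1)],
           "q2": [("a", 2), ("b", 2), ("c", 0)]}
accuracy = {"a": 0.95, "b": 0.6, "c": 0.6}
estimates = weighted_vote(answers, accuracy, n_choices=3)
```

In this sketch the accuracies act only as fixed vote weights; the paper instead treats measured performance as a global constraint, or a regularizer on top of statistical aggregation methods, when inferring the true answers.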
CITATION STYLE
Li, J., Kawase, Y., Baba, Y., & Kashima, H. (2020). Performance as a constraint: An improved wisdom of crowds using performance regularization. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1534–1541). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/213