Crowdsourced PAC Learning under Classification Noise

Abstract

In this paper, we analyze PAC learnability from labels produced by crowdsourcing. In our setting, unlabeled examples are drawn from a distribution, and labels are crowdsourced from workers who operate under classification noise, each with their own noise parameter. We develop an end-to-end crowdsourced PAC learning algorithm that takes unlabeled data points as input and outputs a trained classifier. Our three-step algorithm combines majority voting, pure-exploration bandits, and noisy-PAC learning. We prove several guarantees on the number of tasks labeled by workers for PAC learning in this setting and show that our algorithm improves upon the baseline by reducing the total number of tasks given to workers. We demonstrate the robustness of our algorithm by exploring its application to additional realistic crowdsourcing settings.
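To make the noise model concrete, the following is a minimal sketch (not the authors' implementation) of the majority-voting step under classification noise, where worker i independently flips the true binary label with probability eta_i < 1/2. The function names and noise rates below are illustrative assumptions.

```python
# Hypothetical sketch of majority-vote aggregation under the
# classification-noise model; all names and values are illustrative.
import random

def noisy_label(true_label: int, eta: float) -> int:
    """Return the true label, flipped with probability eta (worker's noise rate)."""
    return 1 - true_label if random.random() < eta else true_label

def majority_vote(true_label: int, etas: list[float]) -> int:
    """Collect one noisy label per worker and aggregate by majority vote."""
    votes = [noisy_label(true_label, eta) for eta in etas]
    return int(sum(votes) > len(votes) / 2)

# Example: 7 workers with heterogeneous noise rates label one example.
etas = [0.1, 0.2, 0.3, 0.25, 0.15, 0.35, 0.4]
print(majority_vote(true_label=1, etas=etas))
```

Because each noise rate is below 1/2, the aggregated label is correct with probability strictly greater than any single worker's, which is what makes majority voting a useful first step before the bandit and noisy-PAC stages.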

Citation (APA)

Heinecke, S., & Reyzin, L. (2019). Crowdsourced PAC Learning under Classification Noise. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, pp. 42–49). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/hcomp.v7i1.5279
