SURF: Improving classifiers in production by learning from busy and noisy end users


Abstract

Supervised learning classifiers inevitably make mistakes in production, perhaps mislabeling an email or flagging an otherwise routine transaction as fraudulent. It is vital that the end users of such a system are provided with a means of relabeling data points they deem to have been mislabeled. The classifier can then be retrained on the relabeled data points in the hope of improving performance. To reduce noise in this feedback data, well-known algorithms from the crowdsourcing literature can be employed. However, the feedback setting poses a new challenge: what should we do in the case of user non-response? If a user provides no feedback on a label, it can be dangerous to assume they implicitly agree: a user can be busy, lazy, or no longer a user of the system! We show that conventional crowdsourcing algorithms struggle in this user feedback setting, and present a new algorithm, SURF, that can cope with this non-response ambiguity.
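
The abstract does not spell out the SURF algorithm itself, but the difficulty it names can be illustrated with a minimal Python sketch (all names below are hypothetical; this is not the paper's method): aggregating user relabel feedback per data point, where non-response is treated as missing data rather than as implicit agreement with the current label.

    # Minimal sketch, not the SURF algorithm: majority vote over
    # explicit responses only. None marks non-response; counting it
    # as agreement with the current label would bias the vote toward
    # the status quo, since silent users may be busy, lazy, or gone.
    from collections import Counter

    def aggregate_feedback(responses, current_label):
        # responses: user id -> proposed label, or None if the user
        # gave no feedback on this data point.
        votes = Counter(v for v in responses.values() if v is not None)
        if not votes:
            return current_label  # no explicit feedback: keep the label
        return votes.most_common(1)[0][0]

    # Two of five users relabel "spam" as "ham"; the other three are
    # silent. Treating silence as three implicit "spam" votes would
    # wrongly keep the original label.
    feedback = {"u1": "ham", "u2": "ham", "u3": None, "u4": None, "u5": None}
    print(aggregate_feedback(feedback, current_label="spam"))  # -> "ham"

Plain majority voting ignores per-user reliability; classic crowdsourcing aggregators such as Dawid-Skene instead estimate each annotator's error rates from their responses, and it is precisely such reliability estimates that become ambiguous when a user's silence could mean either agreement or absence, which appears to be the gap the abstract says SURF addresses.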

Citation (APA)

Lockhart, J., Assefa, S., Alajdad, A., Alexander, A., Balch, T., & Veloso, M. (2020). SURF: Improving classifiers in production by learning from busy and noisy end users. In ICAIF 2020 - 1st ACM International Conference on AI in Finance. Association for Computing Machinery. https://doi.org/10.1145/3383455.3422547
