Making Machine Learning Forget


Abstract

Machine learning models often overfit to their training data rather than learning general patterns the way humans do. This allows an attacker with access to the model to infer private membership or attribute information about the training data. We argue that this vulnerability makes current machine learning models indirect stores of the personal data used for training, and that the corresponding data protection regulations must therefore apply to the models as well. In this position paper, we specifically analyze how the "right to be forgotten" provided by the European Union's General Data Protection Regulation (GDPR) can be implemented for current machine learning models, and which techniques can be used to build future models that can forget. This document also serves as a call to action for researchers and policy-makers to identify other technologies that can be used for this purpose.
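The membership-inference risk the abstract describes can be illustrated with a confidence-thresholding sketch: an overfit model assigns systematically higher confidence to its training points than to unseen points, and an attacker exploits that gap. This is a minimal illustration under assumed synthetic data and an assumed threshold, not the attack analyzed in the paper itself.

```python
# Sketch of a confidence-threshold membership-inference attack.
# Data, model choice, and the 0.9 threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "private" training set and a held-out set from the same distribution.
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
X_out = rng.normal(size=(200, 10))
y_out = (X_out[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# Deliberately overfit: unbounded-depth trees memorize training points.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Model's predicted probability for each point's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_in = true_label_confidence(model, X_train, y_train)   # members
conf_out = true_label_confidence(model, X_out, y_out)      # non-members

# The attacker guesses "member" whenever confidence exceeds a threshold.
guesses = np.concatenate([conf_in, conf_out]) > 0.9
truth = np.concatenate([np.ones(200), np.zeros(200)])
attack_acc = (guesses == truth).mean()
print(f"mean confidence in/out: {conf_in.mean():.2f}/{conf_out.mean():.2f}")
print(f"attack accuracy: {attack_acc:.2f}")  # above 0.5 indicates leakage
```

The confidence gap between members and non-members is exactly the overfitting signal the paper argues turns a trained model into an indirect store of personal data; techniques for "forgetting" aim to remove a data point's contribution to that signal.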

Citation (APA)

Shintre, S., Roundy, K. A., & Dhaliwal, J. (2019). Making Machine Learning Forget. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11498 LNCS, pp. 72–83). Springer Verlag. https://doi.org/10.1007/978-3-030-21752-5_6
