Auditing data provenance in text-generation models

Abstract

To support enforcement of data-protection regulations such as GDPR and detect unauthorized uses of personal data, we develop a new model-auditing technique that lets users check whether their data was used to train a machine learning model. We focus on auditing deep-learning models that generate natural-language text, including word prediction and dialog generation. These models are at the core of popular online services and are often trained on personal data such as users' messages, searches, chats, and comments. We design and evaluate a black-box auditing method that can detect, with very few queries to a model, whether a particular user's texts were used to train it (among thousands of other users). We empirically show that our method can successfully audit well-generalized models that are not overfitted to the training data. We also analyze how text-generation models memorize word sequences and explain why this memorization makes them amenable to auditing.
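The core idea described in the abstract can be sketched as a user-level membership audit: query the model on a user's texts, record the rank of each true next word in the model's predicted distribution, and aggregate those ranks into an audit decision. The sketch below is a minimal illustration of that idea, not the authors' implementation; `toy_model`, the top-k cutoff, and the decision threshold are all hypothetical stand-ins.

```python
# Sketch of a black-box data-provenance audit for a text-generation model:
# query the model at each position in the user's texts, record the rank of
# the true next word among the model's predictions, and flag the user as a
# likely training-set member if true words concentrate at low ranks.
# `toy_model` and the threshold are hypothetical stand-ins, not the paper's.

from collections import Counter

def rank_of_true_word(predicted_words, true_word):
    """Rank (0 = top prediction) of the true next word in the model's
    ranked prediction list; worst rank if the word is absent."""
    try:
        return predicted_words.index(true_word)
    except ValueError:
        return len(predicted_words)

def rank_histogram(model, texts, top_k=5):
    """Query the model once per position and histogram the ranks of the
    true next words; members' texts tend to concentrate at low ranks."""
    counts, total = Counter(), 0
    for words in texts:
        for i in range(len(words) - 1):
            preds = model(words[:i + 1])[:top_k]  # black-box query
            counts[min(rank_of_true_word(preds, words[i + 1]), top_k)] += 1
            total += 1
    return [counts[r] / total for r in range(top_k + 1)]

def audit(model, texts, threshold=0.5):
    """Decide membership from the fraction of true words that the model
    ranks first (hypothetical threshold rule; the paper trains a classifier
    on such rank features instead)."""
    return rank_histogram(model, texts)[0] >= threshold

# Toy stand-in model: always predicts the same ranked word list, so a user
# whose texts match its top prediction looks like a training-set "member".
def toy_model(prefix):
    return ["the", "a", "an", "cat", "dog"]

member_texts = [["see", "the", "the", "the"]]           # true next words rank 0
nonmember_texts = [["see", "zebra", "yak", "xylophone"]]  # never predicted
print(audit(toy_model, member_texts))      # True
print(audit(toy_model, nonmember_texts))   # False
```

Because the audit only needs ranked predictions for a handful of the user's own sequences, it works with black-box query access and a small query budget, matching the setting described in the abstract.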

Citation (APA)

Song, C., & Shmatikov, V. (2019). Auditing data provenance in text-generation models. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 196–206). Association for Computing Machinery. https://doi.org/10.1145/3292500.3330885
