Analyzing Information Leakage of Updates to Natural Language Models

Abstract

To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models. We show that a differential analysis of language model snapshots before and after an update can reveal a surprising amount of detailed information about changes in the training data. We propose two new metrics, differential score and differential rank, for analyzing the leakage due to updates of natural language models. We perform leakage analysis using these metrics across models trained on several different datasets using different methods and configurations. We discuss the privacy implications of our findings, propose mitigation strategies, and evaluate their effect.
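The idea of a differential analysis can be illustrated with a toy sketch. The snippet below is not the paper's definition of its metrics; it is a hedged illustration, assuming hypothetical next-token probability tables for two model snapshots, of how a per-token "differential score" (change in log-probability across an update) and a rank derived from it might be computed.

```python
import math

# Hypothetical next-token probabilities for two snapshots of a language
# model, before and after retraining on updated data (illustrative only).
p_before = {"alice": 0.02, "bob": 0.05, "the": 0.40}
p_after = {"alice": 0.20, "bob": 0.05, "the": 0.30}

def differential_score(token):
    """Change in log-probability of a token across the model update."""
    return math.log(p_after[token]) - math.log(p_before[token])

scores = {t: differential_score(t) for t in p_before}

# A simple "differential rank": a token's position when all tokens are
# sorted by descending differential score. Tokens whose probability rose
# sharply after the update (here, "alice") rank first, hinting at what
# changed in the training data.
ranked = sorted(scores, key=scores.get, reverse=True)
rank = {t: i for i, t in enumerate(ranked)}
```

In this toy example, "alice" tops the ranking because its probability jumped tenfold across the update, which is exactly the kind of signal a differential analysis exploits.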

Citation (APA)

Zanella-Béguelin, S., Wutschitz, L., Tople, S., Rühle, V., Paverd, A., Ohrimenko, O., … Brockschmidt, M. (2020). Analyzing Information Leakage of Updates to Natural Language Models. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 363–375). Association for Computing Machinery. https://doi.org/10.1145/3372297.3417880
