Training Data Extraction From Pre-trained Language Models: A Survey


Abstract

As the deployment of pre-trained language models (PLMs) expands, pressing security concerns have arisen regarding the potential for malicious extraction of training data, posing a threat to data privacy. This study is the first to provide a comprehensive survey of training data extraction from PLMs. Our review covers more than 100 key papers in fields such as natural language processing and security. First, preliminary knowledge is recapped and a taxonomy of various definitions of memorization is presented. The approaches for attack and defense are then systemized. Furthermore, the empirical findings of several quantitative studies are highlighted. Finally, future research directions based on this review are suggested.
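To make the attack setting concrete: a common family of extraction attacks (surveyed in this paper) generates many candidate texts from a model and then ranks them by perplexity, on the assumption that memorized training data receives unusually low perplexity. The sketch below is illustrative only and is not code from the survey; it substitutes a toy add-one-smoothed unigram model for a real PLM's token probabilities, and all function names and the example strings are invented for this demonstration.

```python
import math
from collections import Counter

# Toy "language model": unigram probabilities with add-one smoothing,
# estimated from a tiny corpus. A real generate-then-rank attack would
# use a pre-trained language model's token log-probabilities instead.
def train_unigram(corpus_tokens):
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)
    # Add-one smoothing gives unseen tokens a small nonzero probability.
    return lambda tok: (counts[tok] + 1) / (total + vocab)

def perplexity(prob, tokens):
    # Lower perplexity means the model finds the sequence more familiar --
    # the signal an attacker uses to flag likely memorized training data.
    log_sum = sum(math.log(prob(t)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

def rank_candidates(prob, candidates):
    # Sort generated samples by ascending perplexity; an attacker would
    # inspect the top of this list for verbatim training data.
    return sorted(candidates, key=lambda c: perplexity(prob, c.split()))

# Hypothetical example: one candidate overlaps the "training" text.
training_text = "the secret key is 1234 the secret key is 1234".split()
lm = train_unigram(training_text)
candidates = ["the secret key is 1234", "completely unrelated random words"]
ranked = rank_candidates(lm, candidates)
print(ranked[0])  # prints "the secret key is 1234"
```

In practice, defenses discussed in this literature (e.g., deduplication of training data or differentially private training) aim to weaken exactly this perplexity gap between memorized and novel sequences.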

Citation (APA)

Ishihara, S. (2023). Training Data Extraction From Pre-trained Language Models: A Survey. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 260–275). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.trustnlp-1.23
