This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution. Specifically, building on the work of Huang et al. (2021), we extend their bilevel optimization approach to generate unlearnable text using a gradient-based search technique. Although effective, this approach faces practical limitations: it requires batches of instances and knowledge of the model architecture, neither of which is readily available to ordinary users who have access only to their own data. Furthermore, even with semantic-preserving constraints, unlearnable noise can alter the text's semantics. To address these challenges, we extract simple patterns from unlearnable text produced by bilevel optimization and demonstrate that the data remains unlearnable to unknown models. Moreover, these patterns are neither instance- nor dataset-specific, so users can readily apply them to text classification and question-answering tasks, and they remain effective even if only a small proportion of users apply them to their public content. We also open-source our code for generating unlearnable text and assessing unlearnable noise, to benefit the public and future studies.
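The paper itself specifies the extracted patterns; purely as a hypothetical illustration of the general idea (not the authors' actual patterns or insertion strategy), applying a fixed, label-dependent textual pattern to user content might be sketched as follows. The pattern strings and the prepend-only insertion below are invented assumptions for illustration.

```python
# Hypothetical sketch: prepend a short, label-dependent pattern token to text,
# giving a classifier a trivial shortcut to learn instead of the real content.
# The pattern strings here are invented placeholders, not the paper's patterns.

PATTERNS = {0: "qx", 1: "zv"}  # assumed per-class pattern tokens

def apply_pattern(text: str, label: int) -> str:
    """Return the text with a label-dependent pattern token prepended."""
    return f"{PATTERNS[label]} {text}"

protected = apply_pattern("I really enjoyed this movie.", 1)
```

A model trained on such data can minimize its loss by keying on the inserted pattern alone, which is the intuition behind making the underlying text unlearnable.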
CITATION STYLE
Li, X., Liu, M., & Gao, S. (2023). Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 249–259). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.trustnlp-1.22