Regulations such as the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the US include provisions on the right to be forgotten, which mandate that industry applications remove data related to an individual from their systems. For real-world industry applications that use Machine Learning to build models on user data, such mandates require significant effort in terms of both data cleansing and model retraining, while ensuring the models do not deteriorate in prediction quality due to the removal of data. As a result, continuous data removal and model retraining do not scale when these applications receive such requests at a high frequency. Recently, researchers proposed the idea of Machine Unlearning to tackle this challenge. Despite the importance of this task, Machine Unlearning remains under-explored for Natural Language Processing (NLP) tasks. In this paper, we explore the Unlearning framework on various GLUE tasks (Wang et al., 2018), such as QQP, SST, and MNLI. We propose computationally efficient approaches (SISA-FC and SISA-A) that perform guaranteed Unlearning and yield significant reductions in memory (90-95%), time (100x), and space consumption (99%) compared to the baselines, with minimal impact on model performance.
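The method names SISA-FC and SISA-A indicate that the paper builds on the SISA (Sharded, Isolated, Sliced, Aggregated) framework of Bourtoule et al. (2021): the training data is partitioned into disjoint shards, one constituent model is trained per shard, and predictions are aggregated across shards, so honouring a deletion request only requires retraining the single shard that contained the record. The following is a minimal sketch of that core idea on synthetic data; the shard assignment scheme and the train_shard, predict, and unlearn helpers are illustrative assumptions, not the paper's implementation, which further reduces per-shard retraining cost in its SISA-FC and SISA-A variants.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

N_SHARDS = 5
# Hypothetical shard assignment: record i belongs to shard i % N_SHARDS.
shard_idx = [np.arange(i, len(X), N_SHARDS) for i in range(N_SHARDS)]

def train_shard(idx):
    # One isolated constituent model per shard of the training data.
    return LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shard_idx]

def predict(x):
    # Aggregate the per-shard predictions by majority vote.
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    return np.bincount(votes).argmax()

def unlearn(record_id):
    # Guaranteed removal: drop the record and retrain only the shard
    # that held it; the other N_SHARDS - 1 models are left untouched.
    s = record_id % N_SHARDS
    shard_idx[s] = shard_idx[s][shard_idx[s] != record_id]
    models[s] = train_shard(shard_idx[s])

unlearn(42)           # honour a right-to-be-forgotten request for record 42
print(predict(X[0]))  # aggregated prediction remains available afterwards

Because each shard's model is trained in isolation, retraining after a deletion touches only 1/N_SHARDS of the data, which is the source of the time and compute savings that the paper's variants push further for transformer models.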
CITATION STYLE
Kumar, V. B., Gangadharaiah, R., & Roth, D. (2023). Privacy Adhering Machine Un-learning in NLP. In IJCNLP-AACL 2023 - 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (pp. 268–277). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-ijcnlp.25