Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models

Abstract

The pruning objective has recently extended beyond accuracy and sparsity to robustness in language models. Even so, existing methods struggle to maintain robustness against adversarial attacks as model sparsity continually increases, and they require a retraining process. In the era of large language models, these issues become increasingly prominent. This paper proposes that the robustness of language models is proportional to the extent of pre-trained knowledge they encompass. Accordingly, we introduce a post-training pruning strategy designed to faithfully replicate the embedding space and feature space of dense language models, aiming to conserve more pre-trained knowledge during the pruning process. In this setup, each layer's reconstruction error originates not only from itself but also accumulates from preceding layers, and is then corrected by an adaptive rectification step. Compared with other state-of-the-art baselines, our approach achieves a superior balance between accuracy, sparsity, robustness, and pruning cost with BERT on the SST2, IMDB, and AGNews datasets, marking a significant stride towards robust pruning in language models.
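The abstract describes a layer-wise reconstruction scheme in which each pruned layer is fit not only against its own dense output but also against the error accumulated from earlier pruned layers, followed by a rectification of the surviving weights. The sketch below illustrates that idea under stated assumptions: the linear-layer setting, the magnitude-based pruning criterion, the least-squares rectification, and the helper names (`prune_with_error_feedback`, `magnitude_mask`) are all illustrative choices, not the authors' actual implementation.

```python
import numpy as np

def magnitude_mask(W, sparsity):
    """Keep the largest-magnitude weights, zero out the rest.
    Magnitude pruning is an illustrative criterion, not the paper's."""
    k = min(int(W.size * sparsity), W.size - 1)
    thresh = np.sort(np.abs(W), axis=None)[k]
    return (np.abs(W) >= thresh).astype(W.dtype)

def prune_with_error_feedback(weights, X, sparsity):
    """Sketch: prune layer by layer, feeding each layer the *pruned*
    activations so that its reconstruction target reflects the cumulative
    error of all preceding layers, then rectify the kept weights.
    Nonlinearities are omitted for brevity."""
    dense_act = X    # activations in the dense model
    pruned_act = X   # activations in the pruned model (carry accumulated error)
    pruned_weights = []
    for W in weights:
        target = dense_act @ W              # dense output to replicate
        mask = magnitude_mask(W, sparsity)
        # Adaptive rectification (crude stand-in): re-fit the layer by least
        # squares so that, run on error-carrying inputs, it matches the dense
        # output; then re-apply the sparsity mask to the solution.
        W_hat, *_ = np.linalg.lstsq(pruned_act, target, rcond=None)
        W_p = W_hat * mask
        pruned_weights.append(W_p)
        dense_act = target
        pruned_act = pruned_act @ W_p       # propagate cumulative error forward
    return pruned_weights

# Usage on random stand-in weights and calibration data:
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(64, 64)) for _ in range(3)]
X = rng.normal(size=(256, 64))
pruned = prune_with_error_feedback(Ws, X, sparsity=0.5)
```

The cumulative-error idea enters through `pruned_act`: because each rectification is solved against the error-carrying pruned inputs rather than the clean dense ones, every layer compensates for the drift introduced by the layers before it.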

Citation (APA)

Li, J., Lei, Q., Cheng, W., & Xu, D. (2023). Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1229–1247). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.79
