Aligned Weight Regularizers for Pruning Pretrained Neural Networks


Abstract

While various avenues of research have been explored for iterative pruning, little is known about what effect pruning has on zero-shot test performance and its potential implications for the choice of pruning criteria. This pruning setup is particularly important for cross-lingual models that implicitly learn alignment between language representations during pretraining; if this alignment is distorted by pruning, performance suffers not only on the language data used for retraining but also on the zero-shot languages used for evaluation. In this work, we show that there is a clear performance discrepancy in magnitude-based pruning when comparing standard supervised learning to the zero-shot setting. From this finding, we propose two weight regularizers that aim to maximize the alignment between units of pruned and unpruned networks, mitigating alignment distortion in pruned cross-lingual models and performing well in both non-zero-shot and zero-shot settings. We provide experimental results on cross-lingual tasks in the zero-shot setting using XLM-RoBERTa (Base), where we also find that pruning causes varying degrees of representational degradation depending on the language of the zero-shot test set. This is also the first study that focuses on cross-lingual language model compression.
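The abstract does not give the exact form of the proposed regularizers, but the two building blocks it names can be sketched generically: magnitude-based pruning (zeroing the smallest-magnitude weights) and an alignment penalty between representations of the pruned and unpruned networks. The sketch below is a minimal illustration in NumPy, assuming a mean-squared-distance penalty as a stand-in for the paper's alignment objective; function names and the penalty form are hypothetical, not the authors' implementation.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    Returns the pruned weights and the boolean keep-mask.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of entries to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep entries above the cutoff
    return weights * mask, mask

def alignment_penalty(h_unpruned, h_pruned):
    """Hypothetical alignment regularizer: mean squared distance between
    hidden representations of the unpruned and pruned networks. The paper's
    actual regularizers may use a different similarity measure."""
    return float(np.mean((h_unpruned - h_pruned) ** 2))

# Toy example: prune one linear layer to 50% sparsity and measure how far
# its output drifts from the unpruned layer on the same input.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
x = rng.normal(size=(4,))
W_pruned, mask = magnitude_prune(W, 0.5)
penalty = alignment_penalty(W @ x, W_pruned @ x)
```

During retraining, a term like `penalty` would be added to the task loss so that the pruned network is pulled toward the unpruned network's representations, which is the intuition behind mitigating alignment distortion.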

Citation (APA)

O’Neill, J., Dutta, S., & Assem, H. (2022). Aligned Weight Regularizers for Pruning Pretrained Neural Networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3391–3401). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.267
