Outlier Dimensions that Disrupt Transformers are Driven by Frequency

Abstract

While Transformer-based language models are generally very robust to pruning, the recently discovered outlier phenomenon is a striking exception: disabling only 48 out of BERT-base's 110M parameters drops its performance by nearly 30% on MNLI. We replicate the original evidence for the outlier phenomenon and link it to the geometry of the embedding space. We find that in both BERT and RoBERTa the magnitude of the hidden-state coefficients in the outlier dimensions correlates with the frequency of the encoded tokens in the pre-training data, and that it also contributes to the “vertical” self-attention pattern that enables the model to focus on the special tokens. This explains the performance drop from disabling the outliers, and it suggests that decreasing anisotropy in future models will require pre-training schemes that better account for the skewed token distributions.
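The “disabling” above zeroes the LayerNorm scaling factor and bias of one hidden dimension in every encoder layer; with BERT-base's 12 layers and two LayerNorms per layer, one dimension's weight and bias account for exactly the 48 parameters the abstract mentions. The sketch below reproduces the shape of that experiment with the Hugging Face transformers library; it is not the authors' released code, and the index OUTLIER_DIM is an illustrative placeholder, since the actual outlier dimensions are identified empirically per model.

```python
# Minimal sketch of the outlier-disabling experiment (assumed setup, not the
# authors' code): zero the LayerNorm weight and bias of one candidate outlier
# dimension in every encoder layer of BERT-base.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"
OUTLIER_DIM = 308  # hypothetical candidate index; hidden size is 768

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

with torch.no_grad():
    for layer in model.encoder.layer:
        # Each BertLayer has two LayerNorms (after self-attention and after
        # the feed-forward block). Zeroing weight and bias at one dimension
        # in all 12 layers x 2 LayerNorms touches exactly 48 parameters.
        for ln in (layer.attention.output.LayerNorm, layer.output.LayerNorm):
            ln.weight[OUTLIER_DIM] = 0.0
            ln.bias[OUTLIER_DIM] = 0.0

# Sanity check: activations along the disabled dimension collapse to zero,
# since the final hidden state passes through the last (now zeroed) LayerNorm.
inputs = tokenizer("Outlier dimensions are driven by frequency.",
                   return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
print(hidden[0, :, OUTLIER_DIM].abs().max())
```

Measuring the performance drop the abstract reports would then amount to re-running MNLI evaluation with and without this patch applied; the snippet only verifies that the targeted dimension is silenced.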

Citation (APA)

Puccetti, G., Rogers, A., Drozd, A., & Dell’Orletta, F. (2022). Outlier Dimensions that Disrupt Transformers are Driven by Frequency. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 1286–1304). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.528
