Repulsive attention: Rethinking multi-head attention as Bayesian inference

13 citations · 121 Mendeley readers

Abstract

The neural attention mechanism plays an important role in many natural language processing applications. In particular, multi-head attention extends single-head attention by allowing a model to jointly attend to information from different perspectives. However, without explicit constraints, multi-head attention may suffer from attention collapse, an issue in which different heads extract similar attentive features, limiting the model's representation power. In this paper, for the first time, we provide a novel understanding of multi-head attention from a Bayesian perspective. Based on recently developed particle-optimization sampling techniques, we propose a non-parametric approach that explicitly improves the repulsiveness in multi-head attention and consequently strengthens the model's expressiveness. Remarkably, our Bayesian interpretation provides theoretical insight into the not-well-understood questions of why and how one uses multi-head attention. Extensive experiments on various attention models and applications demonstrate that the proposed repulsive attention can improve the learned feature diversity, leading to more informative representations with consistent performance improvements on multiple tasks.
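The core idea of particle-based repulsion can be illustrated with a small sketch. This is not the authors' implementation; it treats each head's flattened parameters as a "particle" and applies an SVGD-style repulsive gradient from an RBF kernel (kernel bandwidth `h` and the toy head vectors are illustrative assumptions), showing how nearly collapsed heads get pushed apart:

```python
import numpy as np

def rbf_kernel(theta, h=1.0):
    # theta: (n_heads, dim) flattened per-head parameters ("particles").
    sq = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * h ** 2))

def repulsive_grad(theta, h=1.0):
    # SVGD-style repulsive term: the gradient of the RBF kernel pushes
    # each particle away from its neighbors, discouraging collapse.
    K = rbf_kernel(theta, h)
    diff = theta[:, None, :] - theta[None, :, :]   # (n, n, dim)
    return (K[:, :, None] * diff).sum(axis=1) / (h ** 2)

# Two nearly collapsed heads (hypothetical 2-D parameters):
theta = np.array([[0.0, 0.0], [0.1, 0.0]])
theta_new = theta + 0.5 * repulsive_grad(theta)

gap_before = np.linalg.norm(theta[0] - theta[1])
gap_after = np.linalg.norm(theta_new[0] - theta_new[1])
print(gap_after > gap_before)  # → True: the repulsive step separates the heads
```

In the paper's full method this repulsive term is combined with the usual task-driven gradient, so heads stay useful while being driven toward diverse attentive features.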

Cite

APA

An, B., Lyu, J., Wang, Z., Li, C., Hu, C., Tan, F., … Chen, C. (2020). Repulsive attention: Rethinking multi-head attention as Bayesian inference. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 236–255). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.17
