Adaptive attention span in transformers

Citations: 113
Readers (Mendeley): 592

Abstract

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computation time. We show the effectiveness of our approach on character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 by using a maximum context of 8k characters.
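
The core of the mechanism is a soft mask over relative distances, m_z(x) = clamp((R + z - x) / R, 0, 1), where z is a learnable span per attention head and R controls the width of the soft ramp; attention weights beyond the learned span are driven to zero, so memory and compute scale with the spans the heads actually use. Below is a minimal sketch of this masking step, assuming a PyTorch setup; the class and parameter names (AdaptiveSpanMask, span_ratio, ramp, max_span) are illustrative, not the authors' released API.

```python
# A minimal sketch of soft span masking for adaptive attention span.
# Assumes attention weights have already been computed and softmax-normalized.
import torch
import torch.nn as nn


class AdaptiveSpanMask(nn.Module):
    """Soft mask m_z(x) = clamp((R + z - x) / R, 0, 1) over relative distances x,
    where z is a learnable span (one per head) and R is the width of the soft ramp."""

    def __init__(self, num_heads: int, max_span: int, ramp: int = 32):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp
        # Learnable span ratio in [0, 1], scaled to [0, max_span] in forward().
        self.span_ratio = nn.Parameter(torch.zeros(num_heads, 1, 1))

    def forward(self, attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (batch, num_heads, query_len, span) normalized attention weights
        span = attn_weights.size(-1)
        # Relative distance of each attended key position from the current query token.
        distance = torch.arange(
            span - 1, -1, -1, device=attn_weights.device, dtype=attn_weights.dtype
        )
        z = self.span_ratio.clamp(0, 1) * self.max_span
        mask = ((self.ramp + z - distance) / self.ramp).clamp(0, 1)  # (heads, 1, span)
        # Mask out positions beyond the learned span and re-normalize each row.
        masked = attn_weights * mask
        return masked / (masked.sum(dim=-1, keepdim=True) + 1e-8)
```

In the paper, an L1 penalty on the learned spans is added to the training loss so that each head keeps its span as short as the task allows; at inference, the per-head spans bound how much cached context must be kept, which is what keeps the 8k-character maximum context affordable.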

Cite

APA

Sukhbaatar, S., Grave, E., Bojanowski, P., & Joulin, A. (2019). Adaptive attention span in transformers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (pp. 331–335). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1032
