SHAPE: Shifted Absolute Position Embedding for Transformers

Abstract

Position representation is crucial for building position-aware representations in Transformers. Existing position representations suffer from a lack of generalization to test data with unseen lengths or high computational cost. We investigate shifted absolute position embedding (SHAPE) to address both issues. The basic idea of SHAPE is to achieve shift invariance, which is a key property of recent successful position representations, by randomly shifting absolute positions during training. We demonstrate that SHAPE is empirically comparable to its counterpart while being simpler and faster.
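The abstract only sketches the mechanism, so below is a minimal sketch of the idea in PyTorch, assuming the common setup of sinusoidal absolute position encodings and a single random offset shared by all tokens in a sequence at training time; the function names and the max_shift hyperparameter are illustrative and not taken from the authors' code.

```python
# Minimal sketch of SHAPE (shifted absolute position embedding).
# Assumption: one random offset per sequence is added to all absolute
# positions during training; no shift is applied at inference time.
import math
import torch


def sinusoidal_encoding(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Standard sinusoidal encoding evaluated at arbitrary integer positions.

    positions: (batch, seq_len) integer tensor -> returns (batch, seq_len, d_model).
    """
    inv_freq = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32)
        * (-math.log(10000.0) / d_model)
    )
    angles = positions.unsqueeze(-1).float() * inv_freq  # (batch, seq_len, d_model/2)
    enc = torch.zeros(*positions.shape, d_model)
    enc[..., 0::2] = torch.sin(angles)
    enc[..., 1::2] = torch.cos(angles)
    return enc


def shape_positions(batch: int, seq_len: int, max_shift: int, training: bool) -> torch.Tensor:
    """Absolute positions, randomly shifted by a per-sequence offset during training."""
    pos = torch.arange(seq_len).unsqueeze(0).expand(batch, -1)  # (batch, seq_len)
    if training:
        # One offset per sequence: every token in a sequence is shifted by the
        # same amount, so pairwise distances between tokens are preserved.
        k = torch.randint(0, max_shift + 1, (batch, 1))
        pos = pos + k
    return pos


# Usage: add the (possibly shifted) positional encoding to token embeddings.
token_emb = torch.randn(2, 10, 512)                        # (batch, seq_len, d_model)
pos = shape_positions(batch=2, seq_len=10, max_shift=100, training=True)
x = token_emb + sinusoidal_encoding(pos, d_model=512)
```

Because all positions in a sequence are shifted by the same offset, relative distances between tokens are unchanged; this is the shift-invariance property the paper targets, and it is what allows the model to generalize to positions (and hence lengths) not seen during training.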

Citation (APA)
Kiyono, S., Kobayashi, S., Suzuki, J., & Inui, K. (2021). SHAPE: Shifted Absolute Position Embedding for Transformers. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 3309–3321). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.266
