Position Information in Transformers: An Overview

Abstract

Transformers are arguably the main workhorse in recent natural language processing research. By definition, a Transformer is invariant with respect to reordering of the input. However, language is inherently sequential, and word order is essential to the semantics and syntax of an utterance. In this article, we provide an overview and theoretical comparison of existing methods to incorporate position information into Transformer models. The objectives of this survey are to (1) showcase that position information in Transformers is a vibrant and extensive research area; (2) enable the reader to compare existing methods by providing a unified notation and systematization of different approaches along important model dimensions; (3) indicate what characteristics of an application should be taken into account when selecting a position encoding; and (4) provide stimuli for future research.
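As an illustration of the problem the survey addresses (this sketch is not drawn from the article itself): because self-attention treats its input as an unordered set, position information is typically injected into the token representations. One widely known option is the absolute sinusoidal encoding of Vaswani et al. (2017), which such surveys commonly take as a baseline. A minimal NumPy sketch, assuming the standard formulation:

    import numpy as np

    def sinusoidal_position_encoding(seq_len: int, d_model: int) -> np.ndarray:
        """Absolute sinusoidal position encodings (Vaswani et al., 2017)."""
        positions = np.arange(seq_len)[:, None]        # shape (seq_len, 1)
        dims = np.arange(d_model)[None, :]             # shape (1, d_model)
        # Each pair of dimensions (2i, 2i+1) shares one frequency.
        angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
        angles = positions * angle_rates               # shape (seq_len, d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles[:, 0::2])          # even dimensions: sine
        pe[:, 1::2] = np.cos(angles[:, 1::2])          # odd dimensions: cosine
        return pe

    # Added to the token embeddings before the first self-attention layer,
    # these vectors break the model's invariance to input reordering.

This is only one family of approaches; the article's systematization also covers, among other dimensions, learned versus fixed and absolute versus relative encodings.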

Citation (APA)

Dufter, P., Schmitt, M., & Schütze, H. (2022). Position Information in Transformers: An Overview. Computational Linguistics, 48(3), 733–763. https://doi.org/10.1162/coli_a_00445
