Segatron: Segment-Aware Transformer for Language Modeling and Understanding

Abstract

Transformers are powerful for sequence modeling. Nearly all state-of-the-art language models and pre-trained language models are based on the Transformer architecture. However, the Transformer distinguishes sequential tokens only by their token position index. We hypothesize that the Transformer can generate better contextual representations from richer positional information. To verify this, we propose a segment-aware Transformer (Segatron), which replaces the original token position encoding with a combined position encoding of paragraph, sentence, and token. We first introduce the segment-aware mechanism into Transformer-XL, a popular Transformer-based language model with memory extension and relative position encoding. We find that our method further improves both the Transformer-XL base and large models, achieving 17.1 perplexity on the WikiText-103 dataset. We then investigate the masked language modeling pre-training task with Segatron. Experimental results show that BERT pre-trained with Segatron (SegaBERT) outperforms BERT with the vanilla Transformer on various NLP tasks, and outperforms RoBERTa on zero-shot sentence representation learning. Our code is available on GitHub.
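To make the idea concrete, the following is a minimal PyTorch sketch of a segment-aware embedding layer in the absolute-position (SegaBERT-style) setting: the single token-position embedding is replaced by the sum of paragraph, sentence, and token-in-sentence position embeddings. All names, dimensions, and index limits here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class SegmentAwareEmbedding(nn.Module):
    """Token embedding plus paragraph/sentence/token position embeddings."""

    def __init__(self, vocab_size=30522, hidden=768,
                 max_paragraphs=64, max_sentences=128, max_tokens=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        # Three separate position tables: paragraph index within the document,
        # sentence index within the paragraph, token index within the sentence.
        self.para_pos = nn.Embedding(max_paragraphs, hidden)
        self.sent_pos = nn.Embedding(max_sentences, hidden)
        self.tok_pos = nn.Embedding(max_tokens, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, input_ids, para_ids, sent_ids, tok_ids):
        # Each *_ids tensor has shape (batch, seq_len). The three position
        # embeddings are summed with the token embedding, replacing the
        # single token-position embedding of the vanilla Transformer.
        x = (self.tok(input_ids)
             + self.para_pos(para_ids)
             + self.sent_pos(sent_ids)
             + self.tok_pos(tok_ids))
        return self.norm(x)


# Usage sketch: in practice the paragraph/sentence/token indices would come
# from a tokenizer that tracks paragraph and sentence boundaries.
emb = SegmentAwareEmbedding()
ids = torch.randint(0, 30522, (2, 16))
para = torch.zeros(2, 16, dtype=torch.long)
sent = torch.zeros(2, 16, dtype=torch.long)
tok = torch.arange(16).repeat(2, 1)
out = emb(ids, para, sent, tok)  # shape (2, 16, 768)
```

For the Transformer-XL variant described in the abstract, the same paragraph/sentence/token decomposition would instead be applied to the relative position encoding rather than added as absolute embeddings.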

Citation (APA)

Bai, H., Shi, P., Lin, J., Xie, Y., Tan, L., Xiong, K., … Li, M. (2021). Segatron: Segment-Aware Transformer for Language Modeling and Understanding. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 14A, pp. 12526–12534). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i14.17485
