Adapting Pretrained Text-to-Text Models for Long Text Sequences

Abstract

We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task using spans of varying lengths. For the pretraining corpus, we find that randomly concatenating short documents from a large open-domain corpus yields better performance than using existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes.
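To make the pretraining recipe in the abstract more concrete, here is a minimal, hypothetical sketch (not the authors' code) of the data side of that recipe: long sequences are formed by concatenating randomly sampled short documents, and are then corrupted with masked spans of varying lengths in a T5-style fashion. All names and hyperparameters below (SENTINEL_BASE, TARGET_LENGTH, MASK_RATIO, SPAN_LENGTHS) are illustrative assumptions, not values from the paper.

```python
import random

SENTINEL_BASE = 32000      # assumed start of the sentinel-token ID range (illustrative)
TARGET_LENGTH = 8192       # assumed long-sequence pretraining length (illustrative)
MASK_RATIO = 0.15          # fraction of tokens to corrupt (illustrative)
SPAN_LENGTHS = (3, 8, 64)  # mix of short and long span lengths (illustrative)


def build_long_sequence(short_docs, target_len=TARGET_LENGTH):
    """Concatenate randomly sampled short documents until the target length is reached."""
    tokens = []
    while len(tokens) < target_len:
        tokens.extend(random.choice(short_docs))
    return tokens[:target_len]


def mask_spans(tokens, mask_ratio=MASK_RATIO, span_lengths=SPAN_LENGTHS):
    """Replace spans of varying lengths with sentinels; return (encoder input, decoder target)."""
    n_to_mask = int(len(tokens) * mask_ratio)
    masked = set()
    while len(masked) < n_to_mask:
        span = random.choice(span_lengths)
        start = random.randrange(0, len(tokens) - span)
        masked.update(range(start, start + span))

    inputs, targets, sentinel = [], [], SENTINEL_BASE
    i = 0
    while i < len(tokens):
        if i in masked:
            # A contiguous masked region becomes one sentinel in the input;
            # the target holds the sentinel followed by the original tokens.
            inputs.append(sentinel)
            targets.append(sentinel)
            while i < len(tokens) and i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets
```

As a usage sketch, `mask_spans(build_long_sequence(short_docs))` would produce one (input, target) training pair; the actual span-length distribution, masking rate, and sequence length used by the authors may differ.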

Citation (APA)

Xiong, W., Gupta, A., Toshniwal, S., Mehdad, Y., & Yih, W. T. (2023). Adapting Pretrained Text-to-Text Models for Long Text Sequences. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 5566–5578). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.370
