SCROLLS: Standardized CompaRison Over Long Language Sequences

67 citations · 80 Mendeley readers

Abstract

NLP benchmarks have largely focused on short texts, such as sentences and paragraphs, even though long texts comprise a considerable amount of natural language in the wild. We introduce SCROLLS, a suite of tasks that require reasoning over long texts. We examine existing long-text datasets, and handpick ones where the text is naturally long, while prioritizing tasks that involve synthesizing information across the input. SCROLLS contains summarization, question answering, and natural language inference tasks, covering multiple domains, including literature, science, business, and entertainment. Initial baselines, including Longformer Encoder-Decoder, indicate that there is ample room for improvement on SCROLLS. We make all datasets available in a unified text-to-text format and host a live leaderboard to facilitate research on model architecture and pretraining methods.
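The abstract notes that all SCROLLS datasets are released in a unified text-to-text format, so that one sequence-to-sequence model can be trained and evaluated across summarization, question answering, and NLI alike. A minimal sketch of that idea, with illustrative field names (the function and schema below are hypothetical, not the dataset's actual API):

```python
# Hypothetical sketch of a unified text-to-text framing: every task is
# reduced to an (input string -> target string) pair. Field names are
# illustrative assumptions, not the SCROLLS datasets' actual schema.

def to_text_to_text(task: str, example: dict) -> tuple[str, str]:
    """Cast a task-specific example into an (input, target) string pair."""
    if task == "question_answering":
        # Prepend the question to the long document as the model input.
        source = f"{example['question']}\n\n{example['document']}"
        target = example["answer"]
    elif task == "summarization":
        # The long document itself is the input; the summary is the target.
        source = example["document"]
        target = example["summary"]
    else:
        raise ValueError(f"unknown task: {task}")
    return source, target

src, tgt = to_text_to_text(
    "question_answering",
    {
        "question": "Who wrote the report?",
        "document": "A very long government report ...",
        "answer": "The committee",
    },
)
```

Casting heterogeneous tasks into one string-to-string interface is what lets a single leaderboard compare architectures and pretraining methods directly.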

Citation (APA)

Shaham, U., Segal, E., Ivgi, M., Efrat, A., Yoran, O., Haviv, A., … Levy, O. (2022). SCROLLS: Standardized CompaRison Over Long Language Sequences. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 12007–12021). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.823
