Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization

Abstract

Recent pre-trained language models (PLMs) achieve promising results on existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and fine-tuning datasets. Hence, the strong performance of PLMs may rely on parametric knowledge that is memorized during pre-training and fine-tuning. Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects their generalization performance on future data. In this work, we propose TEMPOSUM, a novel benchmark containing data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations for the research community on how to evaluate and improve the temporal generalization capability of text summarization models.

Citation (APA)

Cheang, C. S., Chan, H. P., Wong, D. F., Liu, X., Li, Z., Sun, Y., … Chao, L. S. (2023). Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 16205–16217). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.1007
