Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents

Abstract

We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers. Our method, FACTORSUM, performs this disentanglement by factorizing summarization into two steps via an energy function: (1) generation of abstractive summary views covering salient information in subsets of the input document (document views); (2) combination of these views into a final summary, following budget and content guidance. This guidance may come from different sources, including an advisor model such as BART or BigBird, or, in oracle mode, from the reference. This factorization achieves significantly higher ROUGE scores on multiple benchmarks for long document summarization, namely PubMed, arXiv, and GovReport. Notably, our model is effective for domain adaptation. When trained only on PubMed, it achieves a 46.29 ROUGE-1 score on arXiv, outperforming PEGASUS trained in-domain by a large margin. Our experimental results indicate that the performance gains are due to more flexible budget adaptation and to the processing of shorter contexts provided by partial document views.
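To make the two-step factorization concrete, below is a minimal sketch of the second step: greedily combining sentences from abstractive summary views into a final summary under budget and content guidance. This is an illustrative assumption, not the paper's implementation; the energy function, the unigram-overlap content score, and the budget-penalty weight lam are hypothetical stand-ins, and in FACTORSUM the summary views themselves are produced by a trained abstractive model.

from itertools import chain


def content_score(sentence: str, guidance: str) -> float:
    # Unigram overlap between a candidate sentence and the guidance text
    # (e.g. an advisor summary from BART/BigBird, or the reference in oracle mode).
    # A crude proxy for content guidance, assumed here for illustration.
    s, g = set(sentence.lower().split()), set(guidance.lower().split())
    return len(s & g) / max(len(s), 1)


def energy(summary: list[str], guidance: str, budget: int, lam: float = 0.02) -> float:
    # Lower is better: reward coverage of the guidance, penalize deviation
    # from the target length budget (in words). `lam` trades off the two.
    coverage = sum(content_score(sent, guidance) for sent in summary)
    length = sum(len(sent.split()) for sent in summary)
    return -coverage + lam * abs(length - budget)


def combine_views(summary_views: list[list[str]], guidance: str, budget: int) -> list[str]:
    # Greedily add the sentence (from any summary view) that most lowers the
    # energy; stop when no remaining sentence improves it.
    candidates = list(chain.from_iterable(summary_views))
    summary: list[str] = []
    current = energy(summary, guidance, budget)
    while candidates:
        best_sent, best_e = None, current
        for sent in candidates:
            e = energy(summary + [sent], guidance, budget)
            if e < best_e:
                best_sent, best_e = sent, e
        if best_sent is None:
            break
        summary.append(best_sent)
        candidates.remove(best_sent)
        current = best_e
    return summary

Because the views are generated over short subsets of the document, each generation step sees a manageable context, and the budget enters only at combination time, which is what allows the budget to adapt independently of content selection.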

Cite (APA)

Fonseca, M., Ziser, Y., & Cohen, S. B. (2022). Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 6341–6364). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.426
