SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization


Abstract

Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially through fine-tuning large pre-trained language models on the downstream dataset. These models are typically decoded with beam search to generate a single summary. However, the search space is very large, and due to exposure bias, such decoding is not optimal. In this paper, we show that it is possible to directly train a second-stage model to re-rank a set of summary candidates. Our mixture-of-experts SummaReranker learns to select a better candidate and consistently improves the performance of the base model. With a base PEGASUS, we push ROUGE scores by 5.44% on CNN-DailyMail (47.16 ROUGE-1), 1.31% on XSum (48.12 ROUGE-1) and 9.34% on Reddit TIFU (29.83 ROUGE-1), reaching a new state-of-the-art. Our code and checkpoints will be available at https://github.com/ntunlp/SummaReranker.
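The abstract describes a two-stage pipeline: a fine-tuned sequence-to-sequence model such as PEGASUS first generates a pool of summary candidates, and a second-stage re-ranker then scores the candidates and selects one. The sketch below illustrates this generate-then-select loop using the Hugging Face transformers API; the `score_candidates` function is a hypothetical placeholder for a trained re-ranker, not the paper's multi-task mixture-of-experts model.

```python
# Minimal sketch of the "generate candidates, then re-rank" pipeline described above.
# Assumptions: Hugging Face transformers with a public PEGASUS checkpoint; the
# re-ranker below is a hypothetical stand-in, NOT SummaReranker itself.
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-cnn_dailymail"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)


def generate_candidates(document: str, num_candidates: int = 8) -> list[str]:
    """Stage 1: decode a pool of summary candidates with beam search."""
    inputs = tokenizer(document, truncation=True, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_candidates,
        num_return_sequences=num_candidates,  # keep every beam as a candidate
        early_stopping=True,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


def score_candidates(document: str, candidates: list[str]) -> list[float]:
    """Stage 2 (placeholder): a trained re-ranker would score each
    (document, candidate) pair; here candidate length stands in for that score."""
    return [float(len(c.split())) for c in candidates]


def summarize(document: str) -> str:
    candidates = generate_candidates(document)
    scores = score_candidates(document, candidates)
    # Pick the highest-scoring candidate rather than trusting the top beam.
    best_score, best_candidate = max(zip(scores, candidates), key=lambda pair: pair[0])
    return best_candidate
```

The key design point, per the abstract, is that scoring candidates with an external model lets the second stage recover summaries that beam search ranks below the top beam.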

Citation (APA)

Ravaut, M., Joty, S., & Chen, N. F. (2022). SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 4504–4524). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.309
