Unsupervised Summarization Re-ranking

Abstract

With the rise of task-specific pre-training objectives, abstractive summarization models like PEGASUS offer appealing zero-shot performance on downstream summarization tasks. However, the performance of such unsupervised models still lags significantly behind their supervised counterparts. As in the supervised setup, we notice very high variance in quality among the summary candidates these models produce, yet only one candidate is kept as the summary output. In this paper, we propose to re-rank summary candidates in an unsupervised manner, aiming to close the performance gap between unsupervised and supervised models. Our approach improves the unsupervised PEGASUS by up to 7.27% and ChatGPT by up to 6.86% relative mean ROUGE across four widely-adopted summarization benchmarks, and achieves relative gains of 7.51% (up to 23.73% from XSum to WikiHow) averaged over 30 zero-shot transfer setups (fine-tuning on one dataset, evaluating on another).
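
The abstract does not detail the re-ranking features themselves, so the sketch below illustrates only the general pipeline: generate a diverse pool of candidates from a zero-shot PEGASUS checkpoint, then keep the candidate with the highest unsupervised score. The scoring signal shown here (mean ROUGE overlap with the source document used as a pseudo-reference), the helper name rerank_summary, the checkpoint choice, and the decoding settings are all illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch of unsupervised summary re-ranking (not the paper's
# exact implementation): generate candidates with zero-shot PEGASUS via
# diverse beam search, then keep the candidate with the highest ROUGE
# overlap against the source document, used as a pseudo-reference.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
from rouge_score import rouge_scorer

model_name = "google/pegasus-large"  # pre-trained-only (zero-shot) checkpoint
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)


def rerank_summary(source: str, num_candidates: int = 8) -> str:
    """Return the candidate summary that best overlaps with the source."""
    inputs = tokenizer(source, truncation=True, max_length=1024, return_tensors="pt")
    # Diverse beam search yields a varied candidate pool worth re-ranking.
    outputs = model.generate(
        **inputs,
        num_beams=num_candidates,
        num_beam_groups=num_candidates,
        diversity_penalty=1.0,
        num_return_sequences=num_candidates,
        max_length=128,
    )
    candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)

    # Unsupervised score: mean ROUGE F1 of the candidate vs. the source.
    def score(candidate: str) -> float:
        rouge = scorer.score(source, candidate)
        return sum(m.fmeasure for m in rouge.values()) / len(rouge)

    return max(candidates, key=score)
```

Diverse beam search is chosen here because near-identical beams would give the re-ranker nothing to choose between; any multi-candidate decoding strategy (sampling, vanilla beam search with many beams) could be substituted.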

Cite (APA)

Ravaut, M., Joty, S., & Chen, N. F. (2023). Unsupervised summarization re-ranking. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 8341–8376). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.529
