UNISUMM and SUMMZOO: Unified Model and Diverse Benchmark for Few-Shot Summarization


Abstract

The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization. However, despite the emergence of many summarization tasks and datasets, the current training paradigm for few-shot summarization systems ignores potentially shareable knowledge in heterogeneous datasets. To this end, we propose UNISUMM, a unified few-shot summarization model that is pre-trained on multiple summarization tasks and can be prefix-tuned to excel at any few-shot summarization task. Meanwhile, to better evaluate few-shot summarizers under the principles of diversity and robustness, we assemble and release a new benchmark, SUMMZOO. It consists of 8 summarization tasks, with multiple sets of few-shot samples for each task, covering diverse domains. Experimental results and analysis show that UNISUMM outperforms strong baselines by a large margin across all sub-tasks in SUMMZOO under both automatic and human evaluation, and achieves results comparable to a GPT-3.5 model in human evaluation.
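The prefix-tuning idea described above can be sketched as follows. This is a minimal illustrative example, not UNISUMM's actual implementation: the pre-trained backbone's parameters stay frozen, and only a small set of task-specific "prefix" vectors, prepended to the input sequence, is updated for each few-shot task. All dimensions and token names here are hypothetical.

```python
import random

random.seed(0)

EMB_DIM = 8      # toy embedding size (hypothetical)
PREFIX_LEN = 4   # number of learnable prefix vectors per task

def make_vector(dim):
    return [random.uniform(-0.1, 0.1) for _ in range(dim)]

# Frozen backbone embeddings: a stand-in for the multi-task
# pre-trained summarizer, never updated during few-shot tuning.
backbone_vocab = {tok: make_vector(EMB_DIM)
                  for tok in ["summarize", "the", "dialogue"]}

# Trainable per-task prefix: the only parameters touched
# when adapting to a new few-shot summarization task.
prefix = [make_vector(EMB_DIM) for _ in range(PREFIX_LEN)]

def encode(tokens):
    """Prepend the task prefix to the frozen token embeddings."""
    return prefix + [backbone_vocab[t] for t in tokens]

seq = encode(["summarize", "the", "dialogue"])
trainable = PREFIX_LEN * EMB_DIM            # 32 prefix parameters
frozen = len(backbone_vocab) * EMB_DIM      # 24 frozen parameters
print(len(seq), trainable, frozen)          # 7 32 24
```

Because only the prefix is trained, adapting to a new task touches a tiny fraction of the parameters, which is what makes this approach practical in the few-shot setting.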

Citation (APA)

Chen, Y., Liu, Y., Xu, R., Yang, Z., Zhu, C., Zeng, M., & Zhang, Y. (2023). UNISUMM and SUMMZOO: Unified Model and Diverse Benchmark for Few-Shot Summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 12833–12855). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.718
