Enhancing Abstractiveness of Summarization Models through Calibrated Distillation


Abstract

Sequence-level knowledge distillation reduces the size of Seq2Seq models for more efficient abstractive summarization. However, it often leads to a loss of abstractiveness in the generated summaries. In this paper, we propose a novel approach named DisCal to enhance the level of abstractiveness (measured by n-gram overlap) without sacrificing the informativeness (measured by ROUGE) of generated summaries. DisCal exposes the student model to diverse pseudo summaries with two types of supervision. Firstly, the best pseudo summary is identified in terms of abstractiveness and informativeness and used for sequence-level distillation. Secondly, the ranks of the pseudo summaries are used to guide the student model to assign higher prediction scores to higher-ranked summaries. Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries. Code is publicly available at https://c1kj.short.gy/discal.
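The abstract describes a two-part training signal: sequence-level distillation on the best-ranked pseudo summary plus a calibration term that rewards the student for scoring higher-ranked candidates above lower-ranked ones. The sketch below illustrates one plausible form of such a loss, assuming the teacher's candidate summaries have already been generated and ordered (best first) by a combined abstractiveness/informativeness score; the function and parameter names (discal_style_loss, seq_scores, nll_best, rank_margin, calibration_weight) are illustrative and not taken from the released DisCal code.

import torch

def discal_style_loss(
    seq_scores: torch.Tensor,    # (num_candidates,) student sequence scores, ordered best -> worst
    nll_best: torch.Tensor,      # scalar: token-level NLL of the top-ranked pseudo summary
    rank_margin: float = 0.001,  # margin grows with the rank gap, as in typical calibration losses
    calibration_weight: float = 1.0,
) -> torch.Tensor:
    """Sequence-level distillation on the best candidate plus rank calibration (a sketch)."""
    num_candidates = seq_scores.size(0)
    calibration_loss = seq_scores.new_zeros(())
    # Pairwise margin loss: a higher-ranked candidate (index i) should receive
    # a higher student score than any lower-ranked candidate (index j > i).
    for i in range(num_candidates):
        for j in range(i + 1, num_candidates):
            margin = rank_margin * (j - i)
            calibration_loss = calibration_loss + torch.clamp(
                margin - (seq_scores[i] - seq_scores[j]), min=0.0
            )
    return nll_best + calibration_weight * calibration_loss

# Toy usage: four ranked candidates with length-normalized log-prob scores from the student.
scores = torch.tensor([-1.2, -1.0, -1.5, -1.7], requires_grad=True)
loss = discal_style_loss(scores, nll_best=torch.tensor(2.3))
loss.backward()

In this form, the first term plays the role of sequence-level distillation (maximum likelihood on the single best pseudo summary), while the pairwise margin term encodes the rank supervision; the relative weighting of the two terms is an assumed hyperparameter.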

Cite

APA

Song, H., Shalyminov, I., Su, H., Singh, S., Yao, K., & Mansour, S. (2023). Enhancing Abstractiveness of Summarization Models through Calibrated Distillation. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 7026–7036). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.468
