Controllable Abstractive Dialogue Summarization with Sketch Supervision

30 citations · 89 Mendeley readers

Abstract

In this paper, we aim to improve abstractive dialogue summarization quality and, at the same time, enable granularity control. Our model has two primary components and stages: 1) a two-stage generation strategy that first generates a preliminary summary sketch serving as the basis for the final summary; this sketch provides a weakly supervised signal in the form of pseudo-labeled interrogative pronoun categories and key phrases extracted using a constituency parser; 2) a simple strategy to control the granularity of the final summary, whereby our model can automatically determine or control the number of generated summary sentences for a given dialogue by predicting and highlighting different text spans from the source text. Our model achieves state-of-the-art performance on the largest dialogue summarization corpus, SAMSum, with a ROUGE-L score as high as 50.79. In addition, we conduct a case study and show human evaluation results and controllability competitive with human-annotated summaries.
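To make the sketch-supervision idea concrete, below is a minimal, hypothetical Python sketch of the weak labeling step: each utterance is tagged with an interrogative-pronoun category and its noun-phrase key phrases are extracted. The paper uses a full constituency parser and its own category inventory; here an NLTK regular-expression chunker stands in for the parser, and the names SKETCH_CATEGORIES and build_sketch are illustrative assumptions rather than the authors' code.

```python
# Sketch of weak "summary sketch" supervision (illustrative, not the authors' implementation).
# Assumes: a 5W1H-style category list and an NLTK noun-phrase chunker as a stand-in
# for the constituency parser described in the paper.
import nltk

# Requires NLTK data: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

SKETCH_CATEGORIES = {  # hypothetical pseudo-label inventory
    "who": "WHO", "what": "WHAT", "when": "WHEN",
    "where": "WHERE", "why": "WHY", "how": "HOW",
}

NP_GRAMMAR = "NP: {<DT>?<JJ>*<NN.*>+}"  # crude noun-phrase pattern
chunker = nltk.RegexpParser(NP_GRAMMAR)


def pseudo_label(utterance: str) -> str:
    """Assign an interrogative-pronoun category as a weak label."""
    for tok in nltk.word_tokenize(utterance):
        if tok.lower() in SKETCH_CATEGORIES:
            return SKETCH_CATEGORIES[tok.lower()]
    return "NONE"  # no interrogative pronoun found


def key_phrases(utterance: str) -> list[str]:
    """Extract noun-phrase key phrases (chunker as parser stand-in)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(utterance))
    tree = chunker.parse(tagged)
    return [" ".join(word for word, _ in subtree.leaves())
            for subtree in tree.subtrees(lambda t: t.label() == "NP")]


def build_sketch(dialogue: list[str]) -> list[str]:
    """One sketch line per utterance: CATEGORY | key phrases."""
    return [f"{pseudo_label(u)} | {', '.join(key_phrases(u)) or '-'}"
            for u in dialogue]


if __name__ == "__main__":
    demo = ["Where should we meet tomorrow?",
            "The new cafe near the station at noon."]
    print("\n".join(build_sketch(demo)))
```

In the paper's two-stage setup, sketch lines like these serve as intermediate targets before the final summary is generated; the sketch above only illustrates how such weak labels could be derived, not the generation model itself.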

Citation (APA)

Wu, C. S., Liu, L., Liu, W., Stenetorp, P., & Xiong, C. (2021). Controllable Abstractive Dialogue Summarization with Sketch Supervision. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 5108–5122). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.454
