An Exploratory Study on Long Dialogue Summarization: What Works and What's Next

31 citations · 94 Mendeley readers

Abstract

Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series. However, real-world dialogues pose a great challenge to current summarization models, as the dialogue length typically exceeds the input limits imposed by recent transformer-based pretrained models, and the interactive nature of dialogues makes relevant information more context-dependent and sparsely distributed than in news articles. In this work, we perform a comprehensive study on long dialogue summarization by investigating three strategies to deal with the lengthy input problem and locate relevant information: (1) extended transformer models such as Longformer, (2) retrieve-then-summarize pipeline models with several dialogue utterance retrieval methods, and (3) hierarchical dialogue encoding models such as HMNet. Our experimental results on three long dialogue datasets (QMSum, MediaSum, SummScreen) show that the retrieve-then-summarize pipeline models yield the best performance. We also demonstrate that the summary quality can be further improved with a stronger retrieval model and pretraining on proper external summarization datasets.
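To make the retrieve-then-summarize idea concrete, the sketch below pairs a simple TF-IDF utterance retriever (scikit-learn) with an off-the-shelf pretrained summarizer (Hugging Face transformers). This is a minimal, hypothetical pipeline, not the paper's implementation: the TF-IDF retriever, the facebook/bart-large-cnn checkpoint, and the top_k cutoff are illustrative stand-ins for the stronger retrieval models and pretraining choices the study actually evaluates.

```python
# Minimal retrieve-then-summarize sketch for query-based long dialogue
# summarization. All model and parameter choices here are illustrative,
# not the configuration used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline


def retrieve_utterances(query, utterances, top_k=20):
    """Rank dialogue utterances by TF-IDF cosine similarity to the query."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([query] + utterances)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    # Keep the top-k utterances, restored to original dialogue order so the
    # shortened input still reads as a coherent conversation.
    top = sorted(scores.argsort()[::-1][:top_k])
    return [utterances[i] for i in top]


def summarize_dialogue(query, utterances):
    """Retrieve query-relevant utterances, then summarize the shortened input."""
    relevant = retrieve_utterances(query, utterances)
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    # truncation=True guards against the retrieved text still exceeding the
    # summarizer's input limit.
    result = summarizer(" ".join(relevant), truncation=True)
    return result[0]["summary_text"]


# Hypothetical usage with a query-focused dataset such as QMSum:
# summary = summarize_dialogue("What did the group decide about the budget?",
#                              transcript_utterances)
```

The key design point the abstract highlights is that retrieval shrinks the input to fit the summarizer's length limit while preserving the sparsely distributed, query-relevant utterances; swapping the TF-IDF ranker for a stronger retrieval model is one of the improvements the paper reports.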

Citation (APA)

Zhang, Y., Ni, A., Yu, T., Zhang, R., Zhu, C., Deb, B., … Radev, D. (2021). An Exploratory Study on Long Dialogue Summarization: What Works and What's Next. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4426–4433). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.377
