Narrate Dialogues for Better Summarization

4 citations · 21 Mendeley readers

Abstract

Dialogue summarization models aim to generate a concise and accurate summary for multi-party dialogue. The complexity of dialogue, including coreference, dialogue acts, and inter-speaker interactions, brings unique challenges to dialogue summarization. Most recent neural models achieve state-of-the-art performance following the pretrain-then-finetune recipe, where a large language model (LLM) is pretrained on large-scale single-speaker written text but later finetuned on multi-speaker dialogue text. To mitigate this gap between pretraining and finetuning, we propose several approaches to convert the dialogue into a third-person narrative style and show that the narration serves as a valuable annotation for LLMs. Empirical results on three benchmark datasets show that our simple approach achieves higher scores on ROUGE and a factual correctness metric.
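The core idea is the conversion step: each "Speaker: utterance" turn is rewritten as third-person narration before the text reaches the model, so the finetuning input looks more like the written prose seen during pretraining. The abstract does not detail the paper's actual conversion approaches, so the Python sketch below is only a rough illustration of the general idea; the turn format, the narrate_turn helper, and the pronoun table are all assumptions, not the authors' method.

import re

# Hypothetical first- to third-person rewrite table (an assumption for
# illustration; the paper's conversion rules are not given in the abstract).
PRONOUN_MAP = {
    "i": "{speaker}",
    "me": "{speaker}",
    "my": "{speaker}'s",
    "we": "they",
    "us": "them",
    "our": "their",
}

def narrate_turn(turn):
    """Rewrite one 'Speaker: utterance' line as third-person narration."""
    speaker, _, utterance = turn.partition(":")
    speaker, utterance = speaker.strip(), utterance.strip()

    def swap(match):
        word = match.group(0)
        # Replace first-person words with the speaker's name; keep the rest.
        return PRONOUN_MAP.get(word.lower(), word).format(speaker=speaker)

    rewritten = re.sub(r"[A-Za-z']+", swap, utterance)
    return f"{speaker} says that {rewritten.rstrip('.')}."

dialogue = [
    "Alice: I will send the report tonight.",
    "Bob: Thanks, my team needs it tomorrow.",
]
print(" ".join(narrate_turn(t) for t in dialogue))
# -> Alice says that Alice will send the report tonight.
#    Bob says that Thanks, Bob's team needs it tomorrow.

A learned rewriter (for example, a seq2seq model trained on dialogue-narrative pairs) would handle pronouns, tense, and coreference far better than these string rules; the sketch only shows where the narration step fits in the pipeline.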

Citation (APA)

Xu, R., Zhu, C., & Zeng, M. (2022). Narrate Dialogues for Better Summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 3565–3575). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.261
