Topic-Informed Dialogue Summarization using Topic Distribution and Prompt-based Modeling

Abstract

Handling multiple topics is an important issue in dialogue summarization because dialogues, unlike documents, are prone to topic drift. We therefore propose a new dialogue summarization model that reflects the dialogue's topic distribution so that every topic present in the dialogue is considered. First, the distribution of dialogue topics is estimated by an effective topic discovery model. A topic-informed prompt then transfers the estimated topic distribution information to the encoder and decoder output vectors. Finally, a topic extractor estimates the summary's topic distribution from the decoder's output context vector so that it can be compared with the dialogue's topic distribution. To account for the proportion of each topic appearing in the dialogue, the extractor is trained to reduce the difference between the dialogue and summary distributions. Experimental results on SAMSum and DialogSum show that our model outperforms state-of-the-art methods on ROUGE scores, and human evaluation confirms that our framework generates comprehensive summaries.
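The core training signal described above, reducing the gap between the dialogue's topic distribution and the summary's topic distribution, can be sketched as a divergence loss. The abstract does not name the exact divergence used, so the KL divergence below, along with the example distributions, is an illustrative assumption rather than the paper's implementation:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete topic distributions.

    A small epsilon guards against log(0) for topics with zero mass.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical topic proportions (not taken from the paper's data):
dialogue_topics = [0.5, 0.3, 0.2]   # estimated from the full dialogue
summary_topics = [0.7, 0.2, 0.1]    # estimated from the generated summary

# Training the topic extractor would minimize this term so the summary's
# topic proportions track those of the dialogue.
topic_loss = kl_divergence(dialogue_topics, summary_topics)
```

A summary that drops a minor topic entirely (e.g. assigns it near-zero mass) incurs a larger loss, which is how matching the distributions encourages coverage of all topics, not just the dominant one.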

Citation (APA)

You, J., & Ko, Y. (2023). Topic-Informed Dialogue Summarization using Topic Distribution and Prompt-based Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 5657–5663). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.376
