Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization

Citations: 5
Readers (Mendeley): 50

Abstract

In this paper, we focus on improving the quality of summaries generated by neural abstractive dialogue summarization systems. Although pre-trained language models produce fluent, well-formed output, summarizing a conversation among multiple participants remains challenging because the summary must describe both the overall situation and the actions of each speaker. This paper proposes self-supervised strategies for speaker-focused post-correction in abstractive dialogue summarization. Specifically, our model first discriminates which type of speaker correction a draft summary requires and then generates a revised summary according to that type. Experimental results show that our proposed method adequately corrects draft summaries, and the revised summaries are significantly improved in both quantitative and qualitative evaluations.
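The two-stage pipeline described above can be sketched in miniature. The following is purely illustrative: the paper uses neural models for both stages, whereas here simple string rules stand in for the classifier and the reviser, and all names, heuristics, and the `speaker_map` are assumptions introduced for the example.

```python
# Toy sketch of speaker-focused post-correction (illustrative only):
# Stage 1 classifies which speaker correction a draft summary needs;
# Stage 2 revises the draft according to the requested correction type.

# Small stop set so capitalized function words are not mistaken for names
# (a crude heuristic; a neural classifier would learn this instead).
COMMON_WORDS = {"The", "A", "An", "He", "She", "They", "It"}

def classify_correction_type(draft, dialogue_speakers):
    """Stage 1: flag 'speaker_swap' if the draft mentions a name that
    never appears among the dialogue's speakers, else 'none'."""
    tokens = [t.strip(".,!?") for t in draft.split()]
    suspects = [
        t for t in tokens
        if t.istitle() and t not in dialogue_speakers and t not in COMMON_WORDS
    ]
    return "speaker_swap" if suspects else "none"

def revise(draft, correction_type, speaker_map):
    """Stage 2: rewrite the draft per the requested correction.
    speaker_map (wrong name -> right name) is a stand-in for the
    neural reviser, which would generate the corrected summary."""
    if correction_type != "speaker_swap":
        return draft
    for wrong, right in speaker_map.items():
        draft = draft.replace(wrong, right)
    return draft

speakers = ["Amanda", "Jerry"]
draft = "Tom will pick up the cake."
kind = classify_correction_type(draft, speakers)   # -> "speaker_swap"
fixed = revise(draft, kind, {"Tom": "Jerry"})      # -> "Jerry will pick up the cake."
```

The design point mirrors the paper's framing: detecting *which* speaker error is present is a separate, easier decision than generating the corrected text, so the two steps are kept as distinct stages.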

Citation (APA)

Lee, D., Lim, J., Whang, T., Lee, C., Cho, S., Pak, M., & Lim, H. (2021). Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization. In 3rd Workshop on New Frontiers in Summarization, NewSum 2021 - Workshop Proceedings (pp. 65–73). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.newsum-1.8
