TSDG: Content-aware neural response generation with two-stage decoding process


Abstract

Neural response generation models have achieved remarkable progress in recent years but tend to yield irrelevant and uninformative responses. One reason is that encoder-decoder models typically use a single decoder to generate the complete response in one pass. Such decoders tend to produce high-frequency function words, which carry little semantic information, rather than low-frequency content words, which carry more. To address this issue, we propose a content-aware model with a two-stage decoding process, named Two-stage Dialogue Generation (TSDG). We separate the decoding of content words from that of function words, so that content words can be generated independently without interference from function words. Experimental results on two datasets indicate that our model significantly outperforms several competitive generative models in both automatic and human evaluation.
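The abstract's two-stage pipeline can be illustrated with a minimal sketch: stage one decodes only content words from the query, and stage two realizes the full response around them. All vocabularies, rules, and function names below are illustrative assumptions, not the authors' implementation (which uses neural decoders):

```python
# Schematic sketch of a two-stage decoding pipeline in the spirit of TSDG.
# The real model uses neural decoders; here each stage is faked with simple
# lookups purely to show the data flow between the two stages.

FUNCTION_WORDS = {"i", "a", "the", "to", "is", "you", "do", "not"}

def stage1_content_decoder(query_tokens):
    """Stage 1: decode content words only, free of function-word interference.
    A hypothetical lookup table stands in for the content-word decoder."""
    content_vocab = {
        "weather": ["sunny", "warm"],
        "movie": ["comedy", "tonight"],
    }
    content = []
    for tok in query_tokens:
        content.extend(content_vocab.get(tok, []))
    return content

def stage2_response_decoder(query_tokens, content_words):
    """Stage 2: generate the complete response conditioned on the query and
    the stage-1 content words, weaving function words around them."""
    if not content_words:
        return ["i", "do", "not", "know"]
    return ["it", "is"] + content_words  # toy surface realization

def generate(query):
    toks = query.lower().split()
    content = stage1_content_decoder(toks)  # content words first
    return " ".join(stage2_response_decoder(toks, content))

print(generate("how is the weather"))  # -> "it is sunny warm"
```

The key design point the sketch mirrors is that stage one never competes with high-frequency function words, so low-frequency content words are committed to before surface realization begins.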

Citation (APA)

Kong, J., Zhong, Z., Cai, Y., Wu, X., & Ren, D. (2020). TSDG: Content-aware neural response generation with two-stage decoding process. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 2121–2126). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.192
