Boosting dialog response generation

25 Citations · 166 Readers (Mendeley)

Abstract

Neural models have become one of the most important approaches to dialog response generation. However, they still tend to repeatedly generate the most common, generic responses in the corpus. To address this problem, we designed an iterative training process and ensemble method based on boosting. We combined our method with different training and decoding paradigms as the base model, including mutual-information-based decoding and reward-augmented maximum likelihood learning. Empirical results show that our approach can significantly improve the diversity and relevance of the responses generated by all base models, backed by both objective measurements and human evaluation.
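The abstract describes an iterative, boosting-based training and ensembling scheme, but gives no algorithmic detail. As a purely generic illustration (not the paper's actual algorithm), the classic boosting loop it builds on can be sketched as: train a base model, upweight the training examples the current model handles poorly, retrain on the reweighted data, and collect the resulting models into an ensemble. The toy `train_base_model` below is a hypothetical stand-in for a real seq2seq dialog model:

```python
# Generic boosting-style iterative training sketch. All names here
# (train_base_model, boost) are illustrative, not from the paper.

def train_base_model(data, weights):
    """Hypothetical base-model trainer: memorizes the highest-weight
    response for each context (a stand-in for training a seq2seq model)."""
    best = {}
    for (ctx, resp), w in zip(data, weights):
        if ctx not in best or w > best[ctx][1]:
            best[ctx] = (resp, w)
    return {ctx: resp for ctx, (resp, _) in best.items()}

def boost(data, rounds=3):
    """Train `rounds` base models, reweighting the data between rounds
    so that later models focus on examples earlier models got wrong."""
    n = len(data)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        model = train_base_model(data, weights)
        ensemble.append(model)
        # Upweight misclassified examples, downweight correct ones,
        # then renormalize so the weights form a distribution.
        weights = [
            w * (2.0 if model.get(ctx) != resp else 0.5)
            for w, (ctx, resp) in zip(weights, data)
        ]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble
```

With a corpus where one context has a dominant response and a rarer one, later rounds shift weight toward the rarer response, which is the intuition behind using boosting to escape generic replies.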

Citation (APA)

Du, W., & Black, A. W. (2020). Boosting dialog response generation. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 38–43). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1005
