SMRT chatbots: Improving non-task-oriented dialog with simulated multiple reference training


Abstract

Non-task-oriented dialog models suffer from poor quality and non-diverse responses. To overcome limited conversational data, we apply Simulated Multiple Reference Training (SMRT; Khayrallah et al., 2020), and use a paraphraser to simulate multiple responses per training prompt. We find SMRT improves over a strong Transformer baseline as measured by human and automatic quality scores and lexical diversity. We also find SMRT is comparable to pretraining in human evaluation quality, and outperforms pretraining on automatic quality and lexical diversity, without requiring related-domain dialog data.
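The core idea, as the abstract describes it, is to replace each single gold response with paraphrases sampled at training time, so the model sees multiple simulated references per prompt. A minimal sketch of that data-side step, assuming a paraphraser function (`toy_paraphrase` below is a hypothetical stand-in for a trained paraphrase model, not the paper's implementation):

```python
import random

def smrt_targets(pairs, paraphrase, n_samples=1, seed=0):
    """For each (prompt, response) pair, sample paraphrases of the gold
    response as training targets, simulating multiple references."""
    rng = random.Random(seed)
    out = []
    for prompt, response in pairs:
        candidates = paraphrase(response)  # assumed paraphraser interface
        for _ in range(n_samples):
            out.append((prompt, rng.choice(candidates)))
    return out

# Toy paraphraser standing in for a real paraphrase model.
def toy_paraphrase(text):
    return [text, text.lower(), text + ", thanks"]

data = [("how are you?", "I am fine")]
augmented = smrt_targets(data, toy_paraphrase, n_samples=2)
```

In practice the paraphrases would be sampled fresh each epoch from a neural paraphraser, so the model rarely sees the same target twice for a prompt; that variability is what drives the diversity gains reported here.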

Citation
Khayrallah, H., & Sedoc, J. (2020). SMRT chatbots: Improving non-task-oriented dialog with simulated multiple reference training. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 4489–4505). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.403
