PLATO: Pre-trained dialogue generation model with discrete latent variable

Citations: 168
Mendeley readers: 353

Abstract

Pre-trained models have proven effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge grounded dialogues, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks of response generation and latent act recognition are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
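To make the "flexible attention" idea concrete, the sketch below builds a self-attention mask in which the latent token and dialogue context attend to each other bi-directionally, while response tokens attend only to the prefix and to earlier response positions (uni-directional, as needed for auto-regressive generation). This is an illustrative mock-up rather than the paper's implementation; the token ordering, function name, and mask layout are assumptions.

```python
import numpy as np

def build_attention_mask(num_latent: int, num_context: int, num_response: int) -> np.ndarray:
    """Illustrative self-attention mask in the spirit of PLATO's flexible attention.

    Latent and context tokens form a bi-directional prefix; response tokens
    can see the whole prefix but only earlier response tokens (causal),
    which supports auto-regressive response generation.
    """
    n = num_latent + num_context + num_response
    mask = np.zeros((n, n), dtype=bool)  # True = attention allowed

    prefix = num_latent + num_context
    # Bi-directional attention within the latent + context prefix.
    mask[:prefix, :prefix] = True
    # Response tokens attend to the entire prefix.
    mask[prefix:, :prefix] = True
    # Response tokens attend causally to themselves and earlier response tokens.
    mask[prefix:, prefix:] = np.tril(np.ones((num_response, num_response), dtype=bool))
    return mask

if __name__ == "__main__":
    # Example: 1 latent token, 3 context tokens, 4 response tokens.
    print(build_attention_mask(1, 3, 4).astype(int))
```

In this sketch the discrete latent variable would occupy the leading position of the prefix, so that response generation is conditioned on the sampled latent act while latent act recognition can read the full bi-directional prefix.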

Citation (APA)

Bao, S., He, H., Wang, F., Wu, H., & Wang, H. (2020). PLATO: Pre-trained dialogue generation model with discrete latent variable. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 85–96). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.9
