Generating relevant responses in a dialog is challenging: it requires not only proper modeling of the conversational context but also the ability to generate fluent sentences during inference. In this paper, we propose a two-step framework based on generative adversarial nets for generating conditioned responses. Our model first learns a meaningful representation of sentences by autoencoding, and then learns to map an input query to the response representation, which is in turn decoded as a response sentence. Both quantitative and qualitative evaluations show that our model generates more fluent, relevant, and diverse responses than existing state-of-the-art methods.
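The two-step pipeline at inference time can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the linear `encode`, `decode`, and `generate_response_latent` maps and all dimensions are placeholder assumptions standing in for the trained autoencoder and the adversarially trained query-to-response mapper.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, LATENT = 16, 8  # toy sizes, not from the paper

# Step 1 (after training): an autoencoder provides encode/decode maps
# between sentence embeddings and a latent space. Random linear maps
# stand in here for the trained encoder/decoder networks.
W_enc = rng.normal(size=(EMB, LATENT))
W_dec = rng.normal(size=(LATENT, EMB))

def encode(sentence_vec):
    """Map a sentence embedding to its latent representation."""
    return sentence_vec @ W_enc

def decode(z):
    """Map a latent code back to sentence-embedding space."""
    return z @ W_dec

# Step 2 (after adversarial training): a generator maps the query's
# latent code to a plausible response latent code.
W_gen = rng.normal(size=(LATENT, LATENT))

def generate_response_latent(z_query):
    return z_query @ W_gen

# Inference: query -> latent -> response latent -> decoded response.
query_vec = rng.normal(size=EMB)
z_query = encode(query_vec)
z_response = generate_response_latent(z_query)
response_vec = decode(z_response)
print(response_vec.shape)
```

Keeping the adversarial game in the latent space, rather than over discrete tokens, is what lets the decoder produce fluent sentences while the generator focuses on mapping queries to relevant response codes.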
Citation
Khan, K., Sahu, G., Balasubramanian, V., Mou, L., & Vechtomova, O. (2020). Adversarial Learning on the Latent Space for Diverse Dialog Generation. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5026–5034). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.441