Self-learning architecture for natural language generation


Abstract

In this paper, we propose a self-learning architecture for generating natural language templates for conversational assistants. Writing templates to cover every combination of slots in an intent is time-consuming and labor-intensive. To reduce the human labor required for template generation, we examine three models built on the proposed architecture for the IoT domain: a rule-based model, a sequence-to-sequence (Seq2Seq) model, and a semantically conditioned LSTM (SC-LSTM) model. We demonstrate the feasibility of template generation for the IoT domain using our self-learning architecture. In both automatic and human evaluation, the self-learning architecture outperforms previous work trained on a fully human-labeled dataset, which is promising for commercial conversational assistant solutions.
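The abstract's claim that hand-writing templates is labor-intensive follows from simple combinatorics: each intent needs a template for every combination of slots a user may fill, so the count grows exponentially in the number of slots. The sketch below illustrates this; the intent and slot names are hypothetical examples, not taken from the paper.

```python
from itertools import combinations

# Hypothetical slots for an IoT "set_device" intent (illustrative only).
slots = ["device", "location", "time", "mode"]

def slot_combinations(slot_names):
    """Enumerate every non-empty subset of slots an utterance may fill.

    A human template writer would need one template per subset,
    so the total is 2**n - 1 for n optional slots.
    """
    combos = []
    for r in range(1, len(slot_names) + 1):
        combos.extend(combinations(slot_names, r))
    return combos

combos = slot_combinations(slots)
print(len(combos))  # 2**4 - 1 = 15 templates for just four slots
```

With ten slots the count already exceeds a thousand, which is why the paper proposes generating templates automatically instead of authoring them by hand.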

Citation (APA)

Choi, H., Siddarth, K. M., Yang, H., Jeon, H., Hwang, I., & Kim, J. (2018). Self-learning architecture for natural language generation. In INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference (pp. 165–170). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-6520
