In natural language generation (NLG), the task is to generate utterances from a more abstract input, such as structured data. An added challenge is to generate utterances that accurately represent the input while reflecting the fluency and variety of human-generated text. In this paper, we report experiments with NLG models that can be used in task-oriented dialogue systems. We explore the use of additional input to the model to encourage diversity and control of outputs. While our submission does not rank highly on automated metrics, qualitative investigation of the generated utterances suggests that the use of additional information in neural network NLG systems is a promising research direction.
CITATION STYLE
Elder, H., Gehrmann, S., O’Connor, A., & Liu, Q. (2018). E2E NLG challenge submission: Towards controllable generation of diverse natural language. In INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference (pp. 457–462). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-6556