A tree-to-sequence model for neural NLG in task-oriented dialog


Abstract

Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems. Sequence-to-sequence models over flat meaning representations (MRs) have dominated this task, for example in the E2E NLG Challenge. Previous work has shown that a tree-structured MR can improve discourse-level structuring and sentence-level planning. In this work, we propose a tree-to-sequence model that uses a tree-LSTM encoder to leverage the tree structure of the input MR, and that further enhances decoding with a structure-enhanced attention mechanism. In addition, we explore combining these enhancements with constrained decoding to improve semantic correctness. Our method not only yields significant improvements over standard seq2seq baselines, but is also more data-efficient and generalizes better to hard scenarios.
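The paper itself gives the full architecture; as a rough illustration of the encoder side only, below is a minimal sketch of a child-sum tree-LSTM cell (Tai et al., 2015), the standard building block such a tree encoder could be based on, applied bottom-up over the MR tree. This is a sketch under assumptions, not the authors' implementation; the class and parameter names (ChildSumTreeLSTMCell, input_dim, hidden_dim) are illustrative.

```python
import torch
import torch.nn as nn


class ChildSumTreeLSTMCell(nn.Module):
    """One node update of a child-sum tree-LSTM (Tai et al., 2015).

    Hypothetical sketch: each MR tree node combines its own embedding
    with the states of its children, with one forget gate per child.
    """

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # input, output, and update gates share the summed child state
        self.W_iou = nn.Linear(input_dim, 3 * hidden_dim)
        self.U_iou = nn.Linear(hidden_dim, 3 * hidden_dim, bias=False)
        # a separate forget gate is computed from each child's hidden state
        self.W_f = nn.Linear(input_dim, hidden_dim)
        self.U_f = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (input_dim,) embedding of the current MR node
        # child_h, child_c: (num_children, hidden_dim); leaves pass
        # empty tensors of shape (0, hidden_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = (self.W_iou(x) + self.U_iou(h_sum)).chunk(3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        # per-child forget gates, broadcast over the child dimension
        f = torch.sigmoid(self.W_f(x).unsqueeze(0) + self.U_f(child_h))
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c
```

A tree-to-sequence encoder would apply this cell in post-order over the MR, so the root's hidden state summarizes the whole tree while per-node states remain available for the decoder's attention.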

Citation (APA)
Rao, J., Upasani, K., Balakrishnan, A., White, M., Kumar, A., & Subba, R. (2019). A tree-to-sequence model for neural NLG in task-oriented dialog. In Proceedings of the 12th International Conference on Natural Language Generation (INLG 2019) (pp. 95–100). Association for Computational Linguistics. https://doi.org/10.18653/v1/W19-8611
