Abstract
In task-oriented dialog, agents must generate both fluent natural-language responses and correct external actions, such as database queries and updates. We show that methods that achieve state-of-the-art performance on synthetic datasets perform poorly on real-world dialog tasks. We propose a hybrid model in which nearest-neighbor retrieval generates fluent responses while Sequence-to-Sequence (Seq2Seq) models ensure dialog coherence and generate accurate external actions. On an internal customer support dataset, the hybrid model achieves a 78% relative improvement in fluency and a 200% relative improvement in external call accuracy.
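The abstract does not specify how nearest-neighbor retrieval is implemented; as a minimal illustrative sketch, one can retrieve the stored agent response whose dialog context is most similar to the new user turn. The toy data, bag-of-words features, and cosine similarity below are all assumptions for illustration, not the paper's actual method or dataset.

```python
from collections import Counter
import math

# Toy (context, response) pairs standing in for training dialogs.
# Purely illustrative; the paper's customer support data is not public.
PAIRS = [
    ("i want to check my order status", "Sure, let me look up your order."),
    ("my package arrived damaged", "I'm sorry to hear that. I can start a replacement."),
    ("how do i reset my password", "You can reset it from the account settings page."),
]

def bow(text):
    """Bag-of-words vector as a word-count Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_response(user_turn):
    """Return the stored response whose context best matches the user turn."""
    q = bow(user_turn)
    best = max(PAIRS, key=lambda p: cosine(q, bow(p[0])))
    return best[1]

print(nearest_response("can you check the status of my order"))
# → Sure, let me look up your order.
```

Because the retrieved responses are written by humans, they are fluent by construction; the paper's contribution is pairing such retrieval with Seq2Seq models that handle coherence and external actions.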
Gangadharaiah, R., Narayanaswamy, B., & Elkan, C. (2018). What we need to learn if we want to do and not just talk. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 3, pp. 25–32). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-3004