A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection


Abstract

Recently, transformer language models have been applied to build both task-oriented and non-task-oriented dialogue systems. Although transformers perform well on most NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work addresses this limitation by embedding the model in an operational loop that blends natural language generation with symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and report a joint goal accuracy of 75.8% (ranking in the top half of DSTC8 entries), an intent accuracy of 97.4% (higher than previously reported results), and a 15% improvement in success rate over a baseline without symbolic injection. These promising results suggest that transformer language models can generate not only proper system responses but also symbolic representations that can be used to enhance the overall quality of dialogue management and to serve as scaffolding for complex conversational reasoning.
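
The operational loop the abstract describes can be pictured as follows. This is a minimal sketch, not the authors' implementation: every name in it (generate_lm, parse_symbolic, query_backend, and the <user>/<belief>/<db>/<system> context tags) is a hypothetical stand-in for whatever the paper's architecture actually uses. It only illustrates the core idea that the language model emits both a natural-language response and a serialized symbolic representation, and that the result of acting on that representation is injected back into the context for the next turn.

    from typing import Callable, Dict, List, Tuple

    def dialogue_loop(
        generate_lm: Callable[[str], str],            # transformer LM: context -> raw output
        parse_symbolic: Callable[[str], Tuple[str, Dict[str, str]]],  # split response / belief state
        query_backend: Callable[[Dict[str, str]], str],  # symbolic layer: belief state -> DB result
        user_turns: List[str],
    ) -> List[str]:
        """Run one dialogue, injecting symbolic lookups back into the LM context."""
        context = ""
        responses: List[str] = []
        for user_utterance in user_turns:
            context += f" <user> {user_utterance}"
            raw = generate_lm(context)
            # The LM is assumed to emit both a natural-language response and a
            # serialized symbolic representation (e.g., a belief state).
            response, belief_state = parse_symbolic(raw)
            # Symbolic injection: ground the next turn on an external lookup
            # driven by the extracted belief state.
            db_result = query_backend(belief_state)
            context += f" <belief> {belief_state} <db> {db_result} <system> {response}"
            responses.append(response)
        return responses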

Citation (APA)

Romero, O. J., Wang, A., Zimmerman, J., Steinfeld, A., & Tomasic, A. (2021). A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection. In SIGDIAL 2021 - 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 438–444). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.sigdial-1.46
