Incorporating joint embeddings into goal-oriented dialogues with multi-task learning


Abstract

Attention-based encoder-decoder neural network models have recently shown promising results in goal-oriented dialogue systems. However, these models struggle to reason over and incorporate stateful knowledge while preserving their end-to-end text-generation functionality. Since such models can benefit greatly from user-intent and knowledge-graph integration, in this paper we propose an RNN-based end-to-end encoder-decoder architecture that is trained on joint embeddings of the knowledge graph and the corpus. The model additionally integrates user-intent prediction alongside text generation, trained under a multi-task learning paradigm with a regularization term that penalizes generating the wrong entity. During inference, the model further performs a knowledge-graph entity lookup to guarantee that the generated output is consistent with the local knowledge graph. We evaluate the model using the BLEU score; the empirical results show that the proposed architecture can improve the performance of task-oriented dialogue systems.
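The two training-time ideas in the abstract (a multi-task loss over generation and intent, plus an entity penalty) and the inference-time entity lookup can be sketched as follows. This is a minimal illustration, not the paper's implementation: the loss weights, the `@`-prefix entity marker, and the exact-match lookup are all assumptions standing in for the learned joint embeddings and the real regularizer.

```python
def multitask_loss(gen_loss, intent_loss, entity_penalty,
                   intent_weight=0.5, entity_weight=0.1):
    """Weighted sum of the three objectives described in the abstract:
    text-generation loss, user-intent classification loss, and a
    regularization term penalizing wrong generated entities.
    The weights are illustrative hyperparameters, not values from the paper."""
    return gen_loss + intent_weight * intent_loss + entity_weight * entity_penalty


def constrain_to_kg(tokens, kg_entities):
    """Inference-time entity lookup: tokens marked as entities (here with a
    hypothetical '@' prefix) are only emitted if they exist in the local
    knowledge graph; a real system would use embedding similarity instead
    of the exact-match check used here."""
    out = []
    for tok in tokens:
        if tok.startswith("@"):          # hypothetical entity marker
            name = tok[1:]
            if name in kg_entities:      # entity confirmed in the local KG
                out.append(name)
            else:
                out.append(name)         # fall back to the raw surface form
        else:
            out.append(tok)
    return out
```

In this sketch the decoder's per-batch losses would be combined via `multitask_loss` during training, while `constrain_to_kg` post-processes the decoded token sequence at inference time.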

Citation (APA)

Kassawat, F., Chaudhuri, D., & Lehmann, J. (2019). Incorporating joint embeddings into goal-oriented dialogues with multi-task learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11503 LNCS, pp. 225–239). Springer Verlag. https://doi.org/10.1007/978-3-030-21348-0_15
