Personalized response generation for customer service agents

Abstract

Natural language generation is a critical component of dialogue systems, and many works have demonstrated the effectiveness and efficiency of the sequence-to-sequence (seq2seq) model for generation. The seq2seq model is a neural network that usually requires massive data to learn its parameters. For many small shops in customer service dialogue systems, no large dialogue dataset is available to train such a model, so the performance of the trained model cannot meet real application requirements. In this work, we present the Tensor Encoder Generative Model (TEGM), which pools the data of many shops in a customer service dialogue system and is expected to alleviate the problem of data insufficiency. A generator fully trained on this pooled data is capable of encoding the personalized features of each shop. Experimental results show that TEGM indeed improves performance compared to the baseline.
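To make the idea concrete, the sketch below shows one plausible way a shared seq2seq generator can be conditioned on a per-shop feature so that a single model trained on pooled data still produces shop-specific responses. This is a minimal illustration only: the abstract does not describe the tensor-encoder architecture, so the model name ShopConditionedSeq2Seq, the shop-embedding bridge, and all hyperparameters here are assumptions, not the authors' TEGM.

```python
import torch
import torch.nn as nn

class ShopConditionedSeq2Seq(nn.Module):
    """Hypothetical sketch of a shared seq2seq generator with a learned
    per-shop embedding. Personalization is modeled here by fusing the shop
    embedding into the decoder's initial state; TEGM's actual tensor-encoder
    mechanism is not specified in the abstract."""

    def __init__(self, vocab_size, num_shops, emb_dim=128, hid_dim=256, shop_dim=32):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.shop_emb = nn.Embedding(num_shops, shop_dim)       # per-shop personalized feature
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.bridge = nn.Linear(hid_dim + shop_dim, hid_dim)    # fuse shop feature into decoder init
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt_in, shop_id):
        # Encode the customer utterance with parameters shared across all shops.
        _, h = self.encoder(self.tok_emb(src))                  # h: (1, B, hid_dim)
        # Condition the decoder on the shop-specific embedding.
        shop = self.shop_emb(shop_id).unsqueeze(0)              # (1, B, shop_dim)
        h0 = torch.tanh(self.bridge(torch.cat([h, shop], dim=-1)))
        # Decode the agent response (teacher forcing during training).
        dec_out, _ = self.decoder(self.tok_emb(tgt_in), h0)
        return self.out(dec_out)                                # logits over the vocabulary

if __name__ == "__main__":
    model = ShopConditionedSeq2Seq(vocab_size=5000, num_shops=100)
    src = torch.randint(0, 5000, (4, 12))      # batch of customer utterances
    tgt_in = torch.randint(0, 5000, (4, 10))   # shifted agent responses
    shop_id = torch.randint(0, 100, (4,))      # which shop each dialogue belongs to
    print(model(src, tgt_in, shop_id).shape)   # torch.Size([4, 10, 5000])
```

The design choice illustrated here is that pooling data lets the encoder and decoder weights be learned from all shops jointly, while the small shop-embedding table carries whatever is specific to each shop, which is one common way to mitigate per-shop data scarcity.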

Citation (APA)

Cuihua, M., Guo, P., & Xin, X. (2018). Personalized response generation for customer service agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10878 LNCS, pp. 476–483). Springer Verlag. https://doi.org/10.1007/978-3-319-92537-0_55
