Personalized Response Generation with Tensor Factorization

Abstract

Personalized response generation is essential for more human-like conversations. However, how to model users' personalization information when no explicit persona descriptions or demographics are available remains under-investigated. To tackle the data sparsity problem and the huge number of users, we utilize tensor factorization to model users' personalization information from their posting histories. Specifically, we introduce a personalized response embedding for all question-user pairs and form them into a three-mode tensor, which is decomposed via Tucker decomposition. The personalized response embedding is fed to either the decoder of an LSTM-based Seq2Seq model or a transformer language model to help generate more personalized responses. To evaluate how personalized the generated responses are, we further propose a novel ranking-based metric called Per-Hits@k, which measures how likely the generated responses are to come from the corresponding users. Results on a large-scale English conversation dataset show that our proposed tensor-factorization-based models generate more personalized and higher-quality responses than baselines. We have publicly released our code at https://github.com/GT-SALT/personalized_response_generation.
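To make the core idea concrete, here is a minimal sketch (not the authors' released code) of Tucker decomposition applied to a three-mode question-user-embedding tensor, using the tensorly library. The tensor sizes, Tucker ranks, and the random data are illustrative assumptions only; in the paper's setting the tensor entries come from posting histories and are sparse.

```python
# Illustrative sketch: Tucker decomposition of a three-mode tensor of
# personalized response embeddings, one emb_dim vector per (question, user) pair.
# Shapes and ranks below are assumed for demonstration, not taken from the paper.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

n_questions, n_users, emb_dim = 200, 100, 64   # assumed sizes
ranks = (32, 16, 64)                           # assumed Tucker ranks per mode

# Dense random tensor stands in for the real (sparse) data,
# which would need masking or imputation in practice.
X = tl.tensor(np.random.randn(n_questions, n_users, emb_dim))

core, factors = tucker(X, rank=ranks)
Q, U, E = factors   # question, user, and embedding-dimension factor matrices

# Reconstruct the tensor and read off the personalized embedding
# for one (question, user) pair; this is the vector that would be
# fed to the Seq2Seq decoder or transformer LM in the paper's setup.
recon = tl.tucker_to_tensor((core, factors))
q_idx, u_idx = 5, 7
personalized_embedding = recon[q_idx, u_idx]
print(personalized_embedding.shape)  # (64,)
```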

Citation (APA)

Wang, Z., Luo, L., & Yang, D. (2021). Personalized Response Generation with Tensor Factorization. In GEM 2021 - 1st Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings (pp. 47–57). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.gem-1.5
