Co-attentive multi-task learning for explainable recommendation


Abstract

Despite their widespread adoption, recommender systems remain largely black boxes. Explaining why items are recommended has recently attracted increasing attention because explanations can enhance user trust and satisfaction. In this paper, we propose a co-attentive multi-task learning model for explainable recommendation. The model improves both the prediction accuracy and the explainability of recommendation by fully exploiting the correlations between the recommendation task and the explanation task. In particular, we design an encoder-selector-decoder architecture inspired by the human information-processing model in cognitive psychology, and we propose a hierarchical co-attentive selector to effectively model the knowledge transferred across the two tasks. Our model not only improves the prediction accuracy of the recommendation task, but also generates linguistic explanations that are fluent, useful, and highly personalized. Experiments on three public datasets demonstrate its effectiveness.
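The co-attention idea behind the hierarchical selector can be illustrated with a minimal NumPy sketch: two feature sets attend to each other through a shared affinity matrix, and each set is summarized by weights derived from that affinity. All names, shapes, and the max-pooling choice here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(U, V, W):
    """Co-attentive pooling of two feature sets (illustrative sketch).

    U: (n, d) features for one view (e.g., user-side review features)
    V: (m, d) features for the other view (e.g., item-side review features)
    W: (d, d) learnable affinity matrix
    Returns summary vectors u_hat, v_hat of shape (d,).
    """
    A = U @ W @ V.T                  # (n, m) pairwise affinity scores
    a_u = softmax(A.max(axis=1))     # attention over rows of U
    a_v = softmax(A.max(axis=0))     # attention over rows of V
    return a_u @ U, a_v @ V          # attention-weighted summaries

rng = np.random.default_rng(0)
U = rng.normal(size=(5, 8))
V = rng.normal(size=(7, 8))
W = rng.normal(size=(8, 8))
u_hat, v_hat = co_attention(U, V, W)
```

In the paper's multi-task setting, summaries like these would feed both the rating-prediction head and the explanation decoder; this sketch only shows the shared attention mechanism itself.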

Citation (APA)

Chen, Z., Wang, X., Xie, X., Wu, T., Bu, G., Wang, Y., & Chen, E. (2019). Co-attentive multi-task learning for explainable recommendation. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 2137–2143). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/296
