Explainable recommendation via interpretable feature mapping and evaluation of explainability

Abstract

Latent factor collaborative filtering (CF) has been a widely used technique for recommender systems, learning semantic representations of users and items. Recently, explainable recommendation has attracted much attention from the research community. However, a trade-off exists between the explainability and the performance of a recommendation, and metadata is often needed to alleviate the dilemma. We present a novel feature mapping approach that maps uninterpretable general features onto interpretable aspect features, achieving both satisfactory accuracy and explainability in recommendation by simultaneously minimizing the rating prediction loss and the interpretation loss. To evaluate explainability, we propose two new evaluation metrics specifically designed for aspect-level explanation using surrogate ground truth. Experimental results demonstrate strong performance in both recommendation and explanation, eliminating the need for metadata. Code is available at https://github.com/pd90506/AMCF.
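The joint objective described in the abstract can be made concrete. The PyTorch snippet below is a minimal sketch under stated assumptions, not the authors' AMCF implementation (see the linked repository for that): the class FeatureMappingCF, the mapping layer aspect_map, the helper joint_loss, and the trade-off weight lam are all hypothetical names, and binary aspect labels (e.g., movie genres) are assumed to be available as surrogate ground truth during training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMappingCF(nn.Module):
    """Latent-factor CF with a linear map from the general (uninterpretable)
    latent space onto an interpretable aspect space. A hypothetical sketch of
    the feature-mapping idea, not the AMCF architecture itself."""

    def __init__(self, n_users, n_items, n_latent=32, n_aspects=18):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, n_latent)
        self.item_emb = nn.Embedding(n_items, n_latent)
        # Maps a latent item vector to scores over human-readable aspects.
        self.aspect_map = nn.Linear(n_latent, n_aspects)

    def forward(self, users, items):
        u = self.user_emb(users)            # (batch, n_latent)
        v = self.item_emb(items)            # (batch, n_latent)
        rating = (u * v).sum(dim=-1)        # dot-product rating prediction
        aspect_logits = self.aspect_map(v)  # interpretable projection
        return rating, aspect_logits

def joint_loss(rating_pred, rating_true, aspect_logits, aspect_true, lam=0.5):
    """Simultaneous minimization of the rating prediction loss and the
    interpretation loss; lam is an assumed trade-off weight."""
    rating_loss = F.mse_loss(rating_pred, rating_true)
    interp_loss = F.binary_cross_entropy_with_logits(aspect_logits, aspect_true)
    return rating_loss + lam * interp_loss
```

At explanation time, applying aspect_map to an item's latent vector yields per-aspect scores that can be ranked to produce an aspect-level explanation of why the item was recommended, which is the sense in which the mapping makes the general features interpretable.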

Citation (APA)

Pan, D., Li, X., Li, X., & Zhu, D. (2020). Explainable recommendation via interpretable feature mapping and evaluation of explainability. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2690–2696). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/373
