Personalized Review-Oriented Explanations for Recommender Systems

Abstract

Explainable recommender systems aim to provide clear interpretations to a user regarding the recommended list of items. Explanations can take different formats to justify the recommendations, such as images, graphs, or text. We propose to use review-oriented explanations to support users in their decisions, since the reviews written by users contain crucial, detailed feature information. The model builds on advances in natural language processing and incorporates the helpfulness scores given to previous reviews to explain the recommended list of items produced by a latent factor model. We conducted empirical experiments on the Yelp and Amazon datasets, showing that our model improves the quality of the explanations. The model outperforms baseline models on NDCG@5, HitRatio@5, NDCG@10, and HitRatio@10 on the Yelp dataset, and comparable improvements were observed on the Amazon dataset for the same metrics.
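
The abstract reports ranking quality in terms of NDCG@K and HitRatio@K. As a minimal sketch of how these metrics are commonly computed for a single user's top-K list (the item identifiers and relevance set below are hypothetical, and the paper's exact evaluation protocol may differ), consider:

```python
import math

def hit_ratio_at_k(ranked_items, relevant_items, k):
    """HitRatio@K: 1 if any relevant item appears in the top-K ranking, else 0."""
    return 1.0 if any(item in relevant_items for item in ranked_items[:k]) else 0.0

def ndcg_at_k(ranked_items, relevant_items, k):
    """NDCG@K with binary relevance: DCG of the top-K ranking divided by the ideal DCG."""
    dcg = sum(
        1.0 / math.log2(rank + 2)  # rank is 0-based, so the discount is log2(rank + 2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: items ordered by a latent factor model's predicted scores
# for one user; the relevant set holds the held-out item the user actually chose.
ranked = ["item_42", "item_7", "item_13", "item_99", "item_3"]
relevant = {"item_13"}
print(hit_ratio_at_k(ranked, relevant, 5))  # 1.0 -> the relevant item is in the top 5
print(ndcg_at_k(ranked, relevant, 5))       # 0.5 -> hit at rank 3, i.e. 1 / log2(4)
```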

Cite

APA

Costa, F., & Dolog, P. (2019). Personalized Review-Oriented Explanations for Recommender Systems. In Lecture Notes in Business Information Processing (Vol. 372, pp. 147–169). Springer. https://doi.org/10.1007/978-3-030-35330-8_8
