Argumentation-based recommendations: Fantastic explanations and how to find them


Abstract

A significant problem of recommender systems is their inability to explain recommendations, resulting, in turn, in ineffective feedback from users and an inability to adapt to users' preferences. We propose a hybrid method for calculating predicted ratings, built upon an item/aspect-based graph with users' partially given ratings, that can be naturally used to provide explanations for recommendations, extracted from user-tailored Tripolar Argumentation Frameworks (TFs). We show that our method can be understood as a gradual semantics for TFs, exhibiting a desirable, albeit weak, property of balance. We also show experimentally that our method is competitive in generating correct predictions, compared with state-of-the-art methods, and illustrate how users can interact with the generated explanations to improve the quality of recommendations.
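To make the abstract's setup concrete, the sketch below shows a toy item/aspect graph treated as a tripolar framework (arguments connected by attacking, supporting and neutralising relations, seeded with partially given ratings) and a simple gradual valuation over it. The data structure, the aggregation rule and all names (TripolarFramework, gradual_value, the movie/aspect example) are illustrative assumptions, not the semantics or the balance property defined in the paper.

# Illustrative sketch only: a toy tripolar framework over items and aspects,
# with a hypothetical gradual valuation used to read off predicted ratings.
from dataclasses import dataclass, field


@dataclass
class TripolarFramework:
    arguments: set[str]                                          # items and aspects
    attackers: dict[str, set[str]] = field(default_factory=dict)
    supporters: dict[str, set[str]] = field(default_factory=dict)
    neutralisers: dict[str, set[str]] = field(default_factory=dict)
    base_score: dict[str, float] = field(default_factory=dict)   # partially given ratings in [0, 1]


def gradual_value(tf: TripolarFramework, arg: str, depth: int = 3) -> float:
    """Toy gradual valuation: shift the base score towards 1 under supporters,
    towards 0 under attackers, with neutralisers damping the shift.
    This rule is an assumption chosen only to show the shape of a gradual
    semantics on a tripolar graph, not the paper's definition."""
    if depth == 0:
        return tf.base_score.get(arg, 0.5)
    v0 = tf.base_score.get(arg, 0.5)
    sup = [gradual_value(tf, a, depth - 1) for a in tf.supporters.get(arg, ())]
    att = [gradual_value(tf, a, depth - 1) for a in tf.attackers.get(arg, ())]
    neu = len(tf.neutralisers.get(arg, ()))
    push = (sum(sup) - sum(att)) / (1 + len(sup) + len(att) + neu)
    # Clamp to [0, 1] so the value can be read back as a predicted rating.
    return min(1.0, max(0.0, v0 + (1 - v0) * max(push, 0.0) + v0 * min(push, 0.0)))


if __name__ == "__main__":
    tf = TripolarFramework(
        arguments={"movie", "acting", "plot", "length"},
        supporters={"movie": {"acting", "plot"}},
        attackers={"movie": {"length"}},
        base_score={"acting": 0.9, "plot": 0.7, "length": 0.2},
    )
    print(f"predicted rating for 'movie': {gradual_value(tf, 'movie'):.2f}")

Under this toy rule, the supporting aspects (acting, plot) and the attacking aspect (length) each pull the item's value in their own direction, which is the intuition behind the balance-style property the abstract mentions; an explanation for the recommendation can then point at exactly these supporters and attackers.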

Cite (APA)

Rago, A., Cocarascu, O., & Toni, F. (2018). Argumentation-based recommendations: Fantastic explanations and how to find them. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 1949–1955). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/269
