Abstract
Interpretable explanations for recommender systems and other machine learning models are crucial to gain user trust. Prior works that have focused on paths connecting users and items in a heterogeneous network have several limitations, such as discovering relationships rather than true explanations, or disregarding other users' privacy. In this work, we take a fresh perspective and present Prince: a provider-side mechanism to produce tangible explanations for end-users, where an explanation is defined to be a minimal set of actions performed by the user that, if removed, changes the recommendation to a different item. Given a recommendation, Prince uses a polynomial-time optimal algorithm for finding this minimal set of a user's actions from an exponential search space, based on random walks over dynamic graphs. Experiments on two real-world datasets show that Prince provides more compact explanations than intuitive baselines, and insights from a crowdsourced user study demonstrate the viability of such action-based explanations. We thus posit that Prince produces scrutable, actionable, and concise explanations, owing to its use of a user's own actions, counterfactual evidence, and minimal sets, respectively.
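The counterfactual definition above can be illustrated with a toy sketch: score items by Personalized PageRank (PPR) from the user over an interaction graph, then search for the smallest subset of the user's action edges whose removal flips the top recommendation. Note this sketch brute-forces the exponential search space that Prince's polynomial-time algorithm avoids, and the graph, node names, and parameters are invented for illustration only.

```python
from itertools import combinations

def personalized_pagerank(graph, source, alpha=0.85, iters=50):
    """PPR via power iteration on an adjacency-list dict; restarts at `source`."""
    scores = {n: 0.0 for n in graph}
    scores[source] = 1.0
    for _ in range(iters):
        nxt = {n: 0.0 for n in graph}
        for n, out in graph.items():
            if out:
                share = alpha * scores[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:  # dangling node: return its mass to the source
                nxt[source] += alpha * scores[n]
        nxt[source] += 1 - alpha  # teleport back to the user
        scores = nxt
    return scores

def recommend(graph, user, actions):
    """Top item by PPR score, excluding items the user has acted on."""
    acted = {item for _, item in actions}
    scores = personalized_pagerank(graph, user)
    candidates = [n for n in graph if n.startswith("item") and n not in acted]
    return max(candidates, key=lambda i: scores[i])

def minimal_counterfactual(graph, user, actions):
    """Smallest subset of the user's actions whose removal changes the
    recommendation. Brute-force over subsets by increasing size, so the
    first hit is minimal -- exponential, unlike Prince's actual algorithm."""
    original = recommend(graph, user, actions)
    for k in range(1, len(actions) + 1):
        for subset in combinations(actions, k):
            pruned = {n: [m for m in out
                          if (n, m) not in subset and (m, n) not in subset]
                      for n, out in graph.items()}
            remaining = [a for a in actions if a not in subset]
            if recommend(pruned, user, remaining) != original:
                return subset
    return None

# Toy undirected interaction graph (edges stored in both directions).
graph = {
    "u1": ["item1", "item2", "item3"],
    "u2": ["item1", "item2", "item4"],
    "u3": ["item2", "item5"],
    "item1": ["u1", "u2"],
    "item2": ["u1", "u2", "u3"],
    "item3": ["u1"],
    "item4": ["u2"],
    "item5": ["u3"],
}
actions = [("u1", "item1"), ("u1", "item2"), ("u1", "item3")]

print("original recommendation:", recommend(graph, "u1", actions))
print("minimal counterfactual action set:", minimal_counterfactual(graph, "u1", actions))
```

The returned action set is a scrutable explanation in the paper's sense: "had you not performed these actions, you would have been recommended a different item."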
Citation
Ghazimatin, A., Balalau, O., Roy, R. S., & Weikum, G. (2020). Prince: Provider-side interpretability with counterfactual explanations in recommender systems. In WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining (pp. 196–204). Association for Computing Machinery, Inc. https://doi.org/10.1145/3336191.3371824