Knowledge graphs (KGs) have become increasingly important for endowing modern recommender systems with the ability to generate traceable reasoning paths that explain the recommendation process. However, prior research rarely considers whether the derived explanations faithfully reflect the decision-making process. To the best of our knowledge, this is the first work to model and evaluate faithfully explainable recommendation under the framework of KG reasoning. Specifically, we propose neural logic reasoning for explainable recommendation (LOGER), which draws on interpretable logical rules to guide the path-reasoning process for explanation generation. Experiments on three large-scale datasets in the e-commerce domain demonstrate the effectiveness of our method in delivering high-quality recommendations while ascertaining the faithfulness of the derived explanations.
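The abstract's core idea of rule-guided path reasoning can be illustrated with a toy sketch. The following is a hypothetical minimal example, not the authors' LOGER implementation: entity names, relation names, and the rule set are all made up for illustration. A logical rule is treated as a sequence of relations; following it from a user over a small KG yields candidate items together with the paths that serve as explanations.

```python
# Hypothetical sketch of rule-guided KG path reasoning (not the paper's code).
# Toy KG: maps (head entity, relation) -> list of tail entities.
KG = {
    ("u1", "purchase"): ["i1"],
    ("i1", "also_bought"): ["i2", "i3"],
    ("i1", "described_by"): ["f1"],
    ("f1", "describes"): ["i3"],
}

# Interpretable logical rules: each is a relation sequence to follow from a user.
RULES = [
    ("purchase", "also_bought"),                # "users who bought X also bought Y"
    ("purchase", "described_by", "describes"),  # "shares a descriptive feature"
]

def rule_guided_paths(user, rules, kg):
    """Enumerate (recommended item, explanation path) pairs for a user."""
    results = []
    for rule in rules:
        frontier = [(user, [user])]  # (current entity, path so far)
        for rel in rule:
            frontier = [
                (tail, path + [rel, tail])
                for head, path in frontier
                for tail in kg.get((head, rel), [])
            ]
        results.extend(frontier)  # each surviving path explains its endpoint
    return results

recs = rule_guided_paths("u1", RULES, KG)
for item, path in recs:
    print(item, "via", " -> ".join(path))
```

In the full model, a neural module would score and select among such rules and paths rather than enumerating them exhaustively; this sketch only shows how a rule constrains the reasoning path that becomes the explanation.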
CITATION STYLE
Zhu, Y., Xian, Y., Fu, Z., de Melo, G., & Zhang, Y. (2021). Faithfully Explainable Recommendation via Neural Logic Reasoning. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 3083–3090). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.245