ExpScore: Learning Metrics for Recommendation Explanation

5 citations · 23 readers (Mendeley)

Abstract

Many information access and machine learning systems, including recommender systems, lack transparency and accountability. High-quality recommendation explanations are of great significance for enhancing the transparency and interpretability of such systems. However, evaluating the quality of recommendation explanations remains challenging due to the lack of human-annotated data and benchmarks. In this paper, we present a large explanation dataset named RecoExp, which contains thousands of crowdsourced ratings of perceived quality in explaining recommendations. To measure explainability in a comprehensive and interpretable manner, we propose ExpScore, a novel machine learning-based metric that incorporates the definition of explainability from various perspectives (e.g., relevance, readability, subjectivity, and sentiment polarity). Experiments demonstrate that ExpScore not only vastly outperforms existing metrics but also remains explainable itself. Both the RecoExp dataset and an open-source implementation of ExpScore will be released to the community. These resources and our findings can serve the public good for scholars as well as users of recommender systems.
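To make the idea concrete, a metric of this kind can be sketched as a weighted combination of explanation-quality features. The features and weights below are illustrative assumptions only, not the actual ExpScore model (which is learned from the RecoExp ratings); the lexicon and the scoring functions are hypothetical stand-ins.

```python
# Hypothetical sketch of a feature-based explanation-quality score.
# All features, lexicons, and weights are illustrative assumptions,
# not the learned ExpScore model described in the paper.

def relevance(explanation: str, item_description: str) -> float:
    """Jaccard token overlap between explanation and item description."""
    exp_tokens = set(explanation.lower().split())
    item_tokens = set(item_description.lower().split())
    if not exp_tokens or not item_tokens:
        return 0.0
    return len(exp_tokens & item_tokens) / len(exp_tokens | item_tokens)

def readability(explanation: str) -> float:
    """Crude readability proxy: shorter average word length reads easier."""
    words = explanation.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    return max(0.0, 1.0 - avg_len / 10.0)

# Tiny stand-in lexicon; a real system would use a proper resource.
SUBJECTIVE_WORDS = {"love", "great", "terrible", "amazing", "awful", "best"}

def subjectivity(explanation: str) -> float:
    """Fraction of words drawn from the subjective-word lexicon."""
    words = explanation.lower().split()
    if not words:
        return 0.0
    return sum(w in SUBJECTIVE_WORDS for w in words) / len(words)

# Assumed weights; in ExpScore these would be learned from human ratings.
WEIGHTS = {"relevance": 0.5, "readability": 0.3, "subjectivity": 0.2}

def exp_score(explanation: str, item_description: str) -> float:
    """Combine the per-perspective features into a single [0, 1] score."""
    feats = {
        "relevance": relevance(explanation, item_description),
        "readability": readability(explanation),
        "subjectivity": subjectivity(explanation),
    }
    return sum(WEIGHTS[name] * value for name, value in feats.items())
```

The point of learning the weights (rather than fixing them as above) is that the metric can then be inspected: each perspective's contribution to the final score remains visible, which is what keeps the metric itself explainable.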

Citation (APA)

Wen, B., Feng, Y., Zhang, Y., & Shah, C. (2022). ExpScore: Learning Metrics for Recommendation Explanation. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 3740–3744). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3512269
