Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems

Citations: 13
Mendeley readers: 58

Abstract

In today’s information age, recommender systems have become essential tools for filtering and personalizing the massive flow of data reaching users. However, the increasing complexity and opaque nature of these systems have raised concerns about transparency and user trust. A lack of explainability in recommendations can lead to ill-informed decisions and reduced confidence in these systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods such as LIME and SHAP to interpret the models’ decisions. The results show significant improvements in recommendation precision, together with a notable increase in users’ ability to understand and trust the suggestions the system provides. For example, incorporating these explainability techniques yielded a 3% increase in recommendation precision, demonstrating their added value for both performance and the user experience.
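To illustrate the kind of explanation LIME produces for a recommender, here is a minimal, self-contained sketch of the LIME idea (not the paper's actual implementation): perturb the features of one user–item pair, query a black-box scoring model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature attributions. The feature names and the stand-in `predict_score` model are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical black-box recommender: predicted rating for a user-item pair
# from features [user_avg_rating, item_avg_rating, genre_match].
# This stand-in model is an assumption, not the model from the study.
def predict_score(X):
    return 0.5 * X[:, 0] + 0.3 * X[:, 1] + 1.2 * X[:, 2]

rng = np.random.default_rng(0)
instance = np.array([4.0, 3.5, 1.0])  # the recommendation to explain

# LIME-style local explanation: sample perturbations around the instance
# and record the black-box model's scores for them.
perturbed = instance + rng.normal(scale=0.1, size=(500, 3))
scores = predict_score(perturbed)

# Weight samples by proximity to the instance (RBF kernel).
dists = np.linalg.norm(perturbed - instance, axis=1)
weights = np.exp(-(dists ** 2) / 0.05)

# Fit a weighted linear surrogate (with intercept) via least squares;
# its coefficients are the local feature attributions.
X_aug = np.hstack([perturbed, np.ones((500, 1))])
w = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(w[:, None] * X_aug, w * scores, rcond=None)

for name, c in zip(["user_avg_rating", "item_avg_rating", "genre_match"], coef[:3]):
    print(f"{name}: {c:+.2f}")
```

Because the stand-in model here is exactly linear, the surrogate recovers its coefficients; on a real recommender the attributions are only locally faithful, which is the trade-off LIME makes for model-agnostic explanations.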

Citation (APA)
Govea, J., Gutierrez, R., & Villegas-Ch, W. (2024). Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems. Frontiers in Artificial Intelligence, 7. https://doi.org/10.3389/frai.2024.1410790
