Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users

Abstract

The increasing use of complex Machine Learning models for decision-making has raised interest in explainable artificial intelligence (XAI). In this work, we focus on the effects of providing accessible and useful explanations to non-expert users. More specifically, we propose generic XAI design principles for contextualizing and allowing the exploration of explanations based on local feature importance. To evaluate the effectiveness of these principles for improving users' objective understanding and satisfaction, we conduct a controlled user study with 80 participants using 4 different versions of our XAI system, in the context of an insurance scenario. Our results show that the contextualization principles we propose significantly improve users' satisfaction and come close to having a significant effect on users' objective understanding. They also show that the exploration principles we propose improve users' satisfaction. On the other hand, the interaction of these principles does not appear to bring improvement on either dimension of users' understanding.
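To make the notion of "local feature importance" concrete, the sketch below computes per-feature contributions for a single prediction of a toy linear risk model in an insurance-like setting. The model, feature names, and weights are hypothetical illustrations for exposition only, not the authors' actual system or explanation method.

```python
# Minimal sketch of a local feature importance explanation.
# For a linear model score = bias + sum(w_i * x_i), the local importance of
# feature i for one instance is its signed contribution w_i * x_i.
# All parameters below are hypothetical.

def predict(features, weights, bias):
    """Toy linear risk score for a single applicant."""
    return bias + sum(weights[name] * value for name, value in features.items())

def local_importance(features, weights):
    """Signed contribution of each feature to this one prediction."""
    return {name: weights[name] * value for name, value in features.items()}

# Hypothetical model parameters and applicant.
weights = {"age": 0.02, "claims_last_year": 0.5, "car_value_k": 0.01}
applicant = {"age": 40, "claims_last_year": 2, "car_value_k": 25}

score = predict(applicant, weights, bias=0.1)
contributions = local_importance(applicant, weights)
# Rank features by absolute contribution, largest first, as an
# explanation interface might display them.
ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
```

For this applicant, the explanation would surface `claims_last_year` as the most influential feature; the contextualization and exploration principles studied in the paper concern how such a ranking is presented to and probed by non-expert users.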

Citation (APA)

Bove, C., Aigrain, J., Lesot, M. J., Tijus, C., & Detyniecki, M. (2022). Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users. In International Conference on Intelligent User Interfaces, Proceedings IUI (pp. 807–819). Association for Computing Machinery. https://doi.org/10.1145/3490099.3511139
