Evaluating Explainable AI (XAI) in Terms of User Gender and Educational Background

Abstract

Artificial intelligence (AI) and machine learning (ML) have become ubiquitous in the modern era. As more technologies come to depend on AI and ML, there is a growing push to use Explainable AI (XAI) to help human users understand a system's underlying decision processes. We report on a controlled laboratory experiment that compared different types of XAI explanations for a recommendation system, measuring participants' trust and understanding while controlling for their gender and educational background. We found statistically significant interactions for both gender and educational background. We therefore conclude that no single way of presenting explanations is best for all audiences, because both a person's gender and their educational and professional background shape their trust in, and understanding of, the system. In addition, although a number of publications have promoted visual word clouds as a way to present XAI, our participants rejected that type of explanation and preferred textual explanations.

Citation (APA)

Reeder, S., Jensen, J., & Ball, R. (2023). Evaluating Explainable AI (XAI) in Terms of User Gender and Educational Background. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14050 LNAI, pp. 286–304). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-35891-3_18
