Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence

Abstract

Biases in Artificial Intelligence (AI) systems and their outputs are an important issue that demands AI explainability. Despite the prevalence of AI applications, the general public is not necessarily equipped to understand how black-box algorithms work or how to deal with their biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases in high- and low-stakes situations. We discuss users' perceptions of and attributions about AI biases, as well as their desired levels and types of explainability. We found that personal relevance and boundaries, along with the level of stakes, are two major dimensions for developing user trust, especially in biased situations, and for informing XAI designs.

Citation (APA)

Yuan, C. W. T., Bi, N., Lin, Y. F., & Tseng, Y. H. (2023). Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3544548.3580945
