A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI


Abstract

Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance human understanding and human-AI team performance by providing users with necessary information about AI system behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including how to determine human informational needs. Drawing from the human factors literature, we propose a three-level framework for the development and evaluation of explanations about AI system behavior. Our proposed levels of XAI are based on the informational needs of human users, which can be determined using the levels of situation awareness (SA) framework from the human factors literature. Based on our levels of XAI framework, we also propose a method for assessing the effectiveness of XAI systems.

Citation (APA)

Sanneman, L., & Shah, J. A. (2020). A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12175 LNAI, pp. 94–110). Springer. https://doi.org/10.1007/978-3-030-51924-7_6
