Explainable Artificial Intelligence: What Do You Need to Know?


Abstract

In domains that involve high-risk, high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. Drawing on our research findings, we propose that explanations should be tailored to the role of the human interacting with the system and to the individual system components, so as to reflect their different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture this complexity of needs. Designing explainable AI systems therefore requires careful consideration of context, and within that the nature of both the human and AI components.

Citation (APA)

Hepenstal, S., & McNeish, D. (2020). Explainable Artificial Intelligence: What Do You Need to Know? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12196 LNAI, pp. 266–275). Springer. https://doi.org/10.1007/978-3-030-50353-6_20
