In domains that involve high-risk, high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. Drawing on our research findings, we propose that explanations should be tailored to the role of the human interacting with the system and to the individual system components, so that they reflect different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture this complexity of needs. Designing explainable AI systems therefore requires careful consideration of context and, within that context, of the nature of both the human and AI components.
Hepenstal, S., & McNeish, D. (2020). Explainable Artificial Intelligence: What Do You Need to Know? In Lecture Notes in Computer Science (Vol. 12196 LNAI, pp. 266–275). Springer. https://doi.org/10.1007/978-3-030-50353-6_20