Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users

Abstract

Organisations face growing legal requirements and ethical responsibilities to ensure that decisions made by their intelligent systems are explainable. However, provisioning an explanation is often application-dependent, causing an extended design phase and delayed deployment. In this paper we present an explainability framework formed of a catalogue of explanation methods, allowing integration into a range of projects within a telecommunications organisation. These methods are split into low-level explanations, high-level explanations and co-created explanations. We motivate and evaluate this framework using the specific case study of explaining the conclusions of field engineering experts to non-technical planning staff. Feedback from an iterative co-creation process and a qualitative evaluation indicates that this is a valuable development tool for use in future company projects.
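The paper itself does not include code, but the catalogue it describes is essentially a registry of explanation methods keyed by category and intended audience, which projects can query instead of designing explanations from scratch. A minimal illustrative sketch in Python, with entirely hypothetical class, field, and method names, might look like this:

```python
from dataclasses import dataclass, field
from enum import Enum


class Category(Enum):
    """The three explanation categories named in the abstract."""
    LOW_LEVEL = "low-level"      # e.g. feature attributions aimed at developers
    HIGH_LEVEL = "high-level"    # e.g. summaries aimed at domain experts
    CO_CREATED = "co-created"    # e.g. explanations designed with end users


@dataclass
class ExplanationMethod:
    name: str
    category: Category
    target_audience: str         # who the explanation is intended for


@dataclass
class Catalogue:
    """A registry that projects can query for suitable explanation methods."""
    methods: list[ExplanationMethod] = field(default_factory=list)

    def register(self, method: ExplanationMethod) -> None:
        self.methods.append(method)

    def for_audience(self, audience: str) -> list[ExplanationMethod]:
        return [m for m in self.methods if m.target_audience == audience]


# Hypothetical entries; the actual catalogue contents are described in the paper.
catalogue = Catalogue()
catalogue.register(ExplanationMethod("feature-importance", Category.LOW_LEVEL, "expert"))
catalogue.register(ExplanationMethod("natural-language-summary", Category.HIGH_LEVEL, "non-expert"))
catalogue.register(ExplanationMethod("co-created-template", Category.CO_CREATED, "non-expert"))

print([m.name for m in catalogue.for_audience("non-expert")])
# ['natural-language-summary', 'co-created-template']
```

The design choice the abstract implies is separation of explanation generation from the application: a project selects from the catalogue by audience (expert vs. non-expert) rather than building a bespoke explanation, which is what shortens the design phase.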

Citation (APA)

Martin, K., Liret, A., Wiratunga, N., Owusu, G., & Kern, M. (2019). Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11927 LNAI, pp. 309–324). Springer. https://doi.org/10.1007/978-3-030-34885-4_24
