Generating Purpose-Driven Explanations: The Case of Process Predictive Model Inspection

Abstract

Explainable AI is an emerging branch of data science that focuses on demystifying the complex computation logic of machine learning, with the aim of improving transparency, validity, and trust in automated decisions. While existing research focuses on building methods and techniques to explain ‘black-box’ models, little attention has been paid to how model explanations are generated. Effective model explanations are often driven by the purpose of explanation in a given problem context. In this paper, we propose a framework to support generating model explanations for the purpose of model inspection in the context of predictive process analytics. We build a visual explanation platform as an implementation of the proposed framework for inspecting and analysing a process predictive model, and we demonstrate the applicability of the framework using a real-life case study on a loan application process.
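
As an illustrative aside (not the paper's framework): the sketch below shows the general kind of post-hoc model inspection the abstract refers to, assuming a hypothetical tabular encoding of event-log features for a loan application process, synthetic data, and permutation importance as the explanation technique. All feature names and the outcome rule are invented here for demonstration only.

# Minimal sketch of inspecting a 'black-box' process predictive model.
# Feature names, synthetic data, and the use of permutation importance
# are assumptions made for illustration; they are not the authors' method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical aggregated event-log features for a loan application case.
X = np.column_stack([
    rng.integers(1, 20, n),          # number of events so far in the case
    rng.exponential(5.0, n),         # elapsed time in days
    rng.normal(50_000, 15_000, n),   # requested loan amount
    rng.integers(0, 2, n),           # whether an offer was already sent
])
feature_names = ["num_events", "elapsed_days", "requested_amount", "offer_sent"]
# Synthetic outcome: accepted if an offer was sent and the case moved quickly
# (purely for demonstration).
y = ((X[:, 3] == 1) & (X[:, 1] < 6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Inspect the fitted model: which features drive its predictions?
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:18s} importance = {mean:.3f} +/- {std:.3f}")

A purpose-driven inspection platform, as described in the paper, would go beyond such raw importance scores by presenting explanations visually and in terms meaningful to the inspection task at hand.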

Citation (APA)

Wickramanayake, B., Ouyang, C., Moreira, C., & Xu, Y. (2022). Generating Purpose-Driven Explanations: The Case of Process Predictive Model Inspection. In Lecture Notes in Business Information Processing (Vol. 452, pp. 120–129). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-07481-3_14
