Explainability in predictive process monitoring: When understanding helps improving


Abstract

Predictive business process monitoring techniques aim to predict the future state of a business process execution, such as the remaining execution time, the next activity to be executed, or the final outcome with respect to a set of possible outcomes. However, the accuracy of a predictive model is in general not optimal, so in some cases its predictions are wrong. In addition, state-of-the-art techniques for predictive process monitoring do not explain which features induced the predictive model to provide wrong predictions, making it difficult to understand why the model was mistaken. In this paper, we propose a novel approach to explain why a predictive model for outcome-oriented predictions provides wrong predictions and, ultimately, to improve its accuracy. The approach leverages post-hoc explainers and different encodings to identify the features that most commonly induce a predictor to make mistakes. By reducing the impact of those features, the accuracy of the predictive model is increased. The approach has been validated on both synthetic and real-life logs.
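The core loop described in the abstract can be illustrated with a deliberately simplified sketch. The feature encoding, the rule-based "model", and the leave-one-out attribution below are all hypothetical stand-ins (the paper itself uses trained predictors and post-hoc explainers such as LIME/SHAP, and several trace encodings); the sketch only shows the workflow: find the features most often blamed for wrong outcome predictions, then reduce their impact by masking them.

```python
from collections import Counter

# Toy stand-in for an outcome predictor over encoded traces.
# Each trace is encoded as a dict of boolean features (hypothetical encoding);
# the "model" is deliberately misled by the spurious feature "f_noise".
def predict(features):
    if features.get("f_noise"):          # spurious rule the explainer should expose
        return "negative"
    return "positive" if features.get("f_signal") else "negative"

# Leave-one-out, post-hoc attribution: a feature "induced" a wrong prediction
# if removing it from the encoding flips the prediction to the true label.
def misleading_features(trace, true_label):
    blamed = []
    for feat in trace:
        reduced = {k: v for k, v in trace.items() if k != feat}
        if predict(trace) != true_label and predict(reduced) == true_label:
            blamed.append(feat)
    return blamed

# Tiny synthetic log: (encoded trace, ground-truth outcome).
log = [
    ({"f_signal": True, "f_noise": True}, "positive"),
    ({"f_signal": True, "f_noise": True}, "positive"),
    ({"f_signal": True}, "positive"),
    ({"f_noise": True}, "negative"),
]

# 1) Aggregate the features most often blamed across wrong predictions.
blame = Counter()
for trace, label in log:
    if predict(trace) != label:
        blame.update(misleading_features(trace, label))

# 2) "Reduce the impact" of the top offender by masking it from the encoding.
worst, _ = blame.most_common(1)[0]
masked_log = [({k: v for k, v in t.items() if k != worst}, y) for t, y in log]

acc_before = sum(predict(t) == y for t, y in log) / len(log)
acc_after = sum(predict(t) == y for t, y in masked_log) / len(masked_log)
print(worst, acc_before, acc_after)
```

In this toy setting the explainer blames `f_noise` for both misclassified traces, and masking it raises accuracy from 0.5 to 1.0; in the paper the analogous step is performed with real explainers and followed by retraining the predictor.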

Citation (APA)

Rizzi, W., Di Francescomarino, C., & Maggi, F. M. (2020). Explainability in predictive process monitoring: When understanding helps improving. In Lecture Notes in Business Information Processing (Vol. 392 LNBIP, pp. 141–158). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58638-6_9