Explainable Predictive Maintenance is Not Enough: Quantifying Trust in Remaining Useful Life Estimation

Abstract

Machine learning (ML) and deep learning (DL) have shown tremendous success in data-driven predictive maintenance (PdM). However, operators and technicians often require insights to understand what is happening, why it is happening, and how to react, which these black-box models cannot provide. This is a major obstacle to adopting PdM, as black-box models cannot support experts in making maintenance decisions based on the problems they detect. Motivated by this, several researchers have recently applied post-hoc explanation methods and tools, such as LIME and SHAP, to explain the remaining useful life (RUL) predicted by these black-box models. Unfortunately, such post-hoc explanation methods often suffer from the disagreement problem, which occurs when multiple explainable AI (XAI) tools differ in their feature rankings. Hence, explainable PdM models that rely on these methods are not trustworthy, as such unstable explanations may lead to catastrophic consequences in safety-critical PdM applications. This paper proposes a novel framework to address this problem. Specifically, we first use three state-of-the-art explanation methods, LIME, SHAP, and Anchor, to explain the RUL predicted by three ML-based PdM models, namely extreme gradient boosting (XGB), random forest (RF), and logistic regression (LR), and by one feed-forward neural network (FFNN)-based PdM model, using the C-MAPSS dataset. We show that the ranking of dominant features for RUL prediction differs across explanation methods. We then propose a new metric, the trust score, for selecting the proper explanation method. This is achieved by evaluating the XAI methods with four evaluation metrics, fidelity, stability, consistency, and identity, and combining them into a single trust score using the Kemeny and Borda rank aggregation methods. Our results show that the proposed approach effectively selects the most appropriate explanation method for estimated RULs from a set of candidates. To the best of our knowledge, this is the first work that attempts to address and solve the disagreement problem in explainable PdM.
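
To make the aggregation step concrete, the sketch below combines per-metric rankings of the three explanation methods into a single trust score using a Borda count. It is a minimal, hypothetical illustration, not the authors' implementation: the metric values, the equal weighting of the four metrics, and the borda_aggregate helper are assumptions for illustration, and the Kemeny aggregation variant (which instead searches for the consensus ranking minimizing pairwise disagreements) is omitted.

    from typing import Dict, List

    # Illustrative (made-up) evaluation scores, higher is better, for each
    # explanation method on the four metrics named in the abstract. In the
    # actual framework these would be measured on explanations of a trained
    # PdM model's RUL predictions for the C-MAPSS dataset.
    metric_scores: Dict[str, Dict[str, float]] = {
        "fidelity":    {"LIME": 0.81, "SHAP": 0.90, "Anchor": 0.74},
        "stability":   {"LIME": 0.62, "SHAP": 0.85, "Anchor": 0.79},
        "consistency": {"LIME": 0.70, "SHAP": 0.77, "Anchor": 0.83},
        "identity":    {"LIME": 0.95, "SHAP": 1.00, "Anchor": 0.88},
    }

    def borda_aggregate(scores_per_metric: Dict[str, Dict[str, float]]) -> Dict[str, int]:
        # Borda count: per metric, the best method earns (n-1) points, the
        # next (n-2), ..., the worst 0; points are summed across all metrics.
        methods: List[str] = list(next(iter(scores_per_metric.values())).keys())
        totals = {m: 0 for m in methods}
        for scores in scores_per_metric.values():
            ranked = sorted(methods, key=lambda m: scores[m], reverse=True)
            for position, method in enumerate(ranked):
                totals[method] += len(methods) - 1 - position
        return totals

    if __name__ == "__main__":
        trust = borda_aggregate(metric_scores)
        for method, score in sorted(trust.items(), key=lambda kv: -kv[1]):
            print(f"{method}: aggregated trust score = {score}")
        print("Selected explanation method:", max(trust, key=trust.get))

With the placeholder scores above, SHAP accumulates the highest Borda total and would be selected as the most trustworthy explanation method; the actual selection in the paper depends on the measured metric values.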

Cite

APA

Kundu, R. K., & Hoque, K. A. (2023). Explainable Predictive Maintenance is Not Enough: Quantifying Trust in Remaining Useful Life Estimation. In Proceedings of the Annual Conference of the Prognostics and Health Management Society, PHM (Vol. 15). Prognostics and Health Management Society. https://doi.org/10.36001/phmconf.2023.v15i1.3472
