On the Intersection of Explainable and Reliable AI for Physical Fatigue Prediction

Abstract

In the era of Industry 4.0, the use of Artificial Intelligence (AI) is widespread in occupational settings. Since human safety is at stake, the explainability and trustworthiness of AI are even more important than high accuracy. This paper investigates eXplainable AI (XAI) to detect physical fatigue during a manual material handling task simulation. Besides comparing global rule-based XAI models (Logic Learning Machine, LLM, and Decision Tree, DT) to black-box models (NN, SVM, XGBoost) in terms of performance, we also compare global models with local ones (LIME over XGBoost). Surprisingly, the global and local approaches reach similar conclusions in terms of feature importance. Moreover, an expansion from local to global rules is designed for Anchors by posing an appropriate optimization method: Anchors coverage is enlarged from an originally low value, 11%, up to 43%. As far as trustworthiness is concerned, rule sensitivity analysis drives the identification of optimized regions in the feature space where physical fatigue is predicted with zero statistical error. The discovery of such 'non-fatigue regions' helps certify organizational and clinical decision making.
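The trade-off behind expanding a local Anchors-style rule into a more global one can be illustrated with a minimal sketch. Everything below is hypothetical: the feature names (heart rate, posture angle), the thresholds, and the synthetic data are invented for illustration and do not come from the paper; the rule here is just an AND of per-feature thresholds, with coverage (fraction of samples the rule applies to) and precision (fraction of covered samples correctly labeled) computed by counting.

```python
import random

random.seed(0)

# Synthetic samples: (heart_rate, posture_angle); label 1 means "fatigued".
# The labeling rule is made up for this sketch.
data = [(random.uniform(60, 180), random.uniform(0, 90)) for _ in range(1000)]
labels = [1 if hr > 140 and ang > 45 else 0 for hr, ang in data]

def evaluate_rule(hr_min, ang_min):
    """Rule: predict 'fatigued' when heart_rate >= hr_min AND posture_angle >= ang_min.

    Returns (coverage, precision) of the rule over the synthetic dataset.
    """
    covered = [y for (hr, ang), y in zip(data, labels)
               if hr >= hr_min and ang >= ang_min]
    coverage = len(covered) / len(data)
    precision = sum(covered) / len(covered) if covered else 0.0
    return coverage, precision

# A narrow local rule (tight thresholds) vs. an enlarged, more global one:
# relaxing the thresholds raises coverage while precision stays high.
local_cov, local_prec = evaluate_rule(hr_min=160, ang_min=70)
global_cov, global_prec = evaluate_rule(hr_min=141, ang_min=46)
print(f"local rule:  coverage={local_cov:.2%} precision={local_prec:.2%}")
print(f"global rule: coverage={global_cov:.2%} precision={global_prec:.2%}")
```

The optimization described in the abstract can be read as searching over such threshold relaxations for the largest coverage that keeps precision (and, for the 'non-fatigue regions', statistical error) within an acceptable bound.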

Citation (APA)

Narteni, S., Orani, V., Cambiaso, E., Rucco, M., & Mongelli, M. (2022). On the Intersection of Explainable and Reliable AI for Physical Fatigue Prediction. IEEE Access, 10, 76243–76260. https://doi.org/10.1109/ACCESS.2022.3191907
