On the Suitability of SHAP Explanations for Refining Classifications

Abstract

In industrial contexts, when an ML model classifies a sample as positive, it raises an alarm that is subsequently sent to human analysts for verification. Reducing the number of false alarms upstream in an ML pipeline is paramount to reducing the workload of experts while increasing customers’ trust. Increasingly, SHAP Explanations are leveraged to facilitate manual analysis. Because they have been shown to help human analysts detect false positives, we postulate that SHAP Explanations may provide a means to automate false-positive reduction. To confirm this intuition, we evaluate clustering and rule-detection metrics against ground-truth labels to assess how well SHAP Explanations discriminate false positives from true positives. We show that SHAP Explanations are indeed relevant for discriminating samples and are a promising candidate for automating ML tasks and helping to detect and reduce false-positive results.
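As a minimal illustration of the idea described in the abstract (not the authors' actual pipeline), the sketch below trains a classifier on a synthetic, imbalanced dataset, computes SHAP values for the samples it flags as positive, and uses a clustering-style metric (silhouette score) with the ground-truth true-positive/false-positive labels to gauge how well the SHAP representation separates the two groups. The dataset, model, and metric are all assumptions made for the example.

```python
# Illustrative sketch only: measure whether SHAP values of predicted positives
# separate true positives from false positives, using the silhouette score
# computed against ground-truth TP/FP labels.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import silhouette_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an industrial alarm dataset (imbalanced classes).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Keep only the samples the model flags as positive ("alarms").
alarms = model.predict(X_test) == 1
X_alarms, y_alarms = X_test[alarms], y_test[alarms]

# SHAP values for the alarms; TreeExplainer returns a single
# (n_samples, n_features) array for a binary gradient-boosting model.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X_alarms))

# Ground truth splits the alarms into true positives (1) and false positives (0).
# A higher silhouette score suggests the SHAP representation separates the two
# groups, i.e. it could support automated false-positive filtering.
if len(np.unique(y_alarms)) > 1:
    score = silhouette_score(shap_values, y_alarms)
    print(f"Silhouette score of TP vs FP in SHAP space: {score:.3f}")
else:
    print("All alarms share one ground-truth label; silhouette is undefined.")
```

A natural extension of this sketch is to compare the same metric computed on the raw feature values versus the SHAP values, which is the kind of comparison the paper's evaluation addresses.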

Cite

Arslan, Y., Lebichot, B., Allix, K., Veiber, L., Lefebvre, C., Boytsov, A., … Klein, J. (2022). On the Suitability of SHAP Explanations for Refining Classifications. In International Conference on Agents and Artificial Intelligence (Vol. 3, pp. 395–402). Science and Technology Publications, Lda. https://doi.org/10.5220/0010827700003116
