Explainable AI methods in cyber risk management

Abstract

Artificial intelligence (AI) methods are becoming widespread, especially when data are insufficient to build classical statistical models, as is the case in cyber risk management. However, when applied to regulated industries such as energy, finance, and health, AI methods lack explainability. Authorities charged with validating machine learning models in regulated fields will not accept black-box models unless they are supplemented with methods that explain why certain predictions were obtained and which variables contribute most to those predictions. Shapley values have recently been introduced for this purpose: they are model agnostic and powerful, but they are not normalized and therefore cannot become a standardized procedure. In this paper, we provide an explainable AI model that embeds Shapley values within a statistical normalization based on Lorenz Zonoids, which is particularly suited to the ordinal measurement variables typically available for assessing cyber risk.
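To make the idea concrete, below is a minimal illustrative sketch of one way to combine Shapley-style attribution with a Lorenz Zonoid normalization: each feature's contribution is the Shapley-weighted average change in the Lorenz Zonoid (a Gini-type predictive accuracy measure) of the model's fitted values when the feature is added to a coalition. This is a hedged reconstruction under stated assumptions, not the authors' reference implementation; the helper names (lorenz_zonoid, predictions_for_subset, shapley_lorenz), the choice of a linear model, and the simulated data are all assumptions introduced for illustration.

```python
# Illustrative sketch: Shapley-weighted Lorenz Zonoid contributions.
# Assumptions: a linear regression as the underlying model, the univariate
# Lorenz Zonoid taken as the Gini coefficient of the fitted values, and
# simulated non-negative data. Exact subset enumeration is exponential in
# the number of features, so this is only feasible for small p.

from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression


def lorenz_zonoid(y):
    """Univariate Lorenz Zonoid value, computed here as the Gini
    coefficient of a non-negative variable: 2*Cov(y, F(y)) / mean(y),
    with F(y) the empirical cdf (ranks / n)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ranks = np.argsort(np.argsort(y)) + 1  # ranks 1..n
    return 2.0 * np.cov(y, ranks / n, bias=True)[0, 1] / y.mean()


def predictions_for_subset(X, y, subset):
    """Fit the model on a feature subset and return its fitted values.
    With an empty subset, the best prediction is the constant mean of y,
    whose Lorenz Zonoid is zero."""
    if not subset:
        return np.full(len(y), y.mean())
    cols = list(subset)
    model = LinearRegression().fit(X[:, cols], y)
    return model.predict(X[:, cols])


def shapley_lorenz(X, y):
    """Shapley-weighted marginal Lorenz Zonoid contribution per feature."""
    p = X.shape[1]
    contrib = np.zeros(p)
    for k in range(p):
        others = [j for j in range(p) if j != k]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Standard Shapley coalition weight |S|!(p-|S|-1)!/p!
                weight = factorial(size) * factorial(p - size - 1) / factorial(p)
                lz_with = lorenz_zonoid(predictions_for_subset(X, y, S + (k,)))
                lz_without = lorenz_zonoid(predictions_for_subset(X, y, S))
                contrib[k] += weight * (lz_with - lz_without)
    return contrib


rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(200, 3))
y = 3 * X[:, 0] + X[:, 1] + rng.uniform(0, 1, 200)  # feature 2 is pure noise
print(shapley_lorenz(X, y))
```

By construction the contributions sum to the Lorenz Zonoid of the full model's fitted values, which is what gives the decomposition its normalization: each feature's share can be read on a common, bounded scale, unlike raw Shapley values.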

Citation (APA)

Giudici, P., & Raffinetti, E. (2022). Explainable AI methods in cyber risk management. Quality and Reliability Engineering International, 38(3), 1318–1326. https://doi.org/10.1002/qre.2939
