Explainable AI methods in cyber risk management

42 citations | 107 Mendeley readers

This article is free to access.

Abstract

Artificial intelligence (AI) methods are becoming widespread, especially when data are insufficient to build classical statistical models, as is the case in cyber risk management. However, when applied to regulated industries, such as energy, finance, and health, AI methods lack explainability. Authorities tasked with validating machine learning models in regulated fields will not accept black-box models unless they are supplemented with further methods that explain why certain predictions were obtained and which variables contribute most to those predictions. Recently, Shapley values have been introduced for this purpose: they are model agnostic and powerful, but they are not normalized and, therefore, cannot become a standardized procedure. In this paper, we provide an explainable AI model that embeds Shapley values within a statistical normalization based on Lorenz Zonoids, particularly suited for the ordinal measurement variables that can be obtained to assess cyber risk.
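The idea described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it takes the Lorenz Zonoid of a (nonnegative) variable to equal its Gini coefficient, uses ordinary least squares as a stand-in predictive model, and averages the Lorenz Zonoid gain of adding each feature over all coalitions with the standard Shapley weights. All function names are hypothetical.

```python
from itertools import combinations
from math import comb
import numpy as np

def lorenz_zonoid(y):
    """Lorenz Zonoid value of a nonnegative variable (its Gini coefficient)."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * y) / (n * np.sum(y)) - (n + 1) / n

def fit_predict(X, y, cols):
    """OLS predictions using only the features in `cols` (plus an intercept).
    The empty coalition predicts the mean, whose Lorenz Zonoid is zero."""
    if not cols:
        return np.full(len(y), y.mean())
    A = np.column_stack([np.ones(len(y)), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ beta

def shapley_lorenz(X, y):
    """Shapley-style decomposition of the Lorenz Zonoid of the predictions:
    each feature's value is its weighted average marginal Zonoid gain."""
    p = X.shape[1]
    values = np.zeros(p)
    for k in range(p):
        others = [j for j in range(p) if j != k]
        for r in range(p):
            for S in combinations(others, r):
                w = 1.0 / (p * comb(p - 1, r))  # Shapley coalition weight
                gain = (lorenz_zonoid(fit_predict(X, y, S + (k,)))
                        - lorenz_zonoid(fit_predict(X, y, S)))
                values[k] += w * gain
    return values
```

By Shapley efficiency, the per-feature values sum to the Lorenz Zonoid of the full-model predictions, which is what makes the decomposition normalized: each contribution is a share of a bounded, mutual-variability measure rather than an unscaled payoff.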




Citation (APA)

Giudici, P., & Raffinetti, E. (2022). Explainable AI methods in cyber risk management. Quality and Reliability Engineering International, 38(3), 1318–1326. https://doi.org/10.1002/qre.2939

Readers' Seniority

PhD / Postgrad / Masters / Doc: 15 (50%)
Professor / Associate Prof.: 5 (17%)
Lecturer / Post doc: 5 (17%)
Researcher: 5 (17%)

Readers' Discipline

Computer Science: 10 (38%)
Business, Management and Accounting: 9 (35%)
Engineering: 4 (15%)
Economics, Econometrics and Finance: 3 (12%)

Article Metrics

Social media shares, likes & comments: 11
