A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act

Abstract

This study discusses the interplay between the metrics used to measure the explainability of AI systems and the proposed EU Artificial Intelligence Act. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that comply with the forthcoming Act, and explainability metrics play a significant role in that discussion. This study identifies the requirements that such a metric should possess to ease compliance with the AI Act. It does so through an interdisciplinary approach, i.e., starting from the philosophical concept of explainability and examining several metrics proposed by scholars and standardisation entities through the lens of the explainability obligations set by the proposed AI Act. Our analysis proposes that metrics for measuring the kind of explainability endorsed by the proposed AI Act should be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Accordingly, we discuss the extent to which these requirements are met by the metrics currently under discussion.

Citation (APA)
Sovrano, F., Sapienza, S., Palmirani, M., & Vitali, F. (2021). A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act. In Frontiers in Artificial Intelligence and Applications (Vol. 346, pp. 235–242). IOS Press BV. https://doi.org/10.3233/FAIA210342
