Towards an XAI-Assisted Third-Party Evaluation of AI Systems: Illustration on Decision Trees

Abstract

We explored the potential contribution of eXplainable Artificial Intelligence (XAI) to the evaluation of Artificial Intelligence (AI) systems, in a context where the evaluation is performed by independent third-party evaluators, for example for certification purposes. The experimental approach of this paper is based on “explainable by design” decision trees that produce predictions on health data and bank data. The results show that the explanations could be used by evaluators to identify the parameters involved in decision making and their levels of importance. The explanations would thus make it possible to guide the construction of the evaluation corpus, to explore the rules followed in decision-making, and to identify potentially critical relationships between parameters. In addition, the explanations make it possible to check for bias in both the database and the algorithm. These first results lay the groundwork for further research aimed at generalizing the conclusions of this paper to other XAI methods.
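
As an illustration of the kind of inspection the abstract describes, a hedged sketch follows: it trains a shallow decision tree (an "explainable by design" model) and extracts both feature importances and human-readable decision rules, the two artifacts an evaluator could audit. The dataset and library (scikit-learn) are assumptions for illustration, not the paper's actual experimental setup.

```python
# Illustrative sketch only: the dataset and tooling are assumptions,
# not the health/bank data or pipeline used in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# A shallow tree keeps the extracted rules human-readable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# (1) Which parameters drive the decision, and how strongly?
importances = sorted(
    zip(data.feature_names, clf.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in importances[:5]:
    print(f"{name}: {score:.3f}")

# (2) Which rules does the model follow? export_text prints the
# full decision paths, which an evaluator can inspect for bias
# or critical parameter relationships.
print(export_text(clf, feature_names=list(data.feature_names)))
```

In this setting, the importance ranking could orient which inputs to vary when building an evaluation corpus, while the printed rules expose the thresholds actually used at each decision node.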

Citation (APA)

Zhou, Y., Boussard, M., & Delaborde, A. (2021). Towards an XAI-Assisted Third-Party Evaluation of AI Systems: Illustration on Decision Trees. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12688 LNAI, pp. 158–172). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-82017-6_10
