An Audit Framework for Technical Assessment of Binary Classifiers


Abstract

Multilevel models using logistic regression (MLogRM) and random forest models (RFM) are increasingly deployed in industry for binary classification. The European Commission's proposed Artificial Intelligence Act (AIA) requires, under certain conditions, that such models be applied in a fair, transparent, and ethical manner, which in turn implies a technical assessment of these models. This paper proposes and demonstrates an audit framework for the technical assessment of RFMs and MLogRMs, focusing on model-, discrimination-, and transparency-and-explainability-related aspects. To measure these aspects, 20 KPIs are proposed and paired with a traffic-light risk assessment method. An open-source dataset is used to train an RFM and an MLogRM, and the KPIs are computed and compared against the traffic lights. The performance of popular explainability methods such as kernel- and tree-SHAP is also assessed. The framework is expected to assist regulatory bodies in performing conformity assessments of binary classifiers, and to benefit providers and users deploying such AI systems in complying with the AIA.
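The pairing of KPIs with a traffic-light method can be sketched as follows. This is a minimal illustration, not the paper's actual framework: the two KPIs shown (accuracy and demographic parity difference) and all thresholds are assumptions chosen for demonstration; the paper's 20 KPIs and their cut-offs may differ.

```python
# Hedged sketch: traffic-light assessment of two illustrative KPIs for a
# binary classifier. KPI choices and thresholds are assumed, not taken
# from the paper.

def accuracy(y_true, y_pred):
    # Fraction of predictions matching the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups
    # (a common discrimination-related KPI; 0 means parity).
    def rate(g):
        members = [p for p, s in zip(y_pred, group) if s == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def traffic_light(value, green, amber, higher_is_better=True):
    # Map a KPI value onto green/amber/red given two thresholds (assumed).
    if not higher_is_better:
        value, green, amber = -value, -green, -amber
    if value >= green:
        return "green"
    return "amber" if value >= amber else "red"

# Toy data: true labels, model predictions, and a protected attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 1, 1, 0, 1, 1, 0]

acc = accuracy(y_true, y_pred)                # 0.75
dpd = demographic_parity_diff(y_pred, group)  # |0.25 - 0.75| = 0.5

print(traffic_light(acc, green=0.85, amber=0.70))   # -> amber
print(traffic_light(dpd, green=0.05, amber=0.20,
                    higher_is_better=False))        # -> red
```

In practice, such KPI values would be computed on a trained RFM or MLogRM's held-out predictions, and the per-KPI traffic lights aggregated into an overall risk picture for the conformity assessment.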

Citation (APA)
Bhaumik, D., & Dey, D. (2023). An Audit Framework for Technical Assessment of Binary Classifiers. In International Conference on Agents and Artificial Intelligence (Vol. 2, pp. 312–324). Science and Technology Publications, Lda. https://doi.org/10.5220/0011744600003393
