Identifying Reasons for Bias: An Argumentation-Based Approach


Abstract

As algorithmic decision-making systems become more prevalent in society, ensuring the fairness of these systems is becoming increasingly important. Whilst there has been substantial research in building fair algorithmic decision-making systems, the majority of these methods require access to the training data, including personal characteristics, and are not transparent regarding which individuals are classified unfairly. In this paper, we propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently in comparison to similar individuals. Our method uses a quantitative argumentation framework to represent attribute-value pairs of an individual and of those similar to them, and uses a well-known semantics to identify the attribute-value pairs in the individual contributing most to their different classification. We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in the identification of bias.
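The abstract mentions evaluating a quantitative argumentation framework with a well-known gradual semantics. As an illustration only (the paper's exact framework and semantics are not reproduced here), the sketch below implements the DF-QuAD gradual semantics, one widely used semantics for quantitative argumentation: each argument has a base score in [0, 1], and its final strength combines that base score with the aggregated strengths of its attackers and supporters. All attribute names and scores in the example are hypothetical.

```python
# Illustrative sketch only: DF-QuAD is one well-known gradual semantics for
# quantitative argumentation frameworks; it is not necessarily the exact
# semantics used in the paper.

def aggregate(strengths):
    # Probabilistic-sum aggregation: F(v1, ..., vn) = 1 - (1-v1)*...*(1-vn).
    # An empty list aggregates to 0 (no attack/support at all).
    prod = 1.0
    for v in strengths:
        prod *= 1.0 - v
    return 1.0 - prod

def dfquad_strength(base, attacker_strengths, supporter_strengths):
    # DF-QuAD combination: the base score is pulled toward 0 when attack
    # outweighs support, and toward 1 when support outweighs attack.
    va = aggregate(attacker_strengths)
    vs = aggregate(supporter_strengths)
    if va >= vs:
        return base - base * (va - vs)
    return base + (1.0 - base) * (vs - va)

# Hypothetical example: arguments representing an individual's attribute-value
# pairs attack or support an argument claiming that the individual is
# classified differently from similar individuals.
attackers = [0.6]        # e.g. "age=25, shared with similar individuals"
supporters = [0.8, 0.4]  # e.g. "sex=female", "hours-per-week=60"
print(dfquad_strength(0.5, attackers, supporters))
```

Under this semantics, attribute-value arguments with high support contributions would be flagged as contributing most to the differing classification; the actual attribution procedure in the paper may differ.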

Citation (APA)

Waller, M., Rodrigues, O., & Cocarascu, O. (2024). Identifying Reasons for Bias: An Argumentation-Based Approach. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 21664–21672). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i19.30165
