Fair Without Leveling Down: A New Intersectional Fairness Definition


Abstract

In this work, we consider the problem of intersectional group fairness in the classification setting, where the objective is to learn discrimination-free models in the presence of several intersecting sensitive groups. First, we illustrate various shortcomings of existing fairness measures commonly used to capture intersectional fairness. Then, we propose a new definition, called α-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups and can be seen as a generalization of the notion of differential fairness. We highlight several desirable properties of the proposed definition and analyze its relation to other fairness measures. Finally, we benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline. Our results reveal that the increase in fairness measured by previous definitions hides a "leveling down" effect, i.e., degrading the best performance over groups rather than improving the worst one.
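The "leveling down" effect described above can be made concrete with a small numerical sketch. The example below is illustrative only (it does not implement the paper's α-Intersectional Fairness definition): it uses a hypothetical gap-based fairness measure and made-up per-group accuracies to show how such a measure can report an improvement even though no group's performance actually improved.

```python
# Illustrative sketch of "leveling down" (NOT the paper's alpha-Intersectional
# Fairness definition). Per-group accuracies are hypothetical.

def fairness_gap(group_accuracies):
    """A simple gap-based fairness measure: best-group accuracy minus
    worst-group accuracy. Smaller values read as 'fairer'."""
    return max(group_accuracies) - min(group_accuracies)

# Accuracies for two intersectional groups, before and after a
# hypothetical fairness intervention.
baseline = [0.90, 0.70]       # gap = 0.20
intervened = [0.75, 0.70]     # gap = 0.05 -- looks "fairer"

# The gap shrinks, yet the worst group is no better off and the best
# group got worse: fairness improved only by leveling down.
print(fairness_gap(baseline))     # 0.20
print(round(fairness_gap(intervened), 2))  # 0.05
print(min(intervened) >= min(baseline))    # True (worst group unchanged)
print(max(intervened) < max(baseline))     # True (best group degraded)
```

A measure that also accounts for absolute performance, as the proposed definition does, would not count this outcome as an improvement.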

Citation (APA)

Maheshwari, G., Bellet, A., Denis, P., & Keller, M. (2023). Fair Without Leveling Down: A New Intersectional Fairness Definition. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 9018–9032). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.558
