Minimax Group Fairness: Algorithms and Experiments


Abstract

We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard differences between group outcomes. In this framework we provide provably convergent, oracle-efficient learning algorithms (equivalently, reductions to non-fair learning) for minimax group fairness, where the goal is to minimize the maximum loss across all groups rather than to equalize group losses. Our algorithms apply to both regression and classification settings, and support overall error as well as false positive or false negative rates as the fairness measure of interest. They also support relaxations of the fairness constraints, permitting study of the tradeoff between overall accuracy and minimax fairness. We compare the experimental behavior and performance of our algorithms across a variety of fairness-sensitive data sets, and show empirical cases in which minimax fairness is strictly and strongly preferable to equal-outcome notions.
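
A reduction to non-fair learning of the kind described in the abstract can be viewed as a two-player game: an adversary maintains a weight for each group and shifts weight toward whichever group currently suffers the highest loss, while the learner best-responds by calling an ordinary (non-fair) learning oracle on the reweighted data. Below is a minimal sketch of such dynamics in Python, assuming scikit-learn's LogisticRegression as the learning oracle and per-group 0/1 loss as the fairness measure; the exponentiated-gradient weight update and the hyperparameters (`rounds`, `eta`) are illustrative choices, not necessarily the authors' exact algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimax_fair_fit(X, y, groups, rounds=50, eta=1.0):
    """Sketch of minimax group fairness as a two-player game:
    an adversary reweights groups toward the worst-off one, and
    the learner best-responds with weighted ERM on the data."""
    groups = np.asarray(groups)
    group_ids = np.unique(groups)
    w = np.ones(len(group_ids)) / len(group_ids)  # adversary's distribution over groups
    models = []
    avg_loss = np.zeros(len(group_ids))           # running average of per-group losses
    for t in range(1, rounds + 1):
        # Learner: weighted ERM -- each example weighted by its group's current weight
        gw = dict(zip(group_ids, w))
        sample_w = np.array([gw[g] for g in groups])
        clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        models.append(clf)
        # Evaluate this round's model: 0/1 loss restricted to each group
        preds = clf.predict(X)
        losses = np.array([np.mean(preds[groups == g] != y[groups == g])
                           for g in group_ids])
        avg_loss += (losses - avg_loss) / t
        # Adversary: exponentiated-gradient step toward high-loss groups
        w = w * np.exp(eta * losses)
        w = w / w.sum()
    return models, dict(zip(group_ids, avg_loss))
```

In no-regret dynamics of this style, the uniform mixture over the per-round models converges to an approximate minimax solution, so one way to use the output is as a randomized classifier that predicts with a model drawn uniformly from `models`; the returned dictionary of average per-group losses indicates how far the maximum group loss has been driven down.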

Citation (APA)

Diana, E., Gill, W., Kearns, M., Kenthapadi, K., & Roth, A. (2021). Minimax Group Fairness: Algorithms and Experiments. In AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 66–76). Association for Computing Machinery, Inc. https://doi.org/10.1145/3461702.3462523
