Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research


Abstract

Across machine learning (ML) sub-disciplines, researchers make explicit mathematical assumptions in order to facilitate proof-writing. We note that, specifically in the area of fairness-accuracy trade-off optimization scholarship, similar attention is not paid to the normative assumptions that ground this approach. Such assumptions presume that 1) accuracy and fairness are in inherent opposition to one another, 2) strict notions of mathematical equality can adequately model fairness, 3) it is possible to measure the accuracy and fairness of decisions independently of historical context, and 4) collecting more data on marginalized individuals is a reasonable solution to mitigate the effects of the trade-off. We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions: while the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness. We conclude by suggesting a concrete path forward toward a potential resolution.
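Assumption 2) above concerns fairness definitions built on strict mathematical equality, such as demographic parity, which requires equal positive-prediction rates across groups. As a minimal illustrative sketch (the data and function below are hypothetical, not drawn from the paper), the demographic-parity gap can be computed as:

```python
# Sketch of one strict-equality fairness notion of the kind the abstract
# critiques: demographic parity, which deems a classifier "fair" only if
# positive-prediction rates are equal across groups. All data illustrative.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels ('a' or 'b'), aligned with predictions
    """
    rate = {}
    for g in ('a', 'b'):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate['a'] - rate['b'])

# Group 'a' receives positive predictions at rate 0.75, group 'b' at 0.25.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_gap(preds, grps))  # 0.5
```

Optimizing a model to drive this single number to zero is exactly the kind of strict-equality formalization whose adequacy as a model of fairness the paper questions.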

Citation (APA)

Cooper, A. F., Abrams, E., & Na, N. A. (2021). Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research. In AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 46–54). Association for Computing Machinery, Inc. https://doi.org/10.1145/3461702.3462519
