Promoting Fairness in Learned Models by Learning to Active Learn under Parity Constraints

Abstract

Machine learning models can have consequential effects when used to automate decisions, and disparities between groups of people in the error rates of those decisions can lead to harms suffered more by some groups than others. Past algorithmic approaches aim to enforce parity across groups given a fixed set of training data; instead, we ask: what if we can gather more data to mitigate disparities? We develop a meta-learning algorithm for parity-constrained active learning that learns a policy to decide which labels to query so as to maximize accuracy subject to parity constraints. To optimize the active learning policy, our proposed algorithm formulates the parity-constrained active learning task as a bi-level optimization problem. The inner level corresponds to training a classifier on a subset of labeled examples. The outer level corresponds to updating the selection policy that chooses this subset so that the trained classifier achieves the desired fairness and accuracy behavior. To solve this constrained bi-level optimization problem, we employ the Forward-Backward Splitting optimization method. Empirically, across several parity metrics and classification tasks, our approach outperforms alternatives by a large margin.
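To make the optimization method concrete, below is a minimal, self-contained sketch of a Forward-Backward Splitting (proximal gradient) loop on a toy constrained problem. It is not the authors' implementation: the quadratic loss, the half-space stand-in for the parity constraint, and every name and value in it (A, b, a, c, gamma) are illustrative assumptions. The forward step is a gradient step on the smooth objective (standing in for the accuracy loss of the outer level); the backward step is the proximal operator of the constraint's indicator function, which for a half-space reduces to a Euclidean projection.

```python
# Illustrative sketch only -- not the paper's algorithm or code.
# Forward-Backward Splitting minimizes f(x) + g(x) by alternating a gradient
# ("forward") step on the smooth term f with a proximal ("backward") step on
# the nonsmooth term g.  Here g is the indicator of a half-space {x : a.x <= c},
# a stand-in for a linearized parity constraint, so prox_g is a projection.
import numpy as np

rng = np.random.default_rng(0)

# Toy smooth objective f(x) = 0.5 * ||A x - b||^2 (stand-in for the accuracy loss).
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def grad_f(x):
    return A.T @ (A @ x - b)

# Toy constraint a.x <= c (stand-in for a parity constraint set).
a = rng.normal(size=5)
c = 0.1

def project_halfspace(x, a, c):
    """Proximal step for the indicator of {x : a.x <= c}: Euclidean projection."""
    violation = a @ x - c
    if violation <= 0:
        return x
    return x - (violation / (a @ a)) * a

x = np.zeros(5)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/L, L = Lipschitz constant of grad_f

for _ in range(200):
    x = x - gamma * grad_f(x)             # forward (gradient) step on f
    x = project_halfspace(x, a, c)        # backward (proximal) step on g

print("constraint satisfied:", a @ x <= c + 1e-8)
print("objective value:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

In the paper's setting, the smooth part would be the outer-level objective evaluated on the classifier trained at the inner level, and the nonsmooth part would encode the chosen parity constraint; the sketch above only shows the generic splitting structure.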

Citation (APA)

Sharaf, A., Daumé, H., III, & Ni, R. (2022). Promoting Fairness in Learned Models by Learning to Active Learn under Parity Constraints. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22) (pp. 2149–2156). Association for Computing Machinery. https://doi.org/10.1145/3531146.3534632
