Robust Fairness under Covariate Shift


Abstract

Making predictions that are fair with respect to protected attributes (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from labeled training data under the assumption that training and testing data are drawn independently and identically (i.i.d.) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of the individuals interacting with the machine learning system change. We investigate fairness under covariate shift, a relaxation of the i.i.d. assumption in which the inputs (covariates) change while the conditional label distribution remains the same. Under this assumption, we seek fair decisions on target data whose labels are unknown. We propose an approach that obtains a predictor robust to worst-case testing performance while satisfying target fairness requirements and matching statistical properties of the source data. We demonstrate the benefits of our approach on benchmark prediction tasks.
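The covariate-shift assumption described in the abstract can be stated compactly: the marginal distribution over inputs differs between source and target, while the labeling mechanism is shared. In symbols:

$$
P_{\text{src}}(x) \neq P_{\text{tgt}}(x), \qquad P_{\text{src}}(y \mid x) = P_{\text{tgt}}(y \mid x).
$$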
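A minimal sketch of why this assumption helps: under covariate shift, the density ratio $P_{\text{tgt}}(x)/P_{\text{src}}(x)$ lets one reweight labeled source samples to estimate target-distribution quantities, including fairness statistics, without target labels. The sketch below uses a logistic-regression density-ratio estimator and a demographic-parity gap as the fairness statistic; both choices, and all names such as demographic_parity_gap, are illustrative assumptions, not the authors' robust minimax method.

```python
# Illustrative sketch (not the paper's algorithm): importance-weighted
# estimation of a fairness statistic on the target distribution, using
# only labeled source data, under the covariate-shift assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: same p(y | x) on both sides, shifted p(x) on the target.
X_src = rng.normal(loc=0.0, scale=1.0, size=(2000, 2))
X_tgt = rng.normal(loc=0.7, scale=1.2, size=(2000, 2))
a_src = (X_src[:, 0] + rng.normal(size=2000) > 0).astype(int)  # protected attribute
y_src = ((X_src @ np.array([1.0, -0.5])) + rng.normal(size=2000) > 0).astype(int)

# 1) Density-ratio estimation: train a classifier to distinguish source (0)
#    from target (1); with equal sample sizes, its odds recover
#    p_tgt(x) / p_src(x) at each source point.
X_all = np.vstack([X_src, X_tgt])
d_all = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
domain_clf = LogisticRegression().fit(X_all, d_all)
p_tgt_given_x = domain_clf.predict_proba(X_src)[:, 1]
w = p_tgt_given_x / (1.0 - p_tgt_given_x)  # importance weights on source points

# 2) Fit a predictor on source data (plain ERM baseline, for illustration only).
clf = LogisticRegression().fit(X_src, y_src)
y_hat = clf.predict(X_src)

# 3) Weighted demographic-parity gap: an estimate of the gap the predictor
#    would exhibit on the target distribution, computed from source samples.
def demographic_parity_gap(y_hat, a, w):
    rates = [np.average(y_hat[a == g], weights=w[a == g]) for g in (0, 1)]
    return abs(rates[0] - rates[1])

print("estimated target DP gap:", demographic_parity_gap(y_hat, a_src, w))
```

The paper's contribution goes further than this reweighting baseline: it derives the predictor that is robust to worst-case target performance subject to fairness constraints, whereas the sketch only shows the estimation machinery that covariate shift makes available.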

Cite

APA

Rezaei, A., Liu, A., Memarrast, O., & Ziebart, B. D. (2021). Robust Fairness under Covariate Shift. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (pp. 9419–9427). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17135
