Bayesian modeling of intersectional fairness: The variance of bias

Abstract

Intersectionality is a framework that analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including race, gender, sexual orientation, class, and disability. Intersectionality theory therefore implies that it is important for artificial intelligence systems to be fair with respect to multidimensional protected attributes. However, measuring fairness becomes statistically challenging in the multidimensional setting due to data sparsity, which increases rapidly with the number of dimensions and with the number of values per dimension. We present a Bayesian probabilistic modeling approach for the reliable, data-efficient estimation of fairness with multidimensional protected attributes, which we apply to two existing intersectional fairness metrics. Experimental results on census data and the COMPAS criminal justice recidivism dataset demonstrate the utility of our methodology, and show that Bayesian methods are valuable for the modeling and measurement of fairness in intersectional contexts.
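For intuition, the sketch below illustrates the kind of Bayesian smoothing the abstract alludes to, applied to an ε-differential-fairness-style metric over intersectional subgroups. It uses simple posterior-mean estimates under a Beta prior; the prior parameters, group counts, and function names are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from itertools import combinations

def smoothed_rates(positives, totals, alpha=0.5, beta=0.5):
    """Posterior-mean estimate of P(y=1 | group) under a Beta(alpha, beta) prior.

    The prior acts as pseudo-counts, keeping the estimates stable for
    sparse intersectional subgroups with few (or zero) observations."""
    positives = np.asarray(positives, dtype=float)
    totals = np.asarray(totals, dtype=float)
    return (positives + alpha) / (totals + alpha + beta)

def epsilon_differential_fairness(rates):
    """Empirical epsilon for a binary outcome: the largest absolute log-ratio
    of outcome probabilities (for y=1 and y=0) over all pairs of groups."""
    eps = 0.0
    for p_i, p_j in combinations(rates, 2):
        eps = max(eps,
                  abs(np.log(p_i) - np.log(p_j)),          # outcome y = 1
                  abs(np.log(1 - p_i) - np.log(1 - p_j)))  # outcome y = 0
    return eps

# Hypothetical counts for four intersectional subgroups (e.g., race x gender):
# positive outcomes and subgroup sizes; note the very small groups.
positives = [40, 3, 55, 1]
totals = [100, 10, 120, 4]

rates = smoothed_rates(positives, totals)
print("smoothed rates:", np.round(rates, 3))
print("epsilon:", round(epsilon_differential_fairness(rates), 3))
```

A fuller Bayesian treatment along the lines suggested by the title would presumably draw samples from the per-group posteriors and recompute ε for each sample, yielding a posterior distribution over the fairness metric (and hence a variance of the bias estimate) rather than a single point value.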

Citation (APA)

Foulds, J. R., Islam, R., Keya, K. N., & Pan, S. (2020). Bayesian modeling of intersectional fairness: The variance of bias. In Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020 (pp. 424–432). Society for Industrial and Applied Mathematics Publications. https://doi.org/10.1137/1.9781611976236.48
