Wasserstein-based fairness interpretability framework for machine learning models

Abstract

The objective of this article is to introduce a fairness interpretability framework for measuring and explaining the bias in classification and regression models at the level of a distribution. In our work, we measure the model bias across sub-population distributions of the model output using the Wasserstein metric. To properly quantify the contributions of predictors, we take into account the favorability of both the model and the predictors with respect to the non-protected class. The quantification is accomplished through transport theory, which gives rise to the decomposition of the model bias and the bias explanations into positive and negative contributions. To gain more insight into the role of favorability and to allow for additivity of the bias explanations, we adapt techniques from cooperative game theory.
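As a rough illustration of the first step described above (not part of the article itself), the distributional model bias can be sketched as the Wasserstein-1 distance between model-output distributions of the protected and non-protected sub-populations. The snippet below is a minimal sketch under assumed names: `scores` stands in for model outputs and `protected` for a hypothetical binary group indicator.

    # Minimal sketch: sub-population bias of model outputs via the
    # Wasserstein-1 metric (scipy's empirical implementation).
    import numpy as np
    from scipy.stats import wasserstein_distance

    def model_bias_w1(scores: np.ndarray, protected: np.ndarray) -> float:
        """Wasserstein-1 distance between the score distributions of the
        protected and non-protected sub-populations."""
        s_prot = scores[protected == 1]   # outputs for the protected class
        s_non = scores[protected == 0]    # outputs for the non-protected class
        return wasserstein_distance(s_prot, s_non)

    # Hypothetical usage with synthetic data standing in for model outputs:
    rng = np.random.default_rng(0)
    scores = rng.uniform(size=1000)
    protected = rng.integers(0, 2, size=1000)
    print(model_bias_w1(scores, protected))

The article goes further than this sketch, decomposing such a distance into positive and negative transport contributions and attributing them to individual predictors via cooperative game theory.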

Citation (APA)
Miroshnikov, A., Kotsiopoulos, K., Franks, R., & Ravi Kannan, A. (2022). Wasserstein-based fairness interpretability framework for machine learning models. Machine Learning, 111(9), 3307–3357. https://doi.org/10.1007/s10994-022-06213-9
