Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were denied one. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves performance comparable to the optimal algorithm that requires centralization and non-recoverable distributions.
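
To make the recoverability point concrete, the following is a minimal, self-contained sketch (not from the paper; the data-generating process, the 0.9/0.3 selection rule, and all variable names are illustrative assumptions) of the selective-labels setting the abstract describes: repayment outcomes are recorded only for approved applicants, and because the historical approval decision depended on the outcome itself, the conditional distribution P(Y | X) estimated from the labeled subset is biased, so any fairness constraint computed from it inherits that bias.

    # Minimal sketch of outcome-dependent missingness (selective labels).
    # The data-generating process and selection rule below are illustrative
    # assumptions, not the paper's model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 100_000

    x = rng.normal(size=n)                         # applicant feature X
    y = (x + rng.normal(size=n) > 0).astype(int)   # repayment outcome Y

    # The historical lender's decision depended on information correlated
    # with Y, so whether Y is recorded depends on Y itself: an applicant is
    # labeled with probability 0.9 if Y = 1 and 0.3 if Y = 0.
    labeled = rng.random(n) < np.where(y == 1, 0.9, 0.3)

    full = LogisticRegression().fit(x.reshape(-1, 1), y)
    selective = LogisticRegression().fit(x[labeled].reshape(-1, 1), y[labeled])

    x0 = np.array([[0.0]])                         # true P(Y=1 | X=0) is 0.5
    print("full-data estimate of P(Y=1 | X=0):      ",
          full.predict_proba(x0)[0, 1].round(3))
    print("selective-labels estimate of P(Y=1 | X=0):",
          selective.predict_proba(x0)[0, 1].round(3))

On this simulated data, the full-data model recovers P(Y=1 | X=0) close to 0.5, while the model trained only on labeled applicants estimates roughly 0.9 * 0.5 / (0.9 * 0.5 + 0.3 * 0.5) = 0.75, the bias induced by the selection rule.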
CITATION
Goel, N., Amayuelas, A., Deshpande, A., & Sharma, A. (2021). The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 9A, pp. 7564–7573). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i9.16926