Achieving non-discrimination in prediction


Abstract

In discrimination-aware classification, pre-process methods construct a discrimination-free classifier by first removing discrimination from the training data and then learning the classifier from the cleaned data. However, they lack a theoretical guarantee against potential discrimination when the classifier is deployed for prediction. In this paper, we fill this gap by mathematically bounding the discrimination in prediction. We adopt the causal model to describe the data generation mechanism and to formally define discrimination in the population, in a dataset, and in prediction. We obtain two important theoretical results: (1) discrimination in prediction can persist even if discrimination in the training data is completely removed; and (2) not all pre-process methods ensure non-discrimination in prediction even though they achieve non-discrimination in the modified training data. Based on these results, we develop a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee. The experiments demonstrate the theoretical results and show the effectiveness of our two-phase framework.
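The first theoretical result can be illustrated with a small sketch (our own toy example, not the paper's method or data): we measure discrimination via the risk difference RD = P(Y=1 | S=1) − P(Y=1 | S=0). On a synthetic training set the labels are balanced across the two groups, so the data itself shows zero discrimination, yet a simple threshold classifier on a feature that is distributed differently across groups still discriminates in its predictions.

```python
# Toy illustration (assumed names: rd, predict): discrimination-free
# training labels do not imply discrimination-free predictions.

def rd(samples):
    """Risk difference over (s, y) pairs: P(y=1 | s=1) - P(y=1 | s=0)."""
    def positive_rate(group):
        outcomes = [y for s, y in samples if s == group]
        return sum(outcomes) / len(outcomes)
    return positive_rate(1) - positive_rate(0)

# Synthetic records (s, x, y): sensitive attribute s, feature x, label y.
# Labels are balanced across groups, so the training data has RD = 0 ...
data = [
    (1, 0.9, 1), (1, 0.6, 0),
    (0, 0.4, 1), (0, 0.1, 0),
]
print(rd([(s, y) for s, x, y in data]))  # 0.0: data looks fair

# ... but x is shifted between groups, so a classifier thresholding x
# favors group s=1 in every prediction.
predict = lambda x: int(x > 0.5)
print(rd([(s, predict(x)) for s, x, y in data]))  # 1.0: maximal RD
```

This is exactly the gap the paper's bound addresses: removing discrimination from the training labels says nothing by itself about the classifier's behavior at prediction time.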

APA

Zhang, L., Wu, Y., & Wu, X. (2018). Achieving non-discrimination in prediction. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 3097–3103). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/430
