Fairness-Aware Learning with Prejudice Free Representations

Abstract

Machine learning models are increasingly used to make decisions that have a significant impact on human lives. These models are trained on historical data that may contain information about sensitive attributes such as race, sex, or religion. The presence of such sensitive attributes can affect certain population subgroups unfairly. It is straightforward to remove sensitive features from the data; however, a model can still pick up prejudice from latent sensitive attributes that may exist in the training data. This has led to growing concern about the fairness of the models employed. In this paper, we propose a novel algorithm that effectively identifies and treats latent discriminating features. The approach is agnostic of the learning algorithm and generalizes well to both classification and regression tasks. It can also serve as a key aid in demonstrating, for regulatory compliance, that a model is free of discrimination should the need arise. The approach helps collect discrimination-free features that improve model performance while ensuring the fairness of the model. Experimental results on publicly available real-world datasets show near-ideal fairness measurements in comparison with other methods.
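
The paper's algorithm itself is not reproduced in this record. As a rough illustration of the general idea of latent discriminating features only (not the authors' method), the sketch below flags columns whose removal noticeably reduces how well the remaining features predict a sensitive attribute; the data, column indices, model choice, and threshold are all hypothetical placeholders.

    # Illustrative sketch only: flag "latent" discriminating features, i.e.
    # features that help predict a sensitive attribute even after the
    # attribute itself has been dropped. NOT the algorithm from the paper;
    # the estimator and threshold below are arbitrary assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def flag_latent_features(X, sensitive, threshold=0.05):
        """Return indices of columns whose removal most reduces how well
        the remaining features predict the sensitive attribute."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        base = cross_val_score(clf, X, sensitive, cv=5).mean()
        flagged = []
        for j in range(X.shape[1]):
            X_drop = np.delete(X, j, axis=1)  # drop candidate column j
            score = cross_val_score(clf, X_drop, sensitive, cv=5).mean()
            if base - score > threshold:      # column j carries sensitive information
                flagged.append(j)
        return flagged

Columns flagged this way could then be dropped or transformed before training the downstream model; in practice the threshold and the probe model would need to be chosen per dataset.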

Citation (APA)

Madhavan, R., & Wadhwa, M. (2020). Fairness-Aware Learning with Prejudice Free Representations. In International Conference on Information and Knowledge Management, Proceedings (pp. 2137–2140). Association for Computing Machinery. https://doi.org/10.1145/3340531.3412150
