Constraining deep representations with a noise module for fair classification

Abstract

The recent surge of interest in Deep Learning, motivated by its exceptional performance on longstanding problems, has made Neural Networks a very appealing tool for many actors in our society. One issue with this shift is that Neural Networks are opaque objects whose predictions are often hard to make sense of. In this context, research efforts have focused on building fair representations of data, i.e. representations that display little to no correlation with a sensitive attribute s. In this paper we build on a domain adaptation neural model by augmenting it with a "noise conditioning" mechanism which we show is instrumental in obtaining fair (i.e. uncorrelated with s) representations. We report experiments on standard datasets showing the effectiveness of the noise conditioning mechanism in helping the network ignore the sensitive attribute.
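To illustrate the general idea, the sketch below shows a generic adversarial ("domain adaptation"-style) fair representation learner in PyTorch: an encoder whose representation is perturbed by additive Gaussian noise, a task head that predicts the label, and an adversarial head that tries to recover the sensitive attribute s through a gradient reversal layer. All names (GradReverse, NoisyFairEncoder, noise_std), dimensions, and the exact placement of the noise are assumptions made for illustration only; they are not taken from the paper, whose noise conditioning mechanism may differ.

```python
# Illustrative sketch only: a generic adversarial fair-representation learner with
# additive Gaussian noise on the representation. Architecture details are assumed,
# not taken from the paper.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, reverses (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class NoisyFairEncoder(nn.Module):
    """Encoder that perturbs its representation with Gaussian noise (assumed form of the "noise module")."""

    def __init__(self, in_dim, rep_dim, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
        self.noise_std = noise_std

    def forward(self, x):
        z = self.encoder(x)
        if self.training:
            z = z + self.noise_std * torch.randn_like(z)  # additive noise during training
        return z


class FairClassifier(nn.Module):
    def __init__(self, in_dim, rep_dim=32, lambd=1.0):
        super().__init__()
        self.enc = NoisyFairEncoder(in_dim, rep_dim)
        self.task_head = nn.Linear(rep_dim, 2)  # predicts the target label y
        self.adv_head = nn.Linear(rep_dim, 2)   # tries to predict the sensitive attribute s
        self.lambd = lambd

    def forward(self, x):
        z = self.enc(x)
        y_logits = self.task_head(z)
        # Gradient reversal pushes the encoder toward representations uninformative about s.
        s_logits = self.adv_head(GradReverse.apply(z, self.lambd))
        return y_logits, s_logits


# Minimal usage example on random data.
if __name__ == "__main__":
    model = FairClassifier(in_dim=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    x = torch.randn(64, 10)
    y = torch.randint(0, 2, (64,))
    s = torch.randint(0, 2, (64,))
    y_logits, s_logits = model(x)
    loss = ce(y_logits, y) + ce(s_logits, s)
    loss.backward()
    opt.step()
```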

Citation (APA)

Cerrato, M., Esposito, R., & Puma, L. L. (2020). Constraining deep representations with a noise module for fair classification. In Proceedings of the ACM Symposium on Applied Computing (pp. 470–472). Association for Computing Machinery. https://doi.org/10.1145/3341105.3374090
