Fair NLP Models with Differentially Private Text Encoders

8 Citations · 24 Readers

Abstract

Encoded text representations often capture sensitive attributes about individuals (e.g., race or gender), which raises privacy concerns and can make downstream models unfair to certain groups. In this work, we propose FEDERATE, an approach that combines ideas from differential privacy and adversarial training to learn private text representations that also induce fairer models. We empirically evaluate the trade-off between the privacy of the representations and the fairness and accuracy of the downstream model on four NLP datasets. Our results show that FEDERATE consistently improves upon previous methods, and thus suggest that privacy and fairness can positively reinforce each other.
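The paper's exact architecture and training objective are described in the full text; purely as illustration of the differential-privacy half of the idea, the sketch below shows a standard Laplace mechanism applied to a norm-clipped text encoding. The function name, the clipping threshold, and the sensitivity bookkeeping are assumptions for this example, not FEDERATE's implementation.

```python
import numpy as np

def privatize_encoding(z, epsilon=1.0, rng=None):
    """Hypothetical sketch: make a text encoding epsilon-differentially
    private via the Laplace mechanism.

    The encoding is first clipped to unit L1 norm, so any two clipped
    encodings differ by at most 2 in L1 (the mechanism's sensitivity);
    Laplace noise with scale sensitivity/epsilon then yields epsilon-DP
    for this release of the encoding.
    """
    rng = rng or np.random.default_rng(0)
    z = np.asarray(z, dtype=float)
    norm = np.linalg.norm(z, ord=1)
    if norm > 1.0:
        z = z / norm  # clip to L1 ball of radius 1
    sensitivity = 2.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=z.shape)
    return z + noise

# Example: privatize a toy 3-dimensional encoding.
z = np.array([0.5, -1.2, 0.3])
z_priv = privatize_encoding(z, epsilon=2.0)
```

In FEDERATE this kind of noisy encoder is combined with adversarial training, where an auxiliary classifier tries to recover the sensitive attribute from the representation and the encoder is trained to defeat it; that adversarial component is omitted here for brevity.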

Citation (APA)

Maheshwari, G., Denis, P., Keller, M., & Bellet, A. (2022). Fair NLP Models with Differentially Private Text Encoders. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 6942–6959). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.512
