Trade-Offs Between Fairness and Privacy in Language Modeling

Abstract

Protecting privacy in contemporary NLP models is gaining in importance. So is the need to mitigate the social biases of such models. But can we have both at the same time? Existing research suggests that privacy preservation comes at the price of worsening biases in classification tasks. In this paper, we explore the extent to which this trade-off really holds when we incorporate both privacy preservation and de-biasing techniques into training text generation models. How does improving the model along one dimension affect the other dimension as well as the utility of the model? We conduct an extensive set of experiments that include bias detection, privacy attacks, language modeling, and performance on downstream tasks.
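The abstract does not name a specific privacy-preservation mechanism; a common choice in this line of work is differentially private training (DP-SGD). The sketch below is a minimal, self-contained illustration of that technique in PyTorch under assumed toy settings (a linear stand-in model, random data, placeholder hyperparameters); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: a linear classifier and a random batch of 8 examples.
model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

max_grad_norm = 1.0      # per-example clipping bound C
noise_multiplier = 1.0   # sigma; noise std = sigma * C

x = torch.randn(8, 16)
y = torch.randint(0, 2, (8,))

# Accumulate per-example gradients, each clipped to norm <= C.
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    clip_coef = (max_grad_norm / (total_norm + 1e-6)).clamp(max=1.0)
    for acc, g in zip(summed, grads):
        acc.add_(g * clip_coef)

# Add Gaussian noise to the clipped sum, average, and take one step.
for p, acc in zip(model.parameters(), summed):
    noise = torch.randn_like(acc) * noise_multiplier * max_grad_norm
    p.grad = (acc + noise) / len(x)
optimizer.step()
```

The clipping bound and noise multiplier shown here are placeholders; in practice they are tuned jointly with the privacy budget, which is exactly the kind of choice whose fairness and utility side effects the paper studies.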

Citation (APA)

Matzken, C., Eger, S., & Habernal, I. (2023). Trade-Offs Between Fairness and Privacy in Language Modeling. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 6948–6969). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.434
