Data sharing in the European Union (EU) has gained new momentum, not least for machine learning (ML) and artificial intelligence (AI) training purposes. By enabling model training while preserving data privacy, Privacy Enhancing Technologies (PETs) have gained popularity, especially among policymakers. So far, computer science research has focused on advancing state-of-the-art privacy engineering and on exploring trade-offs between privacy and accuracy, while legal scholarship has begun investigating the challenges arising therefrom. Yet few works have delved into the fairness implications of PETs. Further research is essential both to prevent the propagation of bias and discrimination and to limit the accumulation of market power within a very few economic entities, which could undermine fair competition and consumer rights. In our work, we address this knowledge gap from a combined legal and computer science point of view. After scoping our understanding of the possible unfair sides of PETs based on technical and socio-legal understandings of fairness (Section 2), we provide an overview of the PETs most relevant for ML and AI training (Section 3). We then discuss fairness-related challenges arising from their use (Section 4) and suggest possible technical and regulatory (e.g., impact assessments, new rights) solutions to address the shortcomings identified (Section 5). We finally provide conclusions and ideas for future research (Section 6).
Citation:
Calvi, A., Malgieri, G., & Kotzinos, D. (2024). The unfair side of Privacy Enhancing Technologies: addressing the trade-offs between PETs and fairness. In 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 (pp. 2047–2059). Association for Computing Machinery, Inc. https://doi.org/10.1145/3630106.3659024