Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?


Abstract

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML). Specifically, we study whether a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and (α,f)-Byzantine resilience. To the best of our knowledge, this is the first work to tackle this problem from a theoretical point of view. A key finding of our analysis is that the classical approaches to these two (seemingly) orthogonal issues are incompatible. More precisely, we show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably on the number of parameters of the ML model, making the training of large models practically infeasible. We validate our theoretical results through numerical experiments on publicly available datasets, showing that it is impractical to ensure DP and Byzantine resilience simultaneously.
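To make the "direct composition" the abstract refers to concrete, here is a minimal NumPy sketch (not the authors' code): each honest worker privatizes its gradient locally with the Gaussian mechanism, and the server then applies a robust aggregation rule. Coordinate-wise median stands in for the (α,f)-Byzantine-resilient rules the paper analyzes; the noise scale, clipping norm, and worker counts are purely illustrative.

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm=1.0, sigma=1.0, rng=None):
    """Gaussian-mechanism DP step: clip the gradient to clip_norm,
    then add N(0, (sigma * clip_norm)^2 I) noise in every coordinate."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def robust_aggregate(gradients):
    """Byzantine-resilient aggregation: coordinate-wise median of the
    workers' gradient vectors (a simple stand-in for rules such as Krum)."""
    return np.median(np.stack(gradients), axis=0)

# Direct composition: honest workers send privatized gradients, Byzantine
# workers send arbitrary vectors, the server aggregates robustly. The DP
# noise variance grows with the model dimension d, which is the source of
# the unfavourable dependence on model size shown in the paper.
d, n_workers, n_byzantine = 10_000, 10, 3
rng = np.random.default_rng(0)
true_grad = rng.normal(size=d)

honest = [dp_noisy_gradient(true_grad + 0.1 * rng.normal(size=d), rng=rng)
          for _ in range(n_workers - n_byzantine)]
byzantine = [rng.normal(scale=100.0, size=d) for _ in range(n_byzantine)]

agg = robust_aggregate(honest + byzantine)
print("relative error:", np.linalg.norm(agg - true_grad) / np.linalg.norm(true_grad))
```

Running this for increasing d (with sigma fixed to keep the same DP guarantee) illustrates the paper's point: the aggregate's error relative to the true gradient grows with the number of parameters, degrading the resilience guarantee for large models.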

Citation (APA)

Guerraoui, R., Gupta, N., Pinot, R., Rouault, S., & Stephan, J. (2021). Differential Privacy and Byzantine Resilience in SGD: Do They Add Up? In Proceedings of the Annual ACM Symposium on Principles of Distributed Computing (pp. 391–401). Association for Computing Machinery. https://doi.org/10.1145/3465084.3467919
