Differentially Private and Communication Efficient Collaborative Learning

Abstract

Collaborative learning has received significant interest due to its ability to exploit the collective computing power of wireless edge devices. However, during the learning process, model updates computed on local private samples and large-scale parameter exchanges among agents raise severe privacy concerns and create a communication bottleneck. In this paper, to address these problems, we propose two differentially private (DP) and communication-efficient algorithms, called Q-DPSGD-1 and Q-DPSGD-2. In Q-DPSGD-1, each agent first performs a local model update via a DP gradient descent method to provide the DP guarantee, and then quantizes the local model before transmitting it to its neighbors to improve communication efficiency. In Q-DPSGD-2, each agent first quantizes the local model and then injects discrete Gaussian noise to enforce the DP guarantee. Moreover, we track the privacy loss of both approaches under Rényi DP and provide convergence analyses for both convex and non-convex loss functions. The proposed methods are evaluated in extensive experiments on real-world datasets, and the empirical results validate our theoretical findings.
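
To make the two update orders concrete, below is a minimal NumPy sketch of the per-agent local step in each algorithm. The clipping threshold C, the noise scales, the uniform stochastic quantizer, and all function names are illustrative assumptions rather than the paper's exact construction; the Rényi DP accounting and the consensus averaging with neighbors are omitted.

```python
# Minimal sketch of the two local-update orders described in the abstract.
# All names, constants, and the quantizer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def clip(g, C):
    """Clip a gradient to L2 norm at most C (standard DP-SGD clipping)."""
    norm = np.linalg.norm(g)
    return g if norm == 0 else g * min(1.0, C / norm)

def stochastic_quantize(x, num_levels=16):
    """Unbiased stochastic quantizer onto a uniform grid over [min(x), max(x)]."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (num_levels - 1) or 1.0  # guard the constant case
    t = (x - lo) / scale                         # position in grid units
    low = np.floor(t)
    q = low + (rng.random(x.shape) < t - low)    # round up w.p. (t - low)
    return lo + q * scale

def q_dpsgd_1_step(w, grad, C=1.0, sigma=0.5, lr=0.1):
    """Q-DPSGD-1 order: DP gradient step first, then quantize the model."""
    g = clip(grad, C) + rng.normal(0.0, sigma * C, size=grad.shape)
    return stochastic_quantize(w - lr * g)

def q_dpsgd_2_step(w, grad, lr=0.1, dg_sigma=1.0):
    """Q-DPSGD-2 order: quantize the model first, then add discrete noise.

    The discrete Gaussian is approximated here by rounding a continuous
    Gaussian sample; the paper uses a proper discrete Gaussian mechanism.
    """
    q = stochastic_quantize(w - lr * grad)
    return q + np.round(rng.normal(0.0, dg_sigma, size=w.shape))

# Toy usage: the returned (quantized, privatized) model is what an agent
# would transmit to its neighbors.
w, g = rng.normal(size=5), rng.normal(size=5)
print(q_dpsgd_1_step(w, g))
print(q_dpsgd_2_step(w, g))
```

Unbiased stochastic rounding keeps the quantized model correct in expectation, which is the standard property that convergence analyses of quantized distributed methods rely on.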

Citation (APA)

Ding, J., Liang, G., Bi, J., & Pan, M. (2021). Differentially Private and Communication Efficient Collaborative Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8B, pp. 7219–7227). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16887
