Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization

37 citations · 30 Mendeley readers

Abstract

Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality alone does not provide sufficient privacy protection, and it is desirable to equip FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms introduce random noise with magnitude proportional to the model size, which can be quite large in deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification increases the number of communication rounds required to reach a target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost. We rigorously analyze the convergence of our approach and utilize Rényi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
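The core per-agent step described in the abstract, clipping a gradient, randomly sparsifying it, and perturbing only the retained coordinates with Gaussian noise, can be sketched as follows. This is a simplified illustration, not the authors' exact algorithm; the function name, the uniform random-k sparsifier, and all parameter values are assumptions for the example.

```python
import numpy as np

def sparsified_private_gradient(grad, k, clip_norm, noise_std, rng):
    """Hypothetical sketch: clip the gradient, keep k randomly chosen
    coordinates, and add Gaussian noise only to those coordinates.
    Noise scales with k rather than the full model size."""
    # Clip to bound the sensitivity of each agent's contribution.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Random-k sparsification: retain k coordinates chosen uniformly.
    idx = rng.choice(grad.size, size=k, replace=False)
    out = np.zeros_like(clipped)
    # Perturb only the retained coordinates (gradient perturbation).
    out[idx] = clipped[idx] + rng.normal(0.0, noise_std, size=k)
    return out

# Example: a 1000-dimensional gradient, keeping 10% of coordinates.
rng = np.random.default_rng(0)
g = rng.normal(size=1000)
priv = sparsified_private_gradient(g, k=100, clip_norm=1.0,
                                   noise_std=0.1, rng=rng)
```

Because only `k` of the coordinates carry noise (and the rest are zero and need not be transmitted), the sketch reflects both effects the paper exploits: less total injected noise and cheaper communication per round.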

Citation (APA)

Hu, R., Gong, Y., & Guo, Y. (2021). Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1463–1469). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/202
