Federated learning (FL) was once considered secure because clients keep their raw data locally without relying on a central server. However, the transmitted model weights or gradients still reveal private information, which can be exploited to launch various inference attacks. Moreover, FL based on deep neural networks (DNNs) is prone to the curse of dimensionality. In this paper, we propose a compressed and privacy-preserving FL scheme for DNN architectures using Compressive sensing and Adaptive local differential privacy (called CAFL). Specifically, we first compress the local models by using Compressive Sensing (CS), then adaptively perturb the remaining weights by using Local Differential Privacy (LDP), according to the centers of their variation ranges in different layers and their own offsets from the corresponding range centers, and finally reconstruct the global model almost perfectly by using the reconstruction algorithm of CS. Formal security analysis shows that our scheme achieves ε_μ-LDP security and introduces zero bias into the estimation of average weights. Extensive experiments on the MNIST and Fashion-MNIST datasets demonstrate that, with a minimum compression ratio of 0.05, our scheme reduces the number of parameters by 95%, and that, with a lower privacy budget ε_μ = 1, it improves accuracy by 80% on MNIST and 12.7% on Fashion-MNIST compared with state-of-the-art schemes.
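The client-to-server flow described above (compress with CS, perturb with adaptive LDP, aggregate, reconstruct) can be sketched as follows. This is a minimal illustration under assumed details that the abstract does not specify: a shared random Gaussian measurement matrix for the CS step, a bounded two-output LDP mechanism (in the style of Duchi et al. / LDP-FL) for the unbiased adaptive perturbation, and ISTA for sparse reconstruction. All names, parameters, and the toy data are illustrative, not the paper's actual implementation.

```python
# Hedged sketch of a CAFL-style pipeline: compress -> perturb -> average -> reconstruct.
# Assumptions (not from the abstract): Gaussian measurement matrix, LDP-FL-style bounded
# mechanism, ISTA recovery. The range center c and half-width r are computed per segment
# here for simplicity; in a full scheme they would be agreed ranges per layer.
import numpy as np

rng = np.random.default_rng(0)

def compress(weights, phi):
    """Compressive-sensing measurement: y = Phi @ w, with m << n rows."""
    return phi @ weights

def adaptive_ldp_perturb(values, eps):
    """Unbiased bounded LDP mechanism: each value in [c - r, c + r] is mapped to one of
    the two points c +/- r*k, with probabilities chosen so the expectation is the value."""
    c = (values.max() + values.min()) / 2.0               # center of the variation range
    r = max((values.max() - values.min()) / 2.0, 1e-12)   # half-width of the range
    k = (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)
    p = ((values - c) * (np.exp(eps) - 1.0) + r * (np.exp(eps) + 1.0)) \
        / (2.0 * r * (np.exp(eps) + 1.0))
    coin = rng.random(values.shape) < p
    return np.where(coin, c + r * k, c - r * k)

def ista_reconstruct(y, phi, lam=1e-3, n_iter=300):
    """Basic ISTA solver for min 0.5*||y - Phi x||^2 + lam*||x||_1 (sparse recovery)."""
    step = 1.0 / np.linalg.norm(phi, 2) ** 2              # 1 / Lipschitz constant
    x = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        z = x - step * (phi.T @ (phi @ x - y))            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

# Toy demo: 3 clients share a sparse "layer" of n weights; compression ratio m/n = 0.25.
n, m, eps = 400, 100, 1.0
true_w = np.zeros(n)
true_w[rng.choice(n, 20, replace=False)] = rng.normal(size=20)
phi = rng.normal(size=(m, n)) / np.sqrt(m)                # shared measurement matrix

client_updates = [true_w + 0.01 * rng.normal(size=n) for _ in range(3)]
reports = [adaptive_ldp_perturb(compress(w, phi), eps) for w in client_updates]

avg_y = np.mean(reports, axis=0)                          # server-side aggregation
w_hat = ista_reconstruct(avg_y, phi)                      # approximate global model
print("relative recovery error:", np.linalg.norm(w_hat - true_w) / np.linalg.norm(true_w))
```

The two-output mechanism above is unbiased by construction (its expectation equals the true value), which is the property the abstract's "zero bias" claim refers to; averaging many perturbed reports therefore concentrates around the true average before reconstruction.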
CITATION STYLE
Miao, Y., Xie, R., Li, X., Liu, X., Ma, Z., & Deng, R. H. (2022). Compressed Federated Learning Based on Adaptive Local Differential Privacy. In ACM International Conference Proceeding Series (pp. 159–170). Association for Computing Machinery. https://doi.org/10.1145/3564625.3567973