Efficient Differential Privacy Federated Learning Mechanism for Intelligent Selection of Optimal Privacy Protection Levels


Abstract

Differential privacy (DP) is considered an effective privacy-preserving method in federated learning for defending against privacy attacks. However, recent studies have shown that it can be exploited to mount security attacks (e.g., false data injection attacks), degrading Federated Learning (FL) performance. In this paper, we systematically study, from an adversarial perspective, poisoning attacks that exploit the DP mechanism. We demonstrate that although the DP mechanism provides a certain degree of privacy assurance, it can also serve as a vector for poisoning attacks by adversaries. As a countermeasure, we propose FedEDP, a concise and effective differential privacy federated learning (DPFL) algorithm that uses the differential privacy parameters and the training losses to intelligently generate an optimal privacy level for edge nodes (clients), defending against possible poisoning attacks. We conducted experiments on the MNIST and CIFAR10 datasets, and the results show that FedEDP significantly improves the privacy-utility trade-off over the state-of-the-art in DPFL.
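The abstract does not give FedEDP's algorithmic details, but the two ingredients it names, DP noise on client updates and a loss-driven choice of privacy level, can be illustrated with a generic sketch. The Gaussian-mechanism calibration below is the standard one (sigma = C·sqrt(2·ln(1.25/δ))/ε); the `select_epsilon` rule and its thresholds are hypothetical placeholders, not the paper's actual selection mechanism.

```python
import numpy as np

def gaussian_dp_update(update, clip_norm, epsilon, delta, rng):
    """Clip a client's model update to L2 norm `clip_norm`, then add
    Gaussian noise calibrated by the standard Gaussian mechanism:
    sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def select_epsilon(local_loss,
                   eps_levels=(0.5, 1.0, 2.0, 4.0),
                   loss_thresholds=(2.0, 1.0, 0.5)):
    """Hypothetical loss-driven privacy-level selection (illustration only):
    a higher local loss maps to a smaller epsilon, i.e. stronger noise,
    while a well-converged client is allowed a looser privacy level."""
    for thr, eps in zip(loss_thresholds, eps_levels):
        if local_loss > thr:
            return eps
    return eps_levels[-1]
```

In a round, each client would pick `epsilon = select_epsilon(loss)` and send `gaussian_dp_update(delta_weights, C, epsilon, delta, rng)` to the server; the per-client noise scale then varies with training progress rather than being fixed globally.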

Citation (APA)

Gao, M., Zuo, F., & Wang, G. (2022). Efficient Differential Privacy Federated Learning Mechanism for Intelligent Selection of Optimal Privacy Protection Levels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13579 LNCS, pp. 603–614). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20309-1_53
