Defend Against Poisoning Attacks in Federated Learning

Abstract

The rapid development of artificial intelligence not only brings convenience to people's lives but also leads to many privacy leaks. To resolve the tension between data availability and privacy protection, Google proposed the federated learning framework. In federated learning, local clients upload model update values to the server, and the server aggregates all update values to obtain a new global model. However, malicious attackers can upload malicious update values to perform poisoning attacks, making the global model unusable or introducing a backdoor. In this paper, we deploy an AutoEncoder on the server side to compute the reconstruction error of each model update. Based on the magnitude of the reconstruction error, we remove malicious update values and retain benign ones. Finally, the server aggregates all benign update values to obtain a new global model. Experimental results show that our scheme can effectively defend against poisoning attacks.
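
The following is a minimal sketch of the server-side filtering idea described in the abstract. The AutoEncoder architecture, the threshold rule, the flattened-update format, and all names (e.g., `UpdateAutoEncoder`, `filter_and_aggregate`) are illustrative assumptions, not the paper's exact design.

```python
# Sketch: flag client updates with high AutoEncoder reconstruction error,
# then average the remaining (presumed benign) updates.
import torch
import torch.nn as nn


class UpdateAutoEncoder(nn.Module):
    """Compresses a flattened model update and reconstructs it."""
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def filter_and_aggregate(updates, autoencoder, threshold):
    """Keep updates whose reconstruction error is below `threshold`,
    then average the retained updates into a new global update."""
    with torch.no_grad():
        errors = [torch.mean((autoencoder(u) - u) ** 2).item() for u in updates]
    benign = [u for u, e in zip(updates, errors) if e < threshold]
    if not benign:          # fall back to all updates if none pass the check
        benign = updates
    return torch.stack(benign).mean(dim=0)


# Hypothetical usage: 10 clients with 100-dimensional flattened updates,
# one of which is crudely poisoned. The AutoEncoder is assumed to have been
# pre-trained on benign updates so that poisoned ones reconstruct poorly.
if __name__ == "__main__":
    dim = 100
    updates = [torch.randn(dim) * 0.01 for _ in range(10)]
    updates[0] += 5.0
    ae = UpdateAutoEncoder(dim)
    global_update = filter_and_aggregate(updates, ae, threshold=0.5)
```

The threshold here is a fixed constant purely for illustration; in practice it could be derived from the distribution of reconstruction errors observed on benign updates.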

Cite

CITATION STYLE

APA

Zhu, C., Ge, J., & Xu, Y. (2021). Defend Against Poisoning Attacks in Federated Learning. In Communications in Computer and Information Science (Vol. 1454 CCIS, pp. 239–249). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-16-7502-7_26
