A Federated Learning Framework against Data Poisoning Attacks on the Basis of the Genetic Algorithm

Abstract

Balancing information loss against training accuracy is crucial in federated learning, yet inadequate data quality degrades training accuracy. To improve training accuracy without increasing information loss, we propose a malicious-data detection model based on the genetic algorithm to resist data poisoning attacks. The model consists of three modules: (1) each participant trains on its own data in isolation and uploads the resulting accuracy to a third-party server; (2) a data-scoring formula is defined from data quantity and quality; (3) the genetic algorithm searches for the threshold that maximizes this score. Only data whose single-point accuracy exceeds the threshold participates in the cooperative training of federated learning, so participants' data is filtered before training to counter data poisoning attacks. Experiments on two datasets validate the effectiveness of the proposed model (GAFL): its training accuracy exceeds that of the baseline federated learning model by 7.45% on the Fashion-MNIST dataset and by 8.18% on the CIFAR-10 dataset.
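The threshold search described in module (3) can be sketched with a simple genetic algorithm. The scoring function below is a hypothetical stand-in (the paper's exact quantity/quality formula is not reproduced here), and all function and parameter names are illustrative assumptions, not the authors' implementation:

```python
import random

def score(threshold, accuracies, quantities):
    """Hypothetical scoring formula: reward admitting high-accuracy data,
    weighted by the amount of data admitted (assumption, not the paper's)."""
    admitted = [(a, q) for a, q in zip(accuracies, quantities) if a >= threshold]
    if not admitted:
        return 0.0
    total_q = sum(q for _, q in admitted)
    mean_acc = sum(a * q for a, q in admitted) / total_q
    # Squaring the mean accuracy penalizes admitting low-quality data.
    return mean_acc ** 2 * total_q

def ga_threshold(accuracies, quantities, pop_size=20, generations=50,
                 mutation_rate=0.2, seed=0):
    """Genetic-algorithm search for the admission threshold in [0, 1]
    that maximizes the data score."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # random initial thresholds
    for _ in range(generations):
        pop.sort(key=lambda t: score(t, accuracies, quantities), reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) / 2                    # arithmetic crossover
            if rng.random() < mutation_rate:       # Gaussian mutation, clamped
                child = min(1.0, max(0.0, child + rng.gauss(0, 0.05)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda t: score(t, accuracies, quantities))
```

In this sketch, `accuracies` would hold the single-point training accuracies uploaded by participants, and data whose accuracy falls below the returned threshold would be excluded from cooperative training.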

Citation (APA)
Zhai, R., Chen, X., Pei, L., & Ma, Z. (2023). A Federated Learning Framework against Data Poisoning Attacks on the Basis of the Genetic Algorithm. Electronics (Switzerland), 12(3). https://doi.org/10.3390/electronics12030560
