GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems

Citations: 8
Mendeley readers: 24

Abstract

Federated learning (FL) is an emerging machine learning technique in which models are trained in a decentralized manner. The main advantage of this approach is the data privacy it provides, because the raw data are never processed on a centralized server. Instead, the local client models are aggregated on a server, resulting in a global model that has accumulated knowledge from all the different clients. This approach, however, is vulnerable to attacks, because clients can be malicious or malicious actors may interfere within the network. In the first case, such attacks take the form of data or model poisoning, where the data or the model parameters, respectively, are altered. In this paper, we investigate data poisoning attacks and, more specifically, the label-flipping case within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. These attacks are based on synthetic images generated by a generative adversarial network (GAN), which the malicious clients train jointly on a concatenated malicious dataset. Due to the limited number of available samples, the architecture and training procedure of the GAN are adjusted accordingly. Through experiments, we demonstrate that these attacks both achieve their objective and remain stealthy, fooling common federated defenses. We also propose a mechanism to mitigate these attacks based on clean-label training on the server side. In more detail, the model degradation attack reduces accuracy by up to 25%, while common defenses can only alleviate this by ∼5%. Similarly, the targeted label attack results in a misclassification rate of 56%, compared to 2.5% when no attack takes place. Our proposed defense mechanism, in contrast, is able to mitigate both attacks.
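To make the attack and defense mechanics concrete, the sketch below simulates one FedAvg round in which a malicious client flips labels before local training, followed by the kind of server-side clean-label fine-tuning step the abstract describes. This is a minimal PyTorch illustration under assumed names (flip_labels, local_update, fed_avg, server_clean_label_step, SOURCE_CLASS, TARGET_CLASS), not the authors' implementation; in the paper, the poisoned samples are GAN-generated images rather than relabeled real ones.

```python
# Minimal sketch of a label-flipping poisoning attack in a FedAvg round,
# plus server-side clean-label fine-tuning as a defense.
# All names are illustrative assumptions, not the paper's code; the paper
# poisons with GAN-generated images rather than relabeled real samples.
import copy
import torch
import torch.nn as nn

SOURCE_CLASS = 3  # assumed: the class the attacker targets
TARGET_CLASS = 8  # assumed: the label the attacker assigns to it

def flip_labels(labels: torch.Tensor) -> torch.Tensor:
    """Targeted label attack: relabel SOURCE_CLASS samples as TARGET_CLASS."""
    poisoned = labels.clone()
    poisoned[labels == SOURCE_CLASS] = TARGET_CLASS
    return poisoned

def local_update(global_model, loader, malicious, lr=0.01, epochs=1):
    """One client's local pass; a malicious client trains on flipped labels."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            if malicious:
                y = flip_labels(y)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(global_model, client_states):
    """Plain FedAvg: parameter-wise average of the client weights."""
    merged = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])
        merged[key] = stacked.mean(dim=0).to(client_states[0][key].dtype)
    global_model.load_state_dict(merged)
    return global_model

def server_clean_label_step(global_model, clean_loader, lr=1e-3, epochs=1):
    """Defense sketch: after aggregation, briefly fine-tune the global model
    on a small, trusted, correctly labeled dataset held by the server."""
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    global_model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            opt.zero_grad()
            loss_fn(global_model(x), y).backward()
            opt.step()
    return global_model
```

In this simplified setting, a round consists of collecting local_update results from honest and malicious clients, merging them with fed_avg, and optionally applying server_clean_label_step before broadcasting the next global model; because the poisoned update is an ordinary gradient-based update, it need not stand out as a statistical outlier to aggregation-level defenses.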



Cite (APA)

Psychogyios, K., Velivassaki, T. H., Bourou, S., Voulkidis, A., Skias, D., & Zahariadis, T. (2023). GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems. Electronics (Switzerland), 12(8). https://doi.org/10.3390/electronics12081805

Readers' Seniority

PhD / Postgrad / Masters / Doc: 8 (80%)
Lecturer / Post doc: 1 (10%)
Researcher: 1 (10%)

Readers' Discipline

Engineering: 6 (60%)
Computer Science: 4 (40%)

Article Metrics

Blog Mentions: 1
