SPoiL: Sybil-Based Untargeted Data Poisoning Attacks in Federated Learning


Abstract

Federated learning is widely used in mobile computing, the Internet of Things, and other scenarios due to its distributed and privacy-preserving nature. It allows mobile devices to train machine learning models collaboratively without sharing their local private data. However, during the model aggregation phase, federated learning is vulnerable to poisoning attacks carried out by malicious users. Furthermore, due to the heterogeneity of network status, communication conditions, hardware, and other factors, users are at high risk of going offline, which allows attackers to forge virtual participants and amplify the damage of poisoning. Unlike existing work, we focus on the more general case of untargeted poisoning attacks. In this paper, we propose novel Sybil-based untargeted data poisoning attacks in federated learning (SPoiL), in which malicious users corrupt the performance of the global model by modifying the training data and increase the probability of poisoning by virtualizing several Sybil nodes. Finally, we validate the superiority of our attack approach through experiments on commonly used datasets.
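The mechanism the abstract describes can be illustrated with a minimal sketch (not the paper's actual code): a malicious client performs untargeted poisoning by flipping the labels of its local data, then amplifies its influence on plain federated averaging by submitting the same poisoned update through several fake Sybil identities. All names, model choices, and parameters below are illustrative assumptions.

```python
# Hypothetical sketch: label-flipping poisoning amplified by Sybil
# clients under plain federated averaging (FedAvg-style aggregation).
# Not the paper's implementation; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_local = 5, 10, 50

def local_update(w, X, y, lr=0.1):
    # One gradient-descent step of logistic regression on local data.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) / len(y)
    return w - lr * grad

def train(client_data, rounds=100):
    # Server averages the clients' locally updated models each round.
    w = np.zeros(d)
    for _ in range(rounds):
        w = np.mean([local_update(w, X, y) for X, y in client_data], axis=0)
    return w

# Synthetic linearly separable data split across honest clients.
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(n_local, d))
    clients.append((X, (X @ w_true > 0).astype(float)))

# Untargeted poisoning: one client flips all of its labels, then
# re-submits the same poisoned update via n_sybil fake identities.
Xm, ym = clients[0]
poisoned = (Xm, 1.0 - ym)
n_sybil = 8
attacked = clients[1:] + [poisoned] * (1 + n_sybil)

# Evaluate both global models on clean held-out data.
Xt = rng.normal(size=(500, d))
yt = (Xt @ w_true > 0).astype(float)
acc_clean = np.mean((Xt @ train(clients) > 0).astype(float) == yt)
acc_poisoned = np.mean((Xt @ train(attacked) > 0).astype(float) == yt)
print(acc_clean, acc_poisoned)
```

With the Sybil copies outweighing what a single attacker could submit alone, the poisoned updates pull the averaged model away from the honest one, degrading clean-test accuracy; without the extra identities, one flipped client among ten is largely averaged out.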

Citation (APA)

Lian, Z., Zhang, C., Nan, K., & Su, C. (2023). SPoiL: Sybil-Based Untargeted Data Poisoning Attacks in Federated Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13983 LNCS, pp. 235–248). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-39828-5_13
