Defending Against Poisoning Attacks in Federated Learning with Blockchain

Abstract

In the era of deep learning, federated learning (FL) presents a promising approach that allows multi-institutional data owners, or clients, to collaboratively train machine learning models without compromising data privacy. However, most existing FL approaches rely on a centralized server for global model aggregation, creating a single point of failure and leaving the system vulnerable to attacks by dishonest clients. In this work, we address this problem by proposing a secure and reliable FL system based on blockchain and distributed ledger technology. Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, both powered by on-chain smart contracts, to detect and deter malicious behavior. We present theoretical and empirical analyses demonstrating that the proposed framework is robust against malicious client-side behavior.
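To make the voting and reward-and-slash idea concrete, here is a minimal Python sketch of how peers might vote on a submitted model update and how stakes could then be adjusted. All names, stake amounts, and the majority rule are illustrative assumptions, not the authors' on-chain implementation.

```python
from dataclasses import dataclass

@dataclass
class Client:
    """A participant with a stake that can be rewarded or slashed.

    Hypothetical structure for illustration; the paper's actual
    smart-contract state is not reproduced here.
    """
    name: str
    stake: float = 100.0

def tally_votes(votes: dict) -> bool:
    """Accept the update if a strict majority of peers approve it."""
    return sum(votes.values()) > len(votes) / 2

def settle(submitter: Client, voters: list, votes: dict,
           reward: float = 10.0, slash: float = 10.0) -> bool:
    """Apply an assumed reward-and-slash rule after peer voting.

    The submitter is rewarded if the update is accepted and slashed
    otherwise; each voter is rewarded for siding with the majority
    outcome and slashed for dissenting.
    """
    accepted = tally_votes(votes)
    submitter.stake += reward if accepted else -slash
    for v in voters:
        if votes[v.name] == accepted:
            v.stake += reward
        else:
            v.stake -= slash
    return accepted
```

For example, with one submitter and three voters of whom two approve, the update is accepted, the submitter and the two approving voters gain stake, and the dissenting voter is slashed. An on-chain contract would additionally verify identities and persist the stakes, which this sketch omits.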

Citation (APA)

Dong, N., Wang, Z., Sun, J., Kampffmeyer, M., Knottenbelt, W., & Xing, E. (2024). Defending Against Poisoning Attacks in Federated Learning with Blockchain. IEEE Transactions on Artificial Intelligence, 5(7), 3743–3756. https://doi.org/10.1109/TAI.2024.3376651
