FedDefender: Backdoor Attack Defense in Federated Learning


Abstract

Federated Learning (FL) is a privacy-preserving distributed machine learning technique that enables individual clients (e.g., user participants, edge devices, or organizations) to train a model on their local data in a secure environment and then share the trained model with an aggregator to collaboratively build a global model. In this work, we propose FedDefender, a defense mechanism against targeted poisoning (backdoor) attacks in FL that leverages differential testing. FedDefender applies differential testing to clients' models using a synthetic input. Because a synthetic input has no ground-truth label, the models' outputs (predicted labels) cannot be compared directly; FedDefender instead fingerprints the neuron activations of clients' models and compares these fingerprints to identify potentially malicious clients containing a backdoor. We evaluate FedDefender on the MNIST and FashionMNIST datasets with 20 and 30 clients, and our results demonstrate that it effectively mitigates such attacks, reducing the attack success rate (ASR) to 10% without degrading the global model's performance.
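The abstract only outlines the mechanism, so the following is a minimal illustrative sketch, not the authors' implementation: each client model is probed with the same synthetic input, the hidden-layer activations serve as that client's fingerprint, and the client whose fingerprint deviates most from the others is flagged. The model architecture, the random synthetic probe, and the deviation score used here are all assumptions for illustration.

```python
# Hypothetical sketch of activation-fingerprint differential testing
# (assumed details; not the FedDefender reference implementation).
import torch
import torch.nn as nn


def make_client_model() -> nn.Sequential:
    # Toy MNIST-sized MLP standing in for a client's locally trained model.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )


def activation_fingerprint(model: nn.Sequential, x: torch.Tensor) -> torch.Tensor:
    # Fingerprint = hidden-layer activations for the synthetic probe input.
    hidden = model[:3](x)  # Flatten -> Linear -> ReLU
    return hidden.flatten()


def flag_suspicious_client(models, synthetic_input):
    # Compare each client's fingerprint against the mean fingerprint;
    # the client deviating the most is flagged as potentially backdoored.
    fps = torch.stack([activation_fingerprint(m, synthetic_input) for m in models])
    mean_fp = fps.mean(dim=0)
    deviations = torch.norm(fps - mean_fp, dim=1)
    return int(torch.argmax(deviations)), deviations


if __name__ == "__main__":
    torch.manual_seed(0)
    clients = [make_client_model() for _ in range(20)]   # 20 client models
    synthetic = torch.rand(1, 1, 28, 28)                 # synthetic probe input
    suspect, scores = flag_suspicious_client(clients, synthetic)
    print(f"Most suspicious client: {suspect}")
```

In practice the aggregator would run such a check each round before averaging client updates, excluding or down-weighting flagged clients; the choice of layer, distance metric, and flagging threshold are design decisions not specified in the abstract.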

Citation (APA)

Gill, W., Anwar, A., & Gulzar, M. A. (2023). FedDefender: Backdoor Attack Defense in Federated Learning. In SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with: ESEC/FSE 2023 (pp. 6–9). Association for Computing Machinery, Inc. https://doi.org/10.1145/3617574.3617858
