Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks

Abstract

In this work, beyond improving prediction accuracy, we study whether personalization can also bring robustness benefits against backdoor attacks. We conduct the first study of backdoor attacks in the personalized federated learning (pFL) framework, testing 4 widely used backdoor attacks against 6 pFL methods on the benchmark datasets FEMNIST and CIFAR-10, a total of 600 experiments. The study shows that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks, whereas pFL methods with full model-sharing show no such robustness. To analyze the reasons for these differing robustness performances, we provide comprehensive ablation studies on the pFL methods. Based on our findings, we further propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks. We believe our work offers both guidance for applying pFL with robustness in mind and valuable insights for designing more robust FL methods in the future. We open-source our code to establish the first benchmark for black-box backdoor attacks in pFL: https://github.com/alibaba/FederatedScope/tree/backdoor-bench.
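
The abstract names Simple-Tuning without detailing its mechanism; in the paper it amounts to re-initializing the linear classifier head after federated training and re-training only that layer on the client's local data, with the feature extractor frozen. The PyTorch sketch below illustrates that idea under those assumptions; the function name simple_tuning, the head_name argument, and the hyperparameters are illustrative choices, not the authors' reference implementation (see the linked repository for that).

import torch
import torch.nn as nn

def simple_tuning(model, head_name, local_loader, epochs=5, lr=0.01):
    """Re-initialize the linear classifier head and fine-tune only that
    layer on the client's local data, keeping the rest of the model frozen."""
    # Freeze every parameter except those belonging to the classifier head.
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(head_name + ".")

    # Discard the (potentially backdoored) head weights by re-initializing.
    head = dict(model.named_modules())[head_name]
    nn.init.xavier_uniform_(head.weight)
    if head.bias is not None:
        nn.init.zeros_(head.bias)

    # Re-train the head alone on local data.
    optimizer = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in local_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model

For example, a client holding a network whose classifier attribute is named fc would call simple_tuning(model, "fc", local_loader) after receiving the final global model. The intuition is that a backdoor planted in the original classifier is discarded when the head is re-learned from clean local data, while the shared representation is kept.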

Citation (APA)

Qin, Z., Yao, L., Chen, D., Li, Y., Ding, B., & Cheng, M. (2023). Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 4743–4755). Association for Computing Machinery. https://doi.org/10.1145/3580305.3599898
