AFLGuard: Byzantine-robust Asynchronous Federated Learning

Abstract

Federated learning (FL) is an emerging machine learning paradigm in which clients jointly learn a model with the help of a cloud server. A fundamental challenge of FL is that clients are often heterogeneous, e.g., they differ in computing power, and thus may send model updates to the server with substantially different delays. Asynchronous FL addresses this challenge by enabling the server to update the model as soon as any client's model update reaches it, without waiting for the other clients. However, like synchronous FL, asynchronous FL is vulnerable to poisoning attacks, in which malicious clients manipulate the model by poisoning their local data and/or the model updates they send to the server. Byzantine-robust FL aims to defend against such attacks: it can learn an accurate model even if some clients are malicious and exhibit Byzantine behavior. However, most existing studies on Byzantine-robust FL have focused on synchronous FL, leaving asynchronous FL largely unexplored. In this work, we bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL method. We show, both theoretically and empirically, that AFLGuard is robust against various existing and adaptive poisoning attacks, both untargeted and targeted. Moreover, AFLGuard outperforms existing Byzantine-robust asynchronous FL methods.
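
The abstract describes two key mechanisms: an asynchronous update rule (the server applies each client's model update as soon as it arrives) and a robustness filter that rejects suspicious updates. Below is a minimal Python sketch of such a server loop. The acceptance test (comparing each incoming update against a reference update computed on a small server-side trusted dataset), the function names (async_fl_server, server_update, compute_trusted_update), and the threshold parameter tol are illustrative assumptions, not the paper's exact AFLGuard algorithm.

    import numpy as np

    def server_update(update, trusted_update, model, lr=0.1, tol=1.0):
        # Hypothetical acceptance test: reject an update whose deviation from
        # the trusted reference update is large relative to the reference's
        # own magnitude (a stand-in for the paper's actual criterion).
        if np.linalg.norm(update - trusted_update) > tol * np.linalg.norm(trusted_update):
            return model  # suspected Byzantine update: discard it
        return model - lr * update  # accept: apply immediately, no waiting

    def async_fl_server(model, incoming_updates, compute_trusted_update):
        # Unlike synchronous FL, the server handles each update on arrival
        # instead of aggregating a full round of client updates.
        for update in incoming_updates:  # e.g., a queue fed by client messages
            trusted = compute_trusted_update(model)  # reference gradient on trusted data
            model = server_update(update, trusted, model)
        return model

In a real deployment, incoming_updates would be a message queue and compute_trusted_update would run a gradient step on the server's trusted data; the sketch only illustrates why the server never blocks on slow clients.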

Citation (APA)

Fang, M., Liu, J., Gong, N. Z., & Bentley, E. S. (2022). AFLGuard: Byzantine-robust Asynchronous Federated Learning. In ACM International Conference Proceeding Series (pp. 632–646). Association for Computing Machinery. https://doi.org/10.1145/3564625.3567991

Readers' Seniority

PhD / Postgrad / Masters / Doc: 3 (75%)
Lecturer / Post doc: 1 (25%)

Readers' Discipline

Computer Science: 3 (75%)
Engineering: 1 (25%)
