STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset

Abstract

With the development of the Internet of Things, edge computing applications are paying increasing attention to privacy and real-time performance. Federated learning, a promising machine learning method that can protect user privacy, has therefore been widely studied. However, traditional synchronous federated learning methods are easily affected by stragglers, and non-independent and identically distributed (non-iid) datasets also reduce the convergence speed. In this paper, we propose an asynchronous federated learning method, STAFL, in which users can upload their updates at any time and the server immediately aggregates each update and returns the latest global model. In addition, STAFL infers each user's data distribution from its update and dynamically adjusts the aggregation parameters according to the user's network weights and staleness, minimizing the impact of non-iid datasets on asynchronous updates. Experimental results show that our method outperforms existing methods on non-iid datasets.
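The staleness-aware aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the polynomial decay function, the `base_mixing` parameter, and the function names are assumptions chosen for clarity.

```python
import numpy as np

def staleness_weight(staleness, a=0.5):
    # Illustrative polynomial decay: the older (more stale) an update,
    # the smaller its influence on the global model.
    return (1.0 + staleness) ** -a

def aggregate(global_model, client_update, staleness, base_mixing=0.6):
    # Asynchronous aggregation: the server mixes a single client's update
    # into the global model immediately, scaling the mixing coefficient
    # by the update's staleness (base_mixing is a hypothetical parameter).
    alpha = base_mixing * staleness_weight(staleness)
    return (1.0 - alpha) * global_model + alpha * client_update

# A fresh update (staleness 0) moves the global model more than a stale one.
g = np.array([1.0, 1.0])
u = np.array([3.0, 3.0])
fresh = aggregate(g, u, staleness=0)   # alpha = 0.6
stale = aggregate(g, u, staleness=10)  # alpha ≈ 0.18
```

In STAFL the mixing coefficient additionally depends on the inferred data distribution of the client, which this sketch omits.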

Citation (APA)

Zhu, F., Hao, J., Chen, Z., Zhao, Y., Chen, B., & Tan, X. (2022). STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset. Electronics (Switzerland), 11(3). https://doi.org/10.3390/electronics11030314
