Efficient and Less Centralized Federated Learning


Abstract

With the rapid growth of mobile computing, massive amounts of data and computing resources are now located at the edge. In response, federated learning (FL) has become a widely adopted distributed machine learning (ML) paradigm that aims to harness this expanding body of skewed, locally held data to build rich and informative models. In centralized FL, a collection of devices collaboratively solves an ML task under the coordination of a central server. However, existing FL frameworks make an oversimplified assumption about network connectivity and ignore the communication bandwidth of the different links in the network. In this paper, we present and study a novel FL algorithm in which devices mostly collaborate with other devices in a pairwise manner. Our nonparametric approach is able to exploit network topology to reduce communication bottlenecks. We evaluate our approach on various FL benchmarks and demonstrate that our method achieves 10× better communication efficiency and roughly an 8% increase in accuracy compared to the centralized approach.
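The pairwise collaboration described in the abstract can be illustrated with a gossip-style averaging round. The sketch below is a hypothetical simplification, not the paper's actual nonparametric, topology-aware method: it pairs devices uniformly at random and averages flattened parameter vectors, and all names (e.g., pairwise_round) are our own.

```python
# Minimal sketch of one round of pairwise (gossip-style) model averaging.
# Assumption: each device's model is a flattened 1-D numpy parameter vector;
# the paper's topology-aware pairing rule is more involved than this.
import random
import numpy as np

def pairwise_round(models, rng=random):
    """Pair devices at random and average each pair's parameters in place."""
    order = list(range(len(models)))
    rng.shuffle(order)
    # Walk the shuffled indices two at a time; an odd device out sits this round.
    for i, j in zip(order[::2], order[1::2]):
        avg = (models[i] + models[j]) / 2.0
        models[i], models[j] = avg.copy(), avg
    return models

# Toy usage: four devices with three-parameter models.
devices = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.0, 0.0, 1.0]),
           np.array([1.0, 1.0, 1.0])]
for _ in range(10):
    pairwise_round(devices)
print(devices[0])  # every device approaches the global mean [0.5, 0.5, 0.5]
```

Because each pairwise exchange preserves the sum of the two devices' parameters, the global average is invariant across rounds, and repeated random pairing drives every device toward it without any central server.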

Cite (APA)

Chou, L., Liu, Z., Wang, Z., & Shrivastava, A. (2021). Efficient and Less Centralized Federated Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12975 LNAI, pp. 772–787). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-86486-6_47
