NestFL: Efficient federated learning through progressive model pruning in heterogeneous edge computing

Abstract

In this paper, we present NestFL, an efficient federated learning (FL) framework for edge computing that jointly improves training efficiency and achieves personalization. Specifically, NestFL takes the runtime resources of edge devices into consideration and assigns each device a sparse-structured subnetwork by progressively performing structured pruning. During training, only the updates of these subnetworks are transmitted to the central server. Additionally, the generated subnetworks adopt a structure- and parameter-sharing mechanism, so that they are nested inside a multi-capacity global model. In doing so, the overall communication and computation costs are significantly reduced, and each device learns a personalized model without introducing extra parameters. Furthermore, a weighted aggregation mechanism is designed to improve training performance and maximally preserve personalization.
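The abstract does not include implementation details, but the core idea of nested, parameter-sharing subnetworks with weighted aggregation can be sketched as follows. Everything in this snippet is an illustrative assumption: the prefix-channel slicing, the device capacities, the data-size weights, and the simulated local updates stand in for NestFL's actual pruning criterion and training procedure, which the abstract does not specify.

import numpy as np

rng = np.random.default_rng(0)

# Toy global model: one dense layer with 8 output channels.
global_w = rng.normal(size=(8, 4))

def subnetwork(w, keep_ratio):
    """Return a nested subnetwork keeping the first output channels.

    Keeping a prefix of channels is one simple way to realize structured,
    nested parameter sharing; it is not the paper's actual pruning rule.
    """
    k = max(1, int(round(keep_ratio * w.shape[0])))
    return w[:k].copy()

# Devices with heterogeneous capacities and data sizes (assumed values).
devices = [
    {"keep": 0.25, "n_samples": 50},
    {"keep": 0.50, "n_samples": 120},
    {"keep": 1.00, "n_samples": 30},
]

# One round: each device "trains" its slice and sends only that update.
updates = []
for d in devices:
    w_sub = subnetwork(global_w, d["keep"])
    delta = 0.01 * rng.normal(size=w_sub.shape)  # stand-in for a local update
    updates.append((delta, d["n_samples"]))

# Weighted aggregation: each global channel averages only the updates from
# devices whose subnetwork contains it, weighted here by local data size.
agg = np.zeros_like(global_w)
weight_sum = np.zeros(global_w.shape[0])
for delta, n in updates:
    k = delta.shape[0]
    agg[:k] += n * delta
    weight_sum[:k] += n
mask = weight_sum > 0
agg[mask] /= weight_sum[mask, None]
global_w += agg

print(global_w.round(3))

Because the subnetworks are slices of one shared tensor, the smallest model is contained in every larger one, so no device adds parameters beyond the multi-capacity global model, and channels trained by fewer devices naturally retain more device-specific (personalized) signal under the masked averaging above.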

Citation (APA)

Zhou, X., Jia, Q., & Xie, R. (2022). NestFL: Efficient federated learning through progressive model pruning in heterogeneous edge computing. In Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM (pp. 817–819). Association for Computing Machinery. https://doi.org/10.1145/3495243.3558248
