Neuron Specific Pruning for Communication Efficient Federated Learning

Abstract

Federated Learning (FL) is a distributed training framework in which a model is collaboratively trained over a set of clients without communicating their private data to a central server. Each client, however, still shares the parameters of its local model. A key challenge in FL is therefore the high communication cost incurred by the size of Deep Neural Network (DNN) models. Pruning is an efficient technique for reducing the number of parameters in DNN models by removing insignificant neurons. This paper introduces a federated pruning method based on the Neuron Importance Score Propagation (NISP) algorithm: the importance scores of the output-layer neurons are back-propagated layer-wise to every neuron in the network. The central server iteratively broadcasts the sparsified weights to all selected clients. Each participating client intermittently downloads the mask vector and reconstructs the weights in their original form. The locally updated model is then pruned using the mask vector and shared with the server. After receiving the model updates from the participating clients, the server reconstructs and aggregates the weights. Experiments on the MNIST and CIFAR-10 datasets demonstrate that the proposed approach achieves accuracy close to that of the Federated Averaging (FedAvg) algorithm at a lower communication cost.
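The round structure described in the abstract (server broadcasts sparsified weights, clients reconstruct with a mask, train locally, upload pruned updates, server reconstructs and aggregates) can be illustrated with a minimal sketch. This is not the authors' implementation: the helper names (prune_with_mask, reconstruct, fedavg_aggregate), the use of a single flat weight vector per model, and the random stand-in for a NISP-derived importance mask are assumptions made for brevity.

```python
import numpy as np

def prune_with_mask(weights, mask):
    """Keep only the weights selected by the binary mask (1 = keep)."""
    return weights[mask.astype(bool)]          # sparsified vector sent over the network

def reconstruct(sparse_weights, mask):
    """Scatter sparsified weights back into a dense vector of the original shape."""
    dense = np.zeros(mask.shape, dtype=float)
    dense[mask.astype(bool)] = sparse_weights
    return dense

def fedavg_aggregate(client_updates, client_sizes):
    """Weighted average of dense client updates (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_updates, client_sizes))

# --- one hypothetical communication round ---
rng = np.random.default_rng(0)
d = 10                                          # toy model with 10 parameters
global_weights = rng.normal(size=d)
mask = (rng.random(d) > 0.3).astype(int)        # stand-in for a NISP-derived importance mask

# Server broadcasts only the sparsified weights; clients also hold the mask.
broadcast = prune_with_mask(global_weights, mask)

client_updates, client_sizes = [], []
for n_samples in (100, 200):                    # two simulated clients
    local = reconstruct(broadcast, mask)        # client rebuilds dense weights
    local = local + 0.01 * rng.normal(size=d)   # placeholder for local training
    upload = prune_with_mask(local, mask)       # client uploads only unpruned weights
    client_updates.append(reconstruct(upload, mask))
    client_sizes.append(n_samples)

new_global = fedavg_aggregate(client_updates, client_sizes)
print(new_global.round(3))
```

In this sketch only the unpruned entries cross the network in each direction, which is where the communication savings relative to plain FedAvg would come from; the actual NISP-based scoring of neurons is abstracted into the precomputed mask.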

Cite

APA

Kumar, G., & Toshniwal, D. (2022). Neuron Specific Pruning for Communication Efficient Federated Learning. In Proceedings of the International Conference on Information and Knowledge Management (pp. 4148–4152). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557658
