Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning

Abstract

Distributed stochastic algorithms, equipped with gradient compression techniques such as codebook quantization, are becoming increasingly popular and are considered state-of-the-art for training large deep neural network (DNN) models. However, communicating the quantized gradients over a network requires efficient encoding techniques. For this, practitioners generally use Elias-based encoding techniques without considering their computational overhead or data volume. In this paper, we propose several lossless encoding techniques based on Huffman coding that exploit different characteristics of the quantized gradients during distributed DNN training. We then evaluate their effectiveness on five DNN models across three datasets and compare them with the state-of-the-art Elias-based encoding techniques. Our results show that the proposed Huffman-based encoders (i.e., RLH, SH, and SHS) can reduce the encoded data volume by up to 5.1×, 4.32×, and 3.8×, respectively, compared to the Elias-based encoders.
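
Below is a minimal sketch of the general idea behind the abstract: codebook quantization maps each gradient to a small set of symbols with a skewed frequency distribution, which a Huffman code can exploit for lossless compression. This is a generic Huffman coder, not the paper's RLH, SH, or SHS encoders; the symbol values and codebook size are illustrative assumptions.

```python
# Minimal sketch: Huffman coding of codebook-quantized gradients.
# Not the paper's RLH/SH/SHS encoders; a generic prefix-free code
# built from the empirical symbol frequencies of a quantized tensor.
import heapq
from collections import Counter
from itertools import count

def huffman_codebook(symbols):
    """Build a prefix-free code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    tiebreak = count()  # avoids comparing dicts when frequencies are equal
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        _, _, code = heap[0]
        return {s: "0" for s in code}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def encode(symbols, codebook):
    """Concatenate codewords; decoding needs the codebook shipped alongside."""
    return "".join(codebook[s] for s in symbols)

# Example: gradients quantized to a 4-level codebook. Most entries hit level 0,
# so Huffman assigns it the shortest codeword and shrinks the encoded volume
# relative to a fixed 2-bit-per-symbol representation.
quantized = [0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 0, 1, 0, 0, 0, 2]
book = huffman_codebook(quantized)
bits = encode(quantized, book)
print(book, len(bits), "bits vs", 2 * len(quantized), "bits fixed-width")
```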

CITATION STYLE

APA

Gajjala, R. R., Banchhor, S., Abdelmoniem, A. M., Dutta, A., Canini, M., & Kalnis, P. (2020). Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning. In DistributedML 2020 - Proceedings of the 2020 1st Workshop on Distributed Machine Learning (pp. 21–27). Association for Computing Machinery, Inc. https://doi.org/10.1145/3426745.3431334
