Towards accurate low bit-width quantization with multiple phase adaptations

Abstract

Low bit-width model quantization is highly desirable when deploying a deep neural network on mobile and edge devices. Quantization is an effective way to reduce the model size with a low bit-width weight representation. However, the resulting accuracy drop is often unacceptable, which hinders the development of this approach. One possible reason is that weights in the quantization intervals are directly assigned to the interval centers. At the same time, some quantization methods are limited in applicability because their settings vary across network models. Accordingly, in this paper, we propose Multiple Phase Adaptations (MPA), a framework designed to address these two problems. First, weights in the target interval are assigned to the center by gradually spreading the quantization range. During the MPA process, the accuracy drop can be compensated for by the unquantized parts. Moreover, as MPA does not introduce hyperparameters that depend on the model or bit-width, the framework can be conveniently applied to various models. Extensive experiments demonstrate that MPA achieves higher accuracy than most existing methods on classification tasks for AlexNet, VGG-16, and ResNet.
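
As a rough illustration of the phase-wise idea sketched in the abstract, the snippet below quantizes only the weights that fall inside a gradually widening interval around each quantization center, leaving the rest in full precision so they could be fine-tuned between phases. This is a minimal NumPy sketch under assumptions of ours, not the authors' procedure: the uniform symmetric codebook, the three-phase radius schedule, and the helper names quantization_centers and phase_quantize are all hypothetical.

```python
import numpy as np

def quantization_centers(bit_width, w_max):
    # Hypothetical uniform symmetric codebook: 2^(b-1)-1 positive levels,
    # mirrored around zero. The paper's actual codebook may differ.
    levels = 2 ** (bit_width - 1) - 1
    step = w_max / levels
    return np.arange(-levels, levels + 1) * step

def phase_quantize(weights, centers, radius):
    # Snap each weight lying within `radius` of its nearest center to that
    # center; leave the remaining weights full precision for fine-tuning.
    idx = np.abs(weights[:, None] - centers[None, :]).argmin(axis=1)
    nearest = centers[idx]
    frozen = np.abs(weights - nearest) <= radius
    return np.where(frozen, nearest, weights), frozen

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
centers = quantization_centers(bit_width=3, w_max=np.abs(w).max())
step = centers[1] - centers[0]

# Multiple phases: the capture radius grows until every weight falls inside
# some interval. In a real pipeline, the still-unquantized weights would be
# fine-tuned between phases to compensate for the accuracy drop.
for phase, frac in enumerate((0.25, 0.5, 1.0), start=1):
    w, frozen = phase_quantize(w, centers, radius=frac * step / 2)
    print(f"phase {phase}: {frozen.mean():.0%} of weights quantized")
```

By the final phase the radius reaches half the codebook step, so every weight is captured by its nearest center and the tensor is fully quantized.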

Citation (APA)

Yan, Z., Shi, Y., Wang, Y., Tan, M., Li, Z., Tan, W., & Tian, Y. (2020). Towards accurate low bit-width quantization with multiple phase adaptations. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6591–6598). AAAI Press. https://doi.org/10.1609/aaai.v34i04.6134
