Adaptive quantization for deep neural network

Abstract

In recent years, Deep Neural Networks (DNNs) have been rapidly developed for various applications, with increasingly complex architectures. The performance gains of these DNNs generally come with high computational costs and large memory consumption, which may not be affordable on mobile platforms. Deep model quantization can reduce the computation and memory costs of DNNs and enable the deployment of complex DNNs on mobile equipment. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding the optimal quantization bit-width for each layer. This is the first work to theoretically analyse the relationship between the parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, achieving a 20–40% higher compression rate than equal bit-width quantization at the same model prediction accuracy.
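
The abstract's two-step procedure (a per-layer error measurement, then a per-layer bit-width optimization) can be pictured with a small sketch. The Python snippet below is only an illustration of the general idea, not the paper's actual method: the `sensitivity` function uses a simple normalized quantization-error proxy in place of the paper's accuracy-based measurement, and `allocate_bits` is a hypothetical greedy optimizer over an assumed total bit budget.

```python
# Illustrative sketch of adaptive per-layer bit-width selection.
# NOT the paper's algorithm: the sensitivity proxy and greedy budget
# allocation below are assumptions standing in for the paper's
# accuracy-based measurement and optimization process.
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # signed range, e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / levels
    if scale == 0:
        return w.copy()
    return np.round(w / scale) * scale

def sensitivity(w, bits):
    """Proxy for a layer's quantization effect: squared quantization error
    normalized by the layer's energy (lower is better)."""
    err = w - quantize(w, bits)
    return float(np.sum(err ** 2) / (np.sum(w ** 2) + 1e-12))

def allocate_bits(layers, total_bits, choices=(2, 4, 6, 8)):
    """Greedy allocation: start every layer at the lowest bit-width, then
    repeatedly give one more precision step to the layer whose proxy error
    drops the most, until the total bit budget is spent."""
    bits = {name: choices[0] for name in layers}
    budget = total_bits - choices[0] * len(layers)
    while budget > 0:
        best, best_gain = None, 0.0
        for name, w in layers.items():
            i = choices.index(bits[name])
            if i + 1 >= len(choices):
                continue                   # layer already at max precision
            cost = choices[i + 1] - choices[i]
            if cost > budget:
                continue                   # upgrade does not fit the budget
            gain = sensitivity(w, bits[name]) - sensitivity(w, choices[i + 1])
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            break
        i = choices.index(bits[best])
        budget -= choices[i + 1] - choices[i]
        bits[best] = choices[i + 1]
    return bits

# Toy example: three layers with different dynamic ranges receive
# different bit-widths under a shared budget.
rng = np.random.default_rng(0)
layers = {
    "conv1": rng.normal(0, 1.0, (64, 27)),
    "conv2": rng.normal(0, 0.1, (128, 576)),
    "fc":    rng.normal(0, 0.01, (10, 512)),
}
print(allocate_bits(layers, total_bits=18))
```

Under this sketch, layers whose quantization error hurts the proxy measure most receive wider bit-widths, while robust layers stay at low precision; this is the intuition behind beating equal bit-width quantization at the same accuracy.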

Citation (APA)

Zhou, Y., Moosavi-Dezfooli, S. M., Cheung, N. M., & Frossard, P. (2018). Adaptive quantization for deep neural network. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 4596–4604). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11623
