We study robust quantization of deep neural networks (DNNs) for embedded devices. Existing compression techniques often produce DNNs that are sensitive to external errors. Because embedded devices are exposed to environmental factors such as external light and outside weather, DNNs running on those devices must be robust to the resulting errors. For robust quantization of DNNs, we formulate an optimization problem that finds the bit width for each layer so as to minimize the robustness loss. To solve this problem efficiently, we design a dynamic-programming-based algorithm, called Qed. We also propose an incremental algorithm, Q∗, which quickly finds a reasonably robust quantization and then gradually improves it. We have evaluated Qed and Q∗ with three DNN models (LeNet, AlexNet, and VGG-16) under both Gaussian random errors and realistic errors. For comparison, we also evaluate universal quantization, which uses an equal bit width for all layers, and Deep Compression, a weight-sharing-based compression technique. When tested with errors of increasing magnitude, the quantization produced by Qed gives correct inference outputs most robustly. Even if a DNN is optimized for robustness, its quantizations may not be robust unless Qed is used. Moreover, we evaluate Q∗ for its trade-off between execution time and robustness: in one tenth of Qed's execution time, Q∗ gives a quantization that is 98% as robust as the one produced by Qed.
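The abstract does not spell out Qed's formulation, but the core idea it describes, choosing a bit width per layer to minimize a robustness loss under a size constraint via dynamic programming, can be illustrated with a minimal sketch. The per-layer loss table, the candidate bit widths, the bit budget, and the function name allocate_bit_widths below are all hypothetical placeholders, not the paper's actual algorithm or loss definition.

```python
# Minimal sketch: per-layer bit-width allocation by dynamic programming over
# "total bits used". Assumes a precomputed table loss[i][b] giving an
# (assumed) robustness loss of layer i when quantized to b bits.

from typing import Dict, List, Tuple

def allocate_bit_widths(
    loss: List[Dict[int, float]],   # loss[i][b]: assumed robustness loss of layer i at b bits
    params: List[int],              # number of parameters in each layer
    budget: int,                    # total model-size budget in bits
) -> Tuple[float, List[int]]:
    """Return (minimum total loss, chosen bit width per layer) under the budget."""
    n = len(loss)
    # dp maps "bits used so far" -> (best total loss, bit widths chosen so far)
    dp: Dict[int, Tuple[float, List[int]]] = {0: (0.0, [])}
    for i in range(n):
        nxt: Dict[int, Tuple[float, List[int]]] = {}
        for used, (total, choice) in dp.items():
            for b, layer_loss in loss[i].items():
                cost = used + params[i] * b
                if cost > budget:
                    continue
                cand = (total + layer_loss, choice + [b])
                if cost not in nxt or cand[0] < nxt[cost][0]:
                    nxt[cost] = cand
        dp = nxt
    if not dp:
        raise ValueError("budget too small for any bit-width assignment")
    return min(dp.values(), key=lambda t: t[0])

# Toy usage with made-up numbers: two layers, candidate bit widths 2/4/8.
toy_loss = [{2: 0.9, 4: 0.3, 8: 0.1}, {2: 0.7, 4: 0.2, 8: 0.05}]
best_loss, widths = allocate_bit_widths(toy_loss, params=[1000, 500], budget=6000)
print(best_loss, widths)  # -> 0.5 [4, 4]: spends the budget where loss drops the most
```

This is only an illustration of the general bit-width-allocation idea; the paper's Qed works on its own robustness-loss measure and DP state, and Q∗ would further trade optimality for speed by refining an initial allocation incrementally.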
Kim, Y., Lee, J., Kim, Y., & Seo, J. (2020). Robust quantization of deep neural networks. In CC 2020 - Proceedings of the 29th International Conference on Compiler Construction (pp. 74–84). Association for Computing Machinery, Inc. https://doi.org/10.1145/3377555.3377900