Training binary weight networks via semi-binary decomposition


Abstract

Binary weight networks have recently attracted much attention due to their high computational efficiency and small parameter size, yet they still suffer large accuracy drops because of their limited representation capacity. In this paper, we propose a novel semi-binary decomposition method that decomposes a weight matrix into two binary matrices and a diagonal matrix. Since the product of two binary matrices takes more distinct numerical values than a single binary matrix, the proposed semi-binary decomposition has greater representation capacity. In addition, we propose an alternating optimization method that solves the semi-binary decomposition problem while preserving the binary constraints. Extensive experiments on AlexNet, ResNet-18, and ResNet-50 demonstrate that our method outperforms state-of-the-art methods by a large margin (5% higher top-1 accuracy). We also implement a binary weight AlexNet on an FPGA platform, showing that our method achieves ~9× speed-ups while significantly reducing the consumption of on-chip memory and dedicated multipliers.
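The abstract describes approximating a weight matrix W as a product B1 · D · B2, where B1 and B2 are binary ({-1, +1}) matrices and D is diagonal, fitted by alternating optimization. The paper's exact solver is not given in the abstract, so the following is a minimal illustrative sketch: it alternates heuristic sign updates for the binary factors with a least-squares fit of the diagonal (the function name, the sign-update rule, and all hyperparameters are assumptions for illustration, not the authors' algorithm).

```python
import numpy as np

def semi_binary_decompose(W, k, n_iter=20, seed=0):
    """Approximate W (m x n) as B1 @ diag(d) @ B2 with
    B1 in {-1,+1}^(m x k) and B2 in {-1,+1}^(k x n).
    Illustrative alternating scheme, not the paper's exact solver."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    B1 = np.where(rng.standard_normal((m, k)) >= 0, 1.0, -1.0)
    B2 = np.where(rng.standard_normal((k, n)) >= 0, 1.0, -1.0)
    d = np.ones(k)
    for _ in range(n_iter):
        # Heuristic sign updates for the binary factors (kept in {-1,+1}).
        B1 = np.where(W @ (np.diag(d) @ B2).T >= 0, 1.0, -1.0)
        B2 = np.where((B1 @ np.diag(d)).T @ W >= 0, 1.0, -1.0)
        # Optimal diagonal for the current B1, B2: the reconstruction is a
        # sum of rank-1 outer products d[j] * outer(B1[:, j], B2[j, :]),
        # so d is a linear least-squares problem.
        A = np.stack([np.outer(B1[:, j], B2[j, :]).ravel() for j in range(k)],
                     axis=1)
        d, *_ = np.linalg.lstsq(A, W.ravel(), rcond=None)
    return B1, d, B2
```

Because the diagonal is refit by least squares at the end of each iteration, the reconstruction error never exceeds the norm of W itself (d = 0 is always feasible); the sign updates are a common binarization heuristic rather than a guaranteed descent step.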

Citation (APA)

Hu, Q., Li, G., Wang, P., Zhang, Y., & Cheng, J. (2018). Training binary weight networks via semi-binary decomposition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11217 LNCS, pp. 657–673). Springer Verlag. https://doi.org/10.1007/978-3-030-01261-8_39
