Abstract
As an alternative to conventional single-instruction-multiple-data (SIMD) solutions with massive parallelism for self-organizing-map (SOM) neural network models, this paper reports a memory-based proposal for learning vector quantization (LVQ), a variant of SOM. A dual-mode LVQ system, enabling both on-chip learning and classification, is implemented using a reconfigurable pipeline with parallel p-word input (R-PPPI) architecture. As a consequence of reusing the R-PPPI architecture for the most computationally demanding operations in both modes, power dissipation and Si-area consumption are dramatically reduced in comparison to previous LVQ implementations. In addition, the designed LVQ ASIC is highly flexible with respect to feature-vector dimensionality and reference-vector number, allowing the execution of many different machine-learning applications. The fabricated test chip in 180 nm CMOS with parallel 8-word inputs and 102 Kbit of on-chip memory achieves low power consumption of 66.38 mW (at 75 MHz and 1.8 V) and a high learning speed of (R + 1) × ⌈d/8⌉ + 10 clock cycles per d-dimensional sample vector, where R is the reference-vector number.
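For orientation, the basic LVQ1 rule underlying such designs classifies a sample by its nearest reference vector and, during learning, moves the winning reference toward the sample when the labels match and away otherwise. The following minimal software sketch illustrates this rule and evaluates the reported cycle count (R + 1) × ⌈d/8⌉ + 10; it is not the chip's architecture, and the function names, the squared-Euclidean distance metric, and the learning rate are illustrative assumptions.

```python
import math
import numpy as np

def lvq1_step(refs, labels, x, y, lr=0.05):
    """One LVQ1 learning step (illustrative): find the nearest reference
    vector (the winner) and pull it toward the sample if the class label
    matches, or push it away otherwise."""
    d2 = ((refs - x) ** 2).sum(axis=1)    # squared Euclidean distances (assumed metric)
    w = int(np.argmin(d2))                # index of the winning reference vector
    sign = 1.0 if labels[w] == y else -1.0
    refs[w] += sign * lr * (x - refs[w])  # winner update rule
    return w

def learning_cycles(R, d, p=8):
    """Cycle count per d-dimensional sample reported for the test chip:
    (R + 1) * ceil(d / p) + 10, with p = 8 parallel word inputs."""
    return (R + 1) * math.ceil(d / p) + 10

# Example: R = 64 reference vectors, dimensionality d = 32
print(learning_cycles(R=64, d=32))        # (64 + 1) * 4 + 10 = 270 cycles
```

Under these assumptions, 270 cycles at the reported 75 MHz clock correspond to roughly 3.6 µs of learning time per sample.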
Citation
Zhang, X., An, F., Chen, L., & Mattausch, H. J. (2016). Reconfigurable VLSI implementation for learning vector quantization with on-chip learning circuit. Japanese Journal of Applied Physics, 55(4). https://doi.org/10.7567/JJAP.55.04EF02