Weight quantization for multi-layer perceptrons using soft weight sharing

Abstract

We propose a novel approach for quantizing the weights of a multi-layer perceptron (MLP) for efficient VLSI implementation. Our approach builds on soft weight sharing, previously proposed for improved generalization, and treats the weights not as constant numbers but as random variables drawn from a Gaussian mixture distribution, which includes k-means clustering and uniform quantization as special cases. This couples the training of the weights for reduced error with their quantization. Simulations on synthetic and real regression and classification data sets compare various quantization schemes and demonstrate the advantage of the coupled training of the distribution parameters.
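The soft weight-sharing idea described in the abstract can be sketched as a regularizer: the negative log-likelihood of the network weights under a Gaussian mixture, added to the task loss so that training pulls weights toward the mixture means (the eventual quantization levels). This is an illustrative sketch, not the authors' implementation; all names and parameters are assumptions.

```python
import numpy as np

def soft_weight_sharing_penalty(weights, mixing, means, stds):
    """Negative log-likelihood of weights under a Gaussian mixture.

    Adding this term to the task loss couples weight training with
    quantization: weights are drawn toward the mixture means, which
    later serve as the shared quantized values. Illustrative only.
    """
    w = np.asarray(weights, dtype=float)[:, None]   # shape (n_weights, 1)
    mu = np.asarray(means, dtype=float)[None, :]    # shape (1, n_components)
    sd = np.asarray(stds, dtype=float)[None, :]
    pi = np.asarray(mixing, dtype=float)[None, :]
    # Component densities pi_j * N(w_i | mu_j, sd_j^2)
    dens = pi * np.exp(-0.5 * ((w - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    return -np.sum(np.log(dens.sum(axis=1)))

# Weights lying near the mixture means incur a smaller penalty than
# weights far from every component.
tight = soft_weight_sharing_penalty([0.01, 0.99], [0.5, 0.5], [0.0, 1.0], [0.1, 0.1])
loose = soft_weight_sharing_penalty([0.50, 0.50], [0.5, 0.5], [0.0, 1.0], [0.1, 0.1])
assert tight < loose
```

In the coupled scheme the paper describes, the mixture parameters (means, variances, mixing proportions) would also be updated during training rather than fixed as in this sketch; fixing the means to equally spaced values recovers uniform quantization, and fitting them freely recovers a k-means-style clustering of the weights.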

CITATION STYLE

APA

Köksal, F., Alpaydın, E., & Dündar, G. (2001). Weight quantization for multi-layer perceptrons using soft weight sharing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2130, pp. 211–216). Springer Verlag. https://doi.org/10.1007/3-540-44668-0_30
