A new learning algorithm for neural networks with integer weights and quantized non-linear activation functions


Abstract

The hardware implementation of neural networks is a fascinating area of research with far-reaching applications. However, real-valued weights and non-linear activation functions are not well suited to hardware implementation. This paper presents a new learning algorithm that trains neural networks with integer weights and excludes derivatives from the training process. The performance of this procedure was evaluated by comparing it with the multi-threshold method and the continuous-discrete learning method on XOR and function-approximation problems; the simulation results show that the new learning method greatly outperforms the other two in convergence and generalization. © 2008 International Federation for Information Processing.
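The paper itself includes no code, but the setting it describes, integer weights combined with a quantized (hard-threshold) activation, can be illustrated with a small sketch. The 2-2-1 threshold network below is hand-wired to solve XOR; the weights are hypothetical illustrations chosen by hand, not weights produced by the authors' algorithm. The hard threshold also shows why derivative-free training is needed: its gradient is zero almost everywhere, so backpropagation cannot be applied directly.

```python
def step(z):
    # Quantized (hard-threshold) activation: 1 if z >= 0, else 0.
    # Its derivative is 0 almost everywhere, which is why training
    # such networks must avoid gradient-based updates.
    return 1 if z >= 0 else 0

def forward(x1, x2):
    # All weights and biases are small integers, as in hardware-oriented
    # integer-weight networks. These particular values are illustrative,
    # hand-chosen to realize XOR, not learned.
    h1 = step(1 * x1 + 1 * x2 - 1)    # hidden unit 1 acts as OR
    h2 = step(1 * x1 + 1 * x2 - 2)    # hidden unit 2 acts as AND
    return step(1 * h1 - 1 * h2 - 1)  # OR and not AND -> XOR

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", forward(x1, x2))  # prints 0, 1, 1, 0 in turn
```

Because every quantity here is a small integer and the activation is a comparison, each unit maps directly onto an integer adder plus a comparator, which is the hardware motivation the abstract alludes to.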

Citation (APA)

Yi, Y., Hangping, Z., & Bin, Z. (2008). A new learning algorithm for neural networks with integer weights and quantized non-linear activation functions. In IFIP International Federation for Information Processing (Vol. 276, pp. 427–431). https://doi.org/10.1007/978-0-387-09695-7_42
