Neural network classification: Maximizing zero-error density

Abstract

We propose a new cost function for neural network classification: the density of the training errors at the origin, which the network is trained to maximize. This criterion yields a simple objective function that plugs directly into the standard backpropagation algorithm, giving a simple and efficient learning scheme. Experiments on four well-known datasets show that the proposed method is effective and outperforms the usual mean square error criterion.
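
As a rough illustration of the idea (not the paper's reference implementation), the sketch below estimates the error density at the origin with a Parzen window and a Gaussian kernel, and computes the gradient of that estimate with respect to the errors so it can be propagated through a network in place of the MSE gradient. The kernel choice, the bandwidth h, and all function names here are assumptions made for illustration.

    import numpy as np

    def zero_error_density(errors, h=1.0):
        # Parzen-window estimate of the error density at e = 0,
        # using a Gaussian kernel of bandwidth h (an assumed choice).
        # errors: (N, C) array of target-minus-output values.
        sq_norms = np.sum(errors ** 2, axis=1)
        return np.mean(np.exp(-sq_norms / (2.0 * h ** 2)))

    def zero_error_density_grad(errors, h=1.0):
        # Gradient of the density estimate w.r.t. the errors; this
        # replaces the MSE error signal at the output layer.
        sq_norms = np.sum(errors ** 2, axis=1, keepdims=True)
        n = errors.shape[0]
        return -errors * np.exp(-sq_norms / (2.0 * h ** 2)) / (n * h ** 2)

    # Toy usage: two samples, two output units.
    targets = np.array([[1.0, 0.0], [0.0, 1.0]])
    outputs = np.array([[0.8, 0.1], [0.3, 0.7]])
    e = targets - outputs
    print(zero_error_density(e))       # density at zero; higher is better
    print(zero_error_density_grad(e))  # feeds the backprop chain rule

Because this criterion is maximized rather than minimized, the weight update becomes a gradient ascent step; otherwise the chain rule through the network is unchanged from MSE backpropagation.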

Reference

Silva, L. M., Alexandre, L. A., & De Sá, J. M. (2005). Neural network classification: Maximizing zero-error density. In Lecture Notes in Computer Science (Vol. 3686, pp. 127–135). Springer Verlag. https://doi.org/10.1007/11551188_14
