We propose a new cost function for neural network classification: the error density at the origin. This method provides a simple objective function that can be easily plugged into the usual backpropagation algorithm, giving a simple and efficient learning scheme. Experimental work shows the effectiveness and superiority of the proposed method when compared to the usual mean squared error criterion on four well-known datasets. © Springer-Verlag Berlin Heidelberg 2005.
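The core idea can be sketched in a few lines: estimate the density of the training errors with a Gaussian (Parzen) kernel, evaluate it at zero, and maximize that value by gradient ascent. The sketch below is a minimal illustration of this principle, not the paper's implementation; the bandwidth `h`, learning rate, and toy error vector are assumptions chosen for demonstration.

```python
import numpy as np

def zero_error_density(errors, h=1.0):
    # Parzen-window (Gaussian kernel) estimate of the error density,
    # evaluated at the origin. Training maximizes this quantity so
    # that the errors e_i = t_i - y_i concentrate around zero.
    n = errors.shape[0]
    return np.exp(-errors**2 / (2 * h**2)).sum() / (n * h * np.sqrt(2 * np.pi))

def zed_grad_wrt_errors(errors, h=1.0):
    # Gradient of the density estimate with respect to each error;
    # backpropagation would chain this through the network outputs.
    n = errors.shape[0]
    k = np.exp(-errors**2 / (2 * h**2)) / (n * h * np.sqrt(2 * np.pi))
    return -errors / h**2 * k

# Toy check: gradient ascent on the errors themselves pulls them
# toward zero, which is what backprop achieves indirectly through
# the network weights.
e = np.array([1.5, -0.8, 0.3])
for _ in range(200):
    e = e + 0.5 * zed_grad_wrt_errors(e, h=1.0)
```

Maximizing the density at zero plays the role that minimizing mean squared error usually plays: both drive the errors toward the origin, but the kernel estimate weights errors through a smooth, bounded function rather than quadratically.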
CITATION STYLE
Silva, L. M., Alexandre, L. A., & De Sá, J. M. (2005). Neural network classification: Maximizing zero-error density. In Lecture Notes in Computer Science (Vol. 3686, pp. 127–135). Springer Verlag. https://doi.org/10.1007/11551188_14