Typically, the response of a multilayered perceptron (MLP) network on points which are far away from the boundary of its training data is not very reliable. For such test points, the network should refrain from making any decision. We propose a training scheme for MLPs that tries to achieve this. Our methodology trains a composite network consisting of two subnetworks: a mapping network and a vigilance network. The mapping network learns the usual input-output relation present in the data, while the vigilance network learns a decision boundary and decides on which points the mapping network should respond. Although we propose the methodology here for multilayered perceptrons, the philosophy is quite general and can also be used with other learning machines. © Springer-Verlag Berlin Heidelberg 2007.
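The composite architecture described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mapping subnetwork is a randomly initialised one-hidden-layer MLP (training is omitted), and the vigilance subnetwork is replaced by a simple distance-to-training-data gate, which is an assumption standing in for the learned decision boundary in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """Mapping subnetwork: a one-hidden-layer perceptron."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Toy training inputs in the unit square [0, 1]^2.
X_train = rng.random((50, 2))

# Randomly initialised mapping network (training omitted in this sketch).
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)

def vigilance(x, X_train, radius=0.5):
    """Stand-in for the vigilance subnetwork: accept a point only
    if it lies within `radius` of some training sample.  The paper
    instead learns this acceptance region as a second subnetwork."""
    return np.linalg.norm(X_train - x, axis=1).min() <= radius

def predict(x):
    """Respond only where the vigilance gate accepts the input;
    otherwise refuse to answer (return None)."""
    if not vigilance(x, X_train):
        return None
    return mlp_forward(x, W1, b1, W2, b2)

print(predict(np.array([0.5, 0.5])))   # inside the data cloud: a value
print(predict(np.array([10.0, 10.0]))) # far from the data: None
```

The key design point is that the gate, not the mapping network, decides whether an answer is produced at all, so points far from the training data yield an explicit refusal rather than an unreliable extrapolation.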
CITATION STYLE
Chakraborty, D., & Pal, N. R. (2007). Strict generalization in multilayered perceptron networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4529 LNAI, pp. 722–731). Springer Verlag. https://doi.org/10.1007/978-3-540-72950-1_71