Interpreting the models induced by artificial neural networks is often a difficult task. In this paper we focus on a relatively novel neural network architecture and learning algorithm, BP-SOM, that offers possibilities to overcome this difficulty. We show that networks trained with BP-SOM exhibit interesting regularities: hidden-unit activations become restricted to discrete values, and the SOM part can be exploited for automatic rule extraction.
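The SOM component referred to in the abstract can be illustrated with a minimal, generic self-organizing map sketch: hidden-activation vectors are mapped to their best-matching prototype on a small grid, and prototypes are nudged toward the inputs. The grid size, learning rate, and Gaussian neighborhood below are standard SOM choices for illustration, not the settings used by the authors.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch. BP-SOM pairs a map like this
# with a hidden layer during backpropagation training; the hyperparameters
# here (5x5 grid, fixed learning rate, Gaussian neighborhood) are generic
# assumptions, not the paper's configuration.

rng = np.random.default_rng(0)

GRID = 5   # 5x5 grid of prototype vectors
DIM = 4    # dimensionality of a hidden-activation vector
weights = rng.random((GRID, GRID, DIM))

# Grid coordinates of every map unit, shape (GRID, GRID, 2).
coords = np.stack(
    np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), axis=-1
)

def best_matching_unit(x):
    """Return grid coordinates of the prototype closest to vector x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def som_update(x, lr=0.5, sigma=1.0):
    """Pull prototypes toward x, weighted by a Gaussian neighborhood
    centred on the best-matching unit."""
    bmu = np.array(best_matching_unit(x))
    dist2 = np.sum((coords - bmu) ** 2, axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    weights[:] += lr * h * (x - weights)

# Train on synthetic "hidden activations" clustered near two values,
# mimicking the discretisation of activations the abstract describes.
for _ in range(200):
    centre = rng.choice([0.1, 0.9])
    som_update(centre + 0.05 * rng.standard_normal(DIM))
```

After training, activation vectors near 0.1 and near 0.9 map to different units of the grid, so each cluster of hidden activations acquires its own region of the map, which is what makes the SOM usable for inspecting and extracting rules from the trained network.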
Weijters, T., van den Bosch, A., & van den Herik, J. (1998). Interpretable neural networks with BP-SOM. In Lecture Notes in Computer Science (Vol. 1398, pp. 406–411). Springer. https://doi.org/10.1007/bfb0026711