Interpretable neural networks with BP-SOM

Abstract

Interpreting the models induced by artificial neural networks is often difficult. In this paper we focus on a relatively novel neural network architecture and learning algorithm, BP-SOM, that offers a way to overcome this difficulty. We show that networks trained with BP-SOM exhibit interesting regularities: hidden-unit activations become restricted to discrete values, and the SOM part can be exploited for automatic rule extraction.
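The abstract only hints at how the architecture couples backpropagation with a self-organizing map, so the sketch below illustrates the general idea under stated assumptions: a one-hidden-layer MLP whose hidden activation vectors also train a small SOM, plus an extra error term that pulls each hidden activation toward a same-class SOM prototype. All constants, the labelling scheme, and the exact update rules here are illustrative assumptions, not the authors' formulation.

```python
# A minimal sketch of a BP-SOM-style network (numpy only). Assumptions: a
# one-hidden-layer MLP trained with plain backpropagation, a small SOM trained
# on hidden-unit activation vectors, and an extra error term that pulls hidden
# activations toward a same-class SOM prototype. Constants, the labelling
# scheme, and the exact update rules are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy two-class data (XOR), standing in for a real dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

n_in, n_hidden, n_out = 2, 4, 2
n_som = 9                                   # 3x3 map, flattened

W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)
som = rng.uniform(0, 1, (n_som, n_hidden))  # SOM prototype vectors
som_labels = np.full(n_som, -1)             # class label attached to each SOM element

lr, som_lr, som_weight = 0.5, 0.1, 0.2

for epoch in range(2000):
    for i in rng.permutation(len(X)):
        x, target = X[i], np.eye(n_out)[y[i]]
        # Forward pass
        h = sigmoid(x @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Standard backprop deltas
        d_out = (out - target) * out * (1 - out)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        # SOM part: best-matching prototype for this hidden activation vector
        bmu = int(np.argmin(np.linalg.norm(som - h, axis=1)))
        # Extra error: if the winning element is labelled with this class, pull
        # the hidden activation toward its prototype (this is what pushes
        # activations toward a few discrete values).
        if som_labels[bmu] == y[i]:
            d_hid += som_weight * (h - som[bmu]) * h * (1 - h)
        som[bmu] += som_lr * (h - som[bmu])  # move the winner toward the activation
        som_labels[bmu] = y[i]               # crude per-example relabelling
        # Gradient-descent weight updates
        W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
        W1 -= lr * np.outer(x, d_hid); b1 -= lr * d_hid

# After training, hidden activations cluster around labelled SOM prototypes;
# reading off those prototypes is one simple form of rule extraction.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
print(np.round(som, 2), som_labels)
```

Running the sketch prints the network's outputs on the toy data and the labelled SOM prototypes; on a real dataset, simple rules could be read off from the prototypes and their class labels.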

Citation (APA)

Weijters, T., van den Bosch, A., & van den Herik, J. (1998). Interpretable neural networks with BP-SOM. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1398, pp. 406–411). Springer-Verlag. https://doi.org/10.1007/bfb0026711
