Recent work has shown that extracting symbolic rules improves the generalization performance of recurrent neural networks trained with complete (positive and negative) samples of regular languages. This paper explores the possibility of inferring the rules of the language when the network is trained instead with stochastic, positive-only data. For this purpose, a recurrent network with two layers is used. If, instead of using the network itself, an automaton is extracted from the network after training and the transition probabilities of the extracted automaton are estimated from the sample, the relative entropy with respect to the true distribution is reduced.
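The final step described in the abstract — estimating the transition probabilities of an extracted automaton from a positive-only sample and measuring relative entropy against the true distribution — can be sketched as follows. This is a minimal illustration, not the paper's method: the toy two-state stochastic automaton, the add-one smoothing, and the per-state relative-entropy sum are all assumptions introduced here for demonstration.

```python
import math
import random

random.seed(0)

# Hypothetical true stochastic automaton over {a, b} (assumed for illustration).
# Each state maps events to probabilities: (symbol, next_state) or '#' (stop).
TRUE = {
    0: {('a', 0): 0.3, ('b', 1): 0.5, '#': 0.2},
    1: {('a', 0): 0.6, '#': 0.4},
}

def sample_string(model):
    """Generate one positive example by walking the automaton from state 0."""
    out, state = [], 0
    while True:
        r, acc = random.random(), 0.0
        for ev, p in model[state].items():
            acc += p
            if r < acc:
                break
        if ev == '#':
            return ''.join(out)
        sym, state = ev
        out.append(sym)

def estimate(structure, strings):
    """Estimate transition probabilities by parsing the sample with the
    known (here: extracted) transition structure; add-one smoothing avoids
    zero counts for rarely visited states."""
    counts = {q: {ev: 1 for ev in evs} for q, evs in structure.items()}
    for w in strings:
        q = 0
        for c in w:
            ev = next(e for e in counts[q] if e != '#' and e[0] == c)
            counts[q][ev] += 1
            q = ev[1]
        counts[q]['#'] += 1
    return {q: {ev: n / sum(evs.values()) for ev, n in evs.items()}
            for q, evs in counts.items()}

def relative_entropy(true_model, est_model):
    """Sum of per-state Kullback-Leibler divergences D(true || estimated)."""
    return sum(p * math.log2(p / est_model[q][ev])
               for q in true_model
               for ev, p in true_model[q].items() if p > 0)

sample = [sample_string(TRUE) for _ in range(2000)]
est = estimate(TRUE, sample)
kl = relative_entropy(TRUE, est)
print(f"relative entropy after estimation: {kl:.4f} bits")
```

With a few thousand sample strings the estimated probabilities converge toward the true ones, so the relative entropy approaches zero; in the paper's setting the estimation is performed on the automaton extracted from the trained network rather than on a known structure.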
Carrasco, R. C., Forcada, M. L., & Santamaría, L. (1996). Inferring stochastic regular grammars with recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1147, pp. 275–281). Springer Verlag. https://doi.org/10.1007/BFb0033361