Inferring stochastic regular grammars with recurrent neural networks

Abstract

Recent work has shown that the extraction of symbolic rules improves the generalization performance of recurrent neural networks trained with complete (positive and negative) samples of regular languages. This paper explores the possibility of inferring the rules of the language when the network is trained instead with stochastic, positive-only data. For this purpose, a recurrent network with two layers is used. If, instead of using the network itself, an automaton is extracted from the network after training and the transition probabilities of the extracted automaton are estimated from the sample, the relative entropy with respect to the true distribution is reduced.
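
The estimation step mentioned in the abstract can be illustrated with a minimal sketch. It assumes the extracted automaton is deterministic and represented as a transition table; the function names, the end-of-string marker '$', and the toy (ab)* automaton are illustrative assumptions, not details taken from the paper. Transition probabilities are set to the relative frequencies with which each transition is used while parsing the positive-only sample, and the relative entropy (Kullback-Leibler divergence) compares two distributions over strings.

```python
from collections import defaultdict
import math

def estimate_probabilities(delta, q0, sample):
    """Estimate each transition probability of an extracted deterministic
    automaton as the relative frequency with which that transition is used
    when parsing the positive-only sample; '$' marks end of string."""
    counts = defaultdict(int)
    totals = defaultdict(int)
    for word in sample:
        state = q0
        for symbol in word:
            counts[(state, symbol)] += 1
            totals[state] += 1
            state = delta[(state, symbol)]
        counts[(state, "$")] += 1  # the string ends in this state
        totals[state] += 1
    return {key: n / totals[key[0]] for key, n in counts.items()}

def relative_entropy(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)), where p and q map
    strings to probabilities over a common support."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# Toy example: a two-state automaton for the language (ab)* and a sample.
delta = {(0, "a"): 1, (1, "b"): 0}
sample = ["", "ab", "ab", "abab", "ab", "ababab"]
print(estimate_probabilities(delta, 0, sample))
```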

Citation (APA)

Carrasco, R. C., Forcada, M. L., & Santamaría, L. (1996). Inferring stochastic regular grammars with recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1147, pp. 275–281). Springer-Verlag. https://doi.org/10.1007/BFb0033361
