Recurrent neural networks are frequently used in the cognitive science community for modeling linguistic structures. A more or less intensive training process is usually performed, but several works have shown that untrained recurrent networks initialized with small weights can also be used successfully for tasks of this kind. In this work we demonstrate that the state space organization of an untrained recurrent neural network can be significantly improved by choosing appropriate input representations. We support this notion experimentally on several linguistic time series.
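The core setup the abstract alludes to — keep the recurrent weights small, random, and untrained, and train only a readout on the resulting state trajectories — can be sketched in a few lines. The NumPy snippet below is a minimal, hypothetical illustration under those assumptions, not the authors' code: the symbol sequence, the two input codings, and all parameter values are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_symbols, n_hidden = 4, 50

# Toy symbolic series with simple Markov structure (hypothetical data).
seq = [0]
for _ in range(999):
    nxt = (seq[-1] + 1) % n_symbols if rng.random() < 0.8 else int(rng.integers(n_symbols))
    seq.append(nxt)
seq = np.array(seq)

# Two input codes for the same symbols; the choice of representation is
# what shapes the state space of the untrained network.
one_hot = np.eye(n_symbols)
dense = rng.uniform(-1.0, 1.0, size=(n_symbols, n_symbols))  # hypothetical alternative coding

def collect_states(codes, scale=0.1):
    """Drive an UNTRAINED recurrent net (fixed small random weights) and
    return its hidden-state trajectory over the input sequence."""
    W_in = rng.uniform(-scale, scale, size=(n_hidden, codes.shape[1]))
    W_rec = rng.uniform(-scale, scale, size=(n_hidden, n_hidden))
    h, states = np.zeros(n_hidden), []
    for s in seq:
        h = np.tanh(W_in @ codes[s] + W_rec @ h)
        states.append(h)
    return np.array(states)

def readout_accuracy(states):
    """Train only a linear ridge-regression readout for next-symbol
    prediction; the recurrent part stays untouched."""
    X, y = states[:-1], seq[1:]
    T = np.eye(n_symbols)[y]  # one-hot targets
    W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_hidden), X.T @ T)
    return float(np.mean((X @ W).argmax(axis=1) == y))

print("one-hot inputs:", readout_accuracy(collect_states(one_hot)))
print("dense inputs:  ", readout_accuracy(collect_states(dense)))
```

Comparing readout accuracy across the two input codings is one crude proxy for how well the untrained state space separates input histories, which is the quantity the paper aims to improve through the choice of representation.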
Čerňanský, M., Makula, M., & Beňušková, Ľ. (2009). Improving the state space organization of untrained recurrent networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5506 LNCS, pp. 671–678). https://doi.org/10.1007/978-3-642-02490-0_82