Comparison of echo state networks with simple recurrent networks and variable-length Markov models on symbolic sequences


Abstract

Considerable attention is currently focused on connectionist models known as "reservoir computing". The most prominent example of these approaches is a recurrent neural network architecture called the echo state network (ESN). ESNs have been successfully applied to many real-valued time series modeling tasks, where they performed exceptionally well. Using ESNs for processing symbolic sequences also appears attractive. In this work we experimentally support the claim that the state space of an ESN is organized according to the Markovian architectural bias principles when processing symbolic sequences. We compare the performance of ESNs with connectionist models that explicitly use the Markovian architectural bias property, with variable-length Markov models, and with recurrent neural networks trained by advanced training algorithms. Moreover, we show that the number of reservoir units plays a similar role to the number of contexts in variable-length Markov models. © Springer-Verlag Berlin Heidelberg 2007.
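To make the abstract's central object concrete, the following is a minimal sketch of an ESN reservoir driven by a symbolic sequence. All sizes, scalings, and the one-hot input encoding are illustrative assumptions, not the paper's actual experimental setup; only the reservoir update rule (a fixed random recurrent network with a contractively scaled weight matrix) reflects the standard ESN construction the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, n_sym = 100, 4  # reservoir units and alphabet size (illustrative values)

# Fixed random recurrent weights, rescaled so the spectral radius is below 1
# (a common condition aimed at ensuring the echo state property).
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Fixed random input weights; symbols are fed in as one-hot vectors.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_sym))

def esn_states(symbols):
    """Drive the (untrained) reservoir with a symbolic sequence;
    return the reservoir state after each symbol."""
    x = np.zeros(n_res)
    states = []
    for s in symbols:
        u = np.eye(n_sym)[s]           # one-hot encoding of the symbol
        x = np.tanh(W @ x + W_in @ u)  # reservoir state update
        states.append(x.copy())
    return np.array(states)

states = esn_states([0, 1, 2, 3, 2, 1])
```

Because the untrained reservoir dynamics are contractive, sequences sharing a recent suffix tend to drive the reservoir into nearby states, which is the Markovian organization of the ESN state space that the paper examines.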

CITATION STYLE

APA

Čerňanský, M., & Tiňo, P. (2007). Comparison of echo state networks with simple recurrent networks and variable-length Markov models on symbolic sequences. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4668 LNCS, pp. 618–627). Springer Verlag. https://doi.org/10.1007/978-3-540-74690-4_63
