Function approximators are often used in reinforcement learning tasks with large or continuous state spaces. Artificial neural networks, among them recurrent neural networks (RNNs), are popular function approximators, especially in tasks where some form of memory is needed, as in real-world partially observable scenarios. However, convergence guarantees for such methods are rarely available. Here, we propose a method based on a novel class of RNNs, the echo state networks. A proof of convergence to a bounded region is provided for k-order Markov decision processes. Runs on POMDPs were performed to test and illustrate the workings of the architecture. © Springer-Verlag Berlin Heidelberg 2006.
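To make the abstract concrete, the following is a minimal sketch of an echo state network's reservoir update, the fixed recurrent dynamics on top of which only a linear readout would be trained (e.g. to approximate a value function). All dimensions and scaling constants here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not from the paper).
n_inputs, n_reservoir = 3, 50

# Random input and recurrent weights; the recurrent matrix is rescaled
# so its spectral radius is below 1, a common sufficient condition for
# the echo state property (fading memory of past inputs).
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def step(x, u):
    """One reservoir update: x(t+1) = tanh(W_in u(t) + W x(t))."""
    return np.tanh(W_in @ u + W @ x)

# Drive the reservoir with a short random input sequence; the internal
# weights stay fixed, which is what makes ESN training cheap.
x = np.zeros(n_reservoir)
for u in rng.standard_normal((10, n_inputs)):
    x = step(x, u)
```

Because only the readout is learned while the reservoir stays fixed, the learning problem becomes linear in the trained parameters, which is what makes convergence analyses like the one in the paper tractable.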
CITATION STYLE
Szita, I., Gyenes, V., & Lorincz, A. (2006). Reinforcement learning with echo state networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4131 LNCS-I, pp. 830–839). Springer Verlag. https://doi.org/10.1007/11840817_86