Recurrent neural networks are universal approximators

Abstract

Neural networks represent a class of functions for the efficient identification and forecasting of dynamical systems. It has been shown that feedforward networks are able to approximate any (Borel-)measurable function on a compact domain [1,2,3]. Recurrent neural networks (RNNs) have been developed for a better understanding and analysis of open dynamical systems. Compared to feedforward networks they have several advantages, which have been discussed extensively in several papers and books, e.g. [4]. Still, the question often arises whether RNNs are able to approximate every open dynamical system, which would be desirable for a broad spectrum of applications. In this paper we give a proof of the universal approximation ability of RNNs in state space model form. The proof is based on the work of Hornik, Stinchcombe, and White on feedforward neural networks [1]. © Springer-Verlag Berlin Heidelberg 2006.
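
For orientation, the state space model form referred to in the abstract can be sketched as follows; this is a reading of the authors' setup, and the symbols $A$, $B$, $C$, $\theta$ together with a sigmoidal activation $f$ (e.g. $\tanh$) are assumptions based on the notation common in this line of work:

\[
s_{t+1} = f(A s_t + B x_t - \theta), \qquad y_t = C s_t,
\]

where $x_t$ denotes the external input, $s_t$ the inner state, and $y_t$ the output, while $A$ and $B$ are the state and input weight matrices, $C$ the output matrix, and $\theta$ a bias vector. The universal approximation claim is then that any open dynamical system $s_{t+1} = g(s_t, x_t)$, $y_t = h(s_t)$ on a compact domain can be approximated arbitrarily well by an RNN of this form.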

Citation (APA)
Schäfer, A. M., & Zimmermann, H. G. (2006). Recurrent neural networks are universal approximators. In Lecture Notes in Computer Science (Vol. 4131, pp. 632–640). Springer-Verlag. https://doi.org/10.1007/11840817_66
