Architectural bias in recurrent neural networks - Fractal analysis

Abstract

We have recently shown that when initialized with "small" weights, recurrent neural networks (RNNs) with standard sigmoid-type activation functions are inherently biased towards Markov models, i.e. even prior to any training, RNN dynamics can be readily used to extract finite memory machines [6,8]. Following [2], we refer to this phenomenon as the architectural bias of RNNs. In this paper we further extend our work on the architectural bias in RNNs by performing a rigorous fractal analysis of recurrent activation patterns. We obtain both lower and upper bounds on various types of fractal dimensions, such as box-counting and Hausdorff dimensions. © Springer-Verlag Berlin Heidelberg 2002.
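As a rough illustration of the setting described in the abstract (not taken from the paper itself), the following Python sketch drives an untrained, small-weight sigmoid RNN with a random binary symbol stream and then estimates the box-counting dimension of the resulting recurrent activation patterns. The network sizes, weight scales, and the crude grid-based dimension estimator are all illustrative assumptions, not the authors' construction.

```python
# Minimal sketch (illustrative assumptions throughout): an untrained RNN with
# small random weights is driven by a random symbol sequence, and the fractal
# dimension of its hidden-state cloud is estimated by simple box counting.
import numpy as np

rng = np.random.default_rng(0)

# Small-weight initialisation -> contractive state dynamics, the regime in
# which the architectural bias towards finite-memory behaviour appears.
n_hidden, n_symbols, T = 2, 2, 20000
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
W_in = rng.normal(scale=1.0, size=(n_hidden, n_symbols))  # input weights


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


# Drive the untrained network with a random binary symbol stream and record
# the hidden activations ("recurrent activation patterns").
symbols = rng.integers(0, n_symbols, size=T)
h = np.zeros(n_hidden)
states = np.empty((T, n_hidden))
for t, s in enumerate(symbols):
    x = np.eye(n_symbols)[s]          # one-hot encoding of the input symbol
    h = sigmoid(W_rec @ h + W_in @ x)
    states[t] = h


def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Crude box-counting estimate: slope of log N(eps) versus log(1/eps)."""
    # Normalise the point cloud into the unit hypercube.
    p = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    logs = []
    for k in scales:
        # Count occupied boxes on a k-by-...-by-k grid.
        boxes = {tuple(idx) for idx in np.floor(p * k).astype(int)}
        logs.append((np.log(k), np.log(len(boxes))))
    xs, ys = zip(*logs)
    return np.polyfit(xs, ys, 1)[0]   # fitted slope ~ dimension estimate


print("box-counting dimension estimate:", box_counting_dimension(states))
```

The slope returned by the least-squares fit is only a numerical estimate over a handful of scales; the paper itself derives analytical lower and upper bounds on such dimensions rather than computing them empirically.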

Citation (APA)

Tiňo, P., & Hammer, B. (2002). Architectural bias in recurrent neural networks - Fractal analysis. In Lecture Notes in Computer Science (Vol. 2415, pp. 1359–1364). Springer-Verlag. https://doi.org/10.1007/3-540-46084-5_219
