Fast function approximation with hierarchical neural networks and their application to a reinforcement learning agent


Abstract

Current function approximators, particularly neural networks, are often limited in several respects: most architectures can hardly be extended with more "informational" capacity, neural networks with high capacity are often too costly in computation time (especially for an implementation on the microcontroller of a real-world robot), and functions with high gradients can hardly be learned. The following approach shows that these limitations can be overcome by using an adaptive hierarchical vector quantization algorithm. With this algorithm, the computation time of a classification can decrease to O(log(n)), where n is the number of implemented prototypes. If a given number of prototypes cannot carry the "information" of the function that has to be approximated, the "informational" capacity can be increased by adding prototypes. The algorithm proposed in this article is tested in a reinforcement learning task. © Springer-Verlag Berlin Heidelberg 2001.
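To make the abstract's two claims concrete (lookup cost that scales as O(log(n)) in the number of prototypes, and capacity that grows by adding prototypes), the following is a minimal sketch of a tree-structured vector quantizer. It is an illustrative assumption, not the authors' algorithm: the class name, the greedy nearest-child descent, and the leaf-splitting rule are all hypothetical, chosen only to show how a hierarchical prototype tree yields logarithmic classification and incremental capacity.

```python
# Sketch of a tree-structured vector quantizer (hypothetical, not the paper's
# exact adaptive hierarchical algorithm).
import numpy as np


class HierarchicalVQ:
    """Binary tree of prototypes; a lookup descends one branch per level,
    so classification visits O(log n) of the n stored prototypes."""

    def __init__(self, prototype, value=0.0):
        self.prototype = np.asarray(prototype, dtype=float)
        self.value = value          # function value stored at this prototype
        self.left = None
        self.right = None

    def classify(self, x):
        """Greedy descent: at each node, step to the child whose prototype
        is closer to x, and return the leaf that is reached."""
        node = self
        while node.left is not None and node.right is not None:
            d_left = np.linalg.norm(x - node.left.prototype)
            d_right = np.linalg.norm(x - node.right.prototype)
            node = node.left if d_left <= d_right else node.right
        return node

    def add_prototype(self, x, value):
        """Increase capacity: split the leaf responsible for x into two
        children, keeping the old prototype and adding the new sample."""
        leaf = self.classify(np.asarray(x, dtype=float))
        leaf.left = HierarchicalVQ(leaf.prototype, leaf.value)
        leaf.right = HierarchicalVQ(x, value)


# Usage: approximate f(x) by the value stored at the winning prototype.
vq = HierarchicalVQ(prototype=[0.0, 0.0], value=0.0)
vq.add_prototype([1.0, 1.0], value=2.0)
vq.add_prototype([0.9, 1.1], value=2.1)
print(vq.classify(np.array([1.0, 0.9])).value)
```

Note that a single greedy descent is not guaranteed to return the globally nearest prototype; it trades a small loss in accuracy for the logarithmic lookup cost the abstract emphasizes, which matters on microcontroller-class hardware.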

Citation (APA)

Fischer, J., Breithaupt, R., & Bode, M. (2001). Fast function approximation with hierarchical neural networks and their application to a reinforcement learning agent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2084 LNCS, pp. 363–369). Springer-Verlag. https://doi.org/10.1007/3-540-45720-8_42
