Using temporal neighborhoods to adapt function approximators in reinforcement learning


Abstract

To avoid the curse of dimensionality, function approximators are used in reinforcement learning to learn value functions for individual states. In order to make better use of computational resources (basis functions), many researchers are investigating ways to adapt the basis functions during the learning process so that they better fit the value-function landscape. Here we introduce temporal neighborhoods as small groups of states that experience frequent intra-group transitions during on-line sampling. We then form basis functions along these temporal neighborhoods. Empirical evidence is provided that demonstrates the effectiveness of this scheme. We discuss a class of RL problems for which this method might be plausible.
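To make the idea concrete, the sketch below shows one way temporal neighborhoods could be formed from on-line transition counts and turned into basis functions. It is a minimal illustration, not the paper's algorithm: the union-find merge rule, the `threshold` parameter, and the one-hot `features` encoding are all assumptions introduced here.

```python
# Hypothetical sketch: group states into "temporal neighborhoods" from
# on-line transition counts, then use one indicator basis function per
# neighborhood in a linear value-function approximator. The merge rule
# and threshold are illustrative assumptions, not the paper's method.
from collections import defaultdict

import numpy as np


def build_neighborhoods(transitions, threshold=5):
    """Merge states that transition into each other frequently.

    transitions: iterable of (state, next_state) pairs observed on-line.
    threshold:   minimum transition count before two states are placed
                 in the same neighborhood (an assumed tuning knob).
    Returns a dict mapping each state to a neighborhood id.
    """
    counts = defaultdict(int)
    states = set()
    for s, s_next in transitions:
        counts[(s, s_next)] += 1
        states.update((s, s_next))

    # Union-find over states: frequent transitions merge neighborhoods.
    parent = {s: s for s in states}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    for (s, s_next), c in counts.items():
        if c >= threshold:
            parent[find(s)] = find(s_next)

    roots = sorted({find(s) for s in states})
    root_to_id = {r: i for i, r in enumerate(roots)}
    return {s: root_to_id[find(s)] for s in states}


def features(state, neighborhood_of, n_neighborhoods):
    """One-hot basis: phi_i(s) = 1 iff s lies in neighborhood i."""
    phi = np.zeros(n_neighborhoods)
    phi[neighborhood_of[state]] = 1.0
    return phi


if __name__ == "__main__":
    # Toy trace: states 0-1-2 cycle among themselves, as do 3-4.
    trace = [(0, 1), (1, 2), (2, 0)] * 5 + [(3, 4), (4, 3)] * 5 + [(2, 3)]
    nbhd = build_neighborhoods(trace, threshold=5)
    print(nbhd)  # 0, 1, 2 share one id; 3, 4 share another
```

Under this reading, a linear approximator would learn one weight per neighborhood rather than one value per state, which is the kind of saving in basis functions the abstract describes.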

Cite

CITATION STYLE

APA

Kretchmar, R. M., & Anderson, C. W. (1999). Using temporal neighborhoods to adapt function approximators in reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1606, pp. 488–496). Springer. https://doi.org/10.1007/BFb0098206
