Characterizing reinforcement learning methods through parameterized learning problems

Abstract

The field of reinforcement learning (RL) has been energized in the past few decades by elegant theoretical results indicating under what conditions, and how quickly, certain algorithms are guaranteed to converge to optimal policies. However, in practical problems, these conditions are seldom met. When we cannot achieve optimality, the performance of RL algorithms must be measured empirically. Consequently, in order to meaningfully differentiate learning methods, it becomes necessary to characterize their performance on different problems, taking into account factors such as state estimation, exploration, function approximation, and constraints on computation and memory. To this end, we propose parameterized learning problems, in which such factors can be controlled systematically and their effects on learning methods characterized through targeted studies. Apart from providing very precise control of the parameters that affect learning, our parameterized learning problems enable benchmarking against optimal behavior; their relatively small sizes facilitate extensive experimentation. Based on a survey of existing RL applications, in this article, we focus our attention on two predominant, "first order" factors: partial observability and function approximation. We design an appropriate parameterized learning problem, through which we compare two qualitatively distinct classes of algorithms: on-line value function-based methods and policy search methods. Empirical comparisons among various methods within each of these classes project Sarsa(λ) and Q-learning(λ) as winners among the former, and CMA-ES as the winner in the latter. Comparing Sarsa(λ) and CMA-ES further on relevant problem instances, our study highlights regions of the problem space favoring their contrasting approaches. Short run-times for our experiments allow for an extensive search procedure that provides additional insights on relationships between method-specific parameters, such as eligibility traces, initial weights, and population sizes, and problem instances.
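To make the abstract's references to Sarsa(λ), eligibility traces, and function approximation concrete, the sketch below shows a generic Sarsa(λ) agent with linear function approximation and accumulating eligibility traces. The toy corridor environment, the one-hot feature scheme, and all parameter values are illustrative assumptions; they are not the parameterized learning problem or the settings studied in the paper.

```python
# Illustrative sketch only: Sarsa(lambda) with linear function approximation
# and accumulating eligibility traces, on a hypothetical corridor task.
# Environment, features, and hyperparameters are assumptions, not the paper's.
import numpy as np

N_STATES, N_ACTIONS = 10, 2            # toy corridor: move left (0) or right (1)
GAMMA, ALPHA, LAMBDA, EPSILON = 0.95, 0.1, 0.8, 0.1

def features(state, action):
    """One-hot feature vector over (state, action) pairs."""
    phi = np.zeros(N_STATES * N_ACTIONS)
    phi[state * N_ACTIONS + action] = 1.0
    return phi

def step(state, action):
    """Toy dynamics: reaching the right end yields +1 reward and terminates."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def epsilon_greedy(w, state, rng):
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax([w @ features(state, a) for a in range(N_ACTIONS)]))

rng = np.random.default_rng(0)
w = np.zeros(N_STATES * N_ACTIONS)     # weights of the linear action-value function

for episode in range(200):
    s = 0
    a = epsilon_greedy(w, s, rng)
    z = np.zeros_like(w)               # eligibility trace vector, reset each episode
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = epsilon_greedy(w, s2, rng)
        # TD error; the bootstrap term vanishes at termination.
        target = r + (0.0 if done else GAMMA * (w @ features(s2, a2)))
        delta = target - w @ features(s, a)
        z = GAMMA * LAMBDA * z + features(s, a)   # accumulating traces
        w += ALPHA * delta * z
        s, a = s2, a2

print("learned Q(s, right) along the corridor:",
      np.round([w @ features(s, 1) for s in range(N_STATES)], 2))
```

In this sketch the trace parameter λ, the initial weights w, and the exploration rate ε are exactly the kinds of method-specific parameters whose interaction with problem instances the paper's search procedure examines.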

Citation (APA)

Kalyanakrishnan, S., & Stone, P. (2011). Characterizing reinforcement learning methods through parameterized learning problems. Machine Learning, 84(1–2), 205–247. https://doi.org/10.1007/s10994-011-5251-x
