Learning classifier systems from a reinforcement learning perspective

  • Lanzi P

Abstract

We analyze learning classifier systems in the light of tabular reinforcement learning. We note that although genetic algorithms are the most distinctive feature of learning classifier systems, it is not clear whether genetic algorithms are essential to learning classifier systems. In fact, there are models which are strongly based on evolutionary computation (e.g., Wilson's XCS) and others which do not exploit evolutionary computation at all (e.g., Stolzmann's ACS). To clarify this issue, we try to develop learning classifier systems "from scratch", i.e., starting from one of the best-known reinforcement learning techniques, Q-learning. We first consider the basics of reinforcement learning: a problem modeled as a Markov decision process and tabular Q-learning. We introduce a formal framework to define a general-purpose rule-based representation which we use to implement tabular Q-learning. We formally define generalization within rules and discuss the possible approaches to extend our rule-based Q-learning with generalization capabilities. We suggest that genetic algorithms are probably the most general approach for adding generalization, although they might not be the only solution.
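The abstract takes tabular Q-learning as its starting point. The following minimal sketch illustrates the tabular Q-learning update Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)] on a toy chain MDP; the environment, the epsilon-greedy policy, and the parameter values are illustrative assumptions and are not taken from the paper.

```python
import random
from collections import defaultdict

# Toy 5-state chain MDP (illustrative assumption, not from the paper):
# states 0..4; reaching state 4 ends the episode with reward 1.
N_STATES = 5
ACTIONS = (-1, +1)  # move left or right along the chain

def step(state, action):
    """Return (next_state, reward, done) for the toy chain."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)  # tabular value store: Q[(state, action)]

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({k: round(v, 3) for k, v in sorted(Q.items())})
```

The paper's rule-based representation replaces the lookup table above with condition-action rules whose conditions may generalize over states; the table-based sketch is only the baseline the paper builds from.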

Citation (APA)

Lanzi, P. L. (2002). Learning classifier systems from a reinforcement learning perspective. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 6(3–4), 162–170. https://doi.org/10.1007/s005000100113
