QFCS: A fuzzy LCS in continuous multi-step environments with continuous vector actions

Abstract

This paper introduces QFCS, a new approach to fuzzy learning classifier systems. QFCS can solve multistep reinforcement learning problems in continuous environments with a set of continuous vector actions. Rules in QFCS are small fuzzy systems, and QFCS uses a Q-learning algorithm to learn the mapping between inputs and outputs. The paper presents results showing that QFCS can evolve rules that represent only those parts of the input and action space where the expected values matter for decision making. Results for QFCS are compared with those obtained by Q-learning over a finely discretized state space: on one-dimensional problems with an optimal solution, the new approach converges much as Q-learning does, while on two-dimensional problems QFCS learns suboptimal solutions where Q-learning struggles to converge because of that fine discretization. © 2008 Springer-Verlag Berlin Heidelberg.
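The abstract does not spell out the update equations, so the sketch below is only a rough illustration of one common way to combine fuzzy rules with Q-learning in a continuous state and action setting: each rule is a small fuzzy system (here, a Gaussian membership over the state paired with a continuous action vector and a Q-value), actions are defuzzified by firing strength, and the temporal-difference error is shared among the rules that fired. The names (FuzzyRule, act, q_update) and the Gaussian memberships are illustrative assumptions, not the authors' exact QFCS formulation.

```python
import numpy as np

class FuzzyRule:
    """One rule: a fuzzy condition over the continuous state, a continuous
    action vector, and a Q-value estimate (hypothetical, not the paper's
    exact rule representation)."""
    def __init__(self, center, width, action, q=0.0):
        self.center = np.asarray(center, dtype=float)  # condition center
        self.width = float(width)                      # membership spread
        self.action = np.asarray(action, dtype=float)  # continuous action vector
        self.q = q                                     # Q-value estimate

    def membership(self, state):
        # Gaussian degree of match between the state and the rule condition
        d = np.linalg.norm(np.asarray(state, dtype=float) - self.center)
        return np.exp(-(d / self.width) ** 2)

def act(rules, state):
    # Defuzzified action: firing-strength-weighted average of the actions
    # proposed by all matching rules. Returns the action and the weights.
    w = np.array([r.membership(state) for r in rules])
    w = w / (w.sum() + 1e-12)
    actions = np.stack([r.action for r in rules])
    return w @ actions, w

def q_update(rules, w, reward, next_state, alpha=0.1, gamma=0.95):
    # Fuzzy Q-learning update: each rule that fired receives a share of the
    # temporal-difference error proportional to its firing strength.
    w_next = np.array([r.membership(next_state) for r in rules])
    w_next = w_next / (w_next.sum() + 1e-12)
    q_next = w_next @ np.array([r.q for r in rules])  # fuzzy value of next state
    for rule, wi in zip(rules, w):
        td = reward + gamma * q_next - rule.q
        rule.q += alpha * wi * td
```

As a usage sketch, an agent would call act(rules, state) to pick a continuous action, observe a reward and next state from the environment, and then call q_update(rules, w, reward, next_state); the evolutionary component that the paper describes (evolving which regions of the input and action space get rules at all) is omitted here.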

Citation (APA)

Ramírez-Ruiz, J., Valenzuela-Rendón, M., & Terashima-Marín, H. (2008). QFCS: A fuzzy LCS in continuous multi-step environments with continuous vector actions. In Lecture Notes in Computer Science (Vol. 5199, pp. 286–295). Springer. https://doi.org/10.1007/978-3-540-87700-4_29
