Solving reinforcement learning problems in continuous spaces with function approximation is currently an active research topic in machine learning. On continuous-space problems, classic Q-iteration algorithms based on lookup tables or function approximation converge slowly and struggle to derive a continuous policy. To overcome these weaknesses, we propose an algorithm named DFR-Sarsa(λ) based on double-layer fuzzy reasoning and prove its convergence. In this algorithm, the first reasoning layer uses fuzzy sets over the state to compute continuous actions; the second reasoning layer uses fuzzy sets over the action to compute the components of the Q-value. These two fuzzy layers are then combined to compute the Q-value function over the continuous action space. In addition, the algorithm uses the membership degrees of the activated rules in the two fuzzy reasoning layers to update the eligibility traces. Experiments on the Mountain Car and Cart-pole Balancing problems show that the algorithm not only yields a continuous action policy but also achieves better convergence performance. © 2013 Quan Liu et al.
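To make the two-layer construction described above concrete, the following is a minimal Python sketch, not the authors' exact formulation. It assumes a one-dimensional state and action, evenly spaced triangular membership functions, and per-rule ε-greedy candidate selection; the names (DFRSarsa, triangular, and all parameters) are illustrative placeholders.

```python
import numpy as np

def triangular(x, centers):
    """Normalized triangular membership degrees of scalar x over evenly
    spaced fuzzy-set centers (assumes x lies within the centers' range)."""
    width = centers[1] - centers[0]
    mu = np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)
    return mu / mu.sum()

class DFRSarsa:
    """Sketch of Sarsa(lambda) with two fuzzy layers over a continuous action space."""

    def __init__(self, state_centers, action_centers,
                 alpha=0.1, gamma=0.99, lam=0.9, epsilon=0.1):
        self.sc = np.asarray(state_centers)   # layer 1: fuzzy sets over the state
        self.ac = np.asarray(action_centers)  # layer 2: fuzzy sets over the action
        self.alpha, self.gamma, self.lam, self.epsilon = alpha, gamma, lam, epsilon
        self.q = np.zeros((len(self.sc), len(self.ac)))  # rule consequents q_ij
        self.e = np.zeros_like(self.q)                   # eligibility traces e_ij

    def act(self, s):
        """Layer 1: each activated state rule picks a candidate action
        (epsilon-greedy on its row of q); the continuous action is the
        membership-weighted blend of the chosen candidates."""
        phi = triangular(s, self.sc)
        greedy = np.argmax(self.q, axis=1)
        random_picks = np.random.randint(len(self.ac), size=len(self.sc))
        picks = np.where(np.random.rand(len(self.sc)) < self.epsilon,
                         random_picks, greedy)
        return float(phi @ self.ac[picks])

    def q_value(self, s, a):
        """Layers combined: Q(s, a) = sum_ij phi_i(s) * psi_j(a) * q_ij."""
        phi, psi = triangular(s, self.sc), triangular(a, self.ac)
        return float(phi @ self.q @ psi)

    def update(self, s, a, r, s2, a2, done):
        """Sarsa(lambda) step; the trace increment is the product of the two
        layers' activation degrees, phi_i(s) * psi_j(a)."""
        phi, psi = triangular(s, self.sc), triangular(a, self.ac)
        target = r if done else r + self.gamma * self.q_value(s2, a2)
        delta = target - self.q_value(s, a)
        self.e = self.gamma * self.lam * self.e + np.outer(phi, psi)
        self.q += self.alpha * delta * self.e
        if done:
            self.e[:] = 0.0
```

In a training loop one would call act, step the environment, then update with the successive state-action pairs; the γλ trace decay plus the outer-product increment mirrors the abstract's use of both layers' membership degrees to update the eligibility traces.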
CITATION
Liu, Q., Mu, X., Huang, W., Fu, Q., & Zhang, Y. (2013). A Sarsa(λ) algorithm based on double-layer fuzzy reasoning. Mathematical Problems in Engineering, 2013. https://doi.org/10.1155/2013/561026