Q-learning with FCMAC in multi-agent cooperation

Abstract

In general, Q-learning requires well-defined quantized state and action spaces to learn an optimal policy for a given task. This makes it difficult to apply to real robot tasks, where coarse quantization of continuous state and action spaces degrades the learned behavior. In this paper, we propose a fuzzy-based CMAC (FCMAC) method that weights the contribution of each neighboring state in order to generate a continuous action value, making motion smooth and effective. A momentum term that speeds up training is designed and implemented in a multi-agent system for real robot applications. © Springer-Verlag Berlin Heidelberg 2006.
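The abstract does not reproduce the update equations, so the following is a minimal, illustrative Python sketch of the general idea: a CMAC with staggered tilings whose activated tiles carry fuzzy (here triangular) membership weights, so the Q-value varies smoothly between neighboring states, plus a simple momentum buffer on the weight update. The class name, the one-dimensional state in [0, 1), and the parameters alpha, gamma, and beta are assumptions for illustration, not details taken from the paper.

import numpy as np

class FuzzyCMAC:
    """Sketch of a fuzzy CMAC Q-function approximator (illustrative, not the paper's code)."""

    def __init__(self, n_tilings=8, tiles_per_dim=10, n_actions=4,
                 alpha=0.1, gamma=0.95, beta=0.8):
        self.n_tilings = n_tilings
        self.tiles = tiles_per_dim
        self.w = np.zeros((n_tilings, tiles_per_dim, n_actions))
        self.momentum = np.zeros_like(self.w)   # momentum buffer for weight updates
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def _activations(self, s):
        """Return (tile index, fuzzy membership) per tiling for a state s in [0, 1)."""
        acts = []
        for t in range(self.n_tilings):
            offset = t / self.n_tilings          # staggered tilings
            pos = (s * self.tiles + offset) % self.tiles
            idx = int(pos)
            mu = 1.0 - abs(pos - idx - 0.5)      # triangular membership in the tile
            acts.append((idx, mu))
        return acts

    def q_values(self, s):
        """Membership-weighted average of the activated tiles' weights."""
        acts = self._activations(s)
        total = sum(mu for _, mu in acts)
        return sum(mu * self.w[t, idx] for t, (idx, mu) in enumerate(acts)) / total

    def update(self, s, a, r, s_next):
        """One Q-learning step; each active tile is credited by its membership."""
        target = r + self.gamma * np.max(self.q_values(s_next))
        delta = target - self.q_values(s)[a]     # TD error
        acts = self._activations(s)
        total = sum(mu for _, mu in acts)
        for t, (idx, mu) in enumerate(acts):
            grad = self.alpha * delta * mu / total
            # Momentum term: reuse a fraction of the previous update to speed up training.
            self.momentum[t, idx, a] = self.beta * self.momentum[t, idx, a] + grad
            self.w[t, idx, a] += self.momentum[t, idx, a]

Because the greedy action is read off a membership-weighted Q-value rather than a single quantized cell, nearby states produce nearby action values, which is the smoothing effect the abstract describes.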

Citation (APA)

Hwang, K. S., Chen, Y. J., & Lin, T. F. (2006). Q-learning with FCMAC in multi-agent cooperation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3971 LNCS, pp. 599–606). Springer Verlag. https://doi.org/10.1007/11759966_89
