Evolving keepaway soccer players through task decomposition


Abstract

In some complex control tasks, learning a direct mapping from an agent's sensors to its actuators is very difficult. For such tasks, decomposing the problem into more manageable components can make learning feasible. In this paper, we provide a task decomposition, in the form of a decision tree, for one such task. We investigate two different methods of learning the resulting subtasks. The first approach, layered learning, trains each component sequentially in its own training environment, aggressively constraining the search. The second approach, coevolution, learns all the subtasks simultaneously from the same experiences and puts few restrictions on the learning algorithm. We empirically compare these two training methodologies using neuro-evolution, a machine learning algorithm that evolves neural networks. Our experiments, conducted in the domain of simulated robotic soccer keepaway, indicate that neuro-evolution can learn effective behaviors and that the less constrained coevolutionary approach outperforms the sequential approach. These results provide new evidence of coevolution's utility and suggest that solution spaces should not be over-constrained when supplementing the learning of complex tasks with human knowledge.
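The paper itself contains no code, but for readers unfamiliar with neuro-evolution, the following minimal Python sketch illustrates the generic loop the abstract refers to: a population of genomes encoding the weights of a fixed-topology feedforward network is evaluated, the fittest genomes are kept, and mutated copies refill the population. Everything here is an invented placeholder for illustration: the network sizes, hyperparameters, and especially the toy fitness function, which stands in for the authors' actual evaluation of hold time in the RoboCup keepaway simulator. It is not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 13, 8, 3   # placeholder sizes: state features in, actions out
GENOME_LEN = N_IN * N_HID + N_HID * N_OUT


def forward(genome, obs):
    # Fixed-topology feedforward net; the genome is the flattened weight matrices.
    w1 = genome[: N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
    hid = np.tanh(obs @ w1)
    return hid @ w2  # action scores; argmax would select the agent's action


def fitness(genome, episodes=5):
    # Toy stand-in for running keepaway episodes and measuring hold time.
    # A real evaluation would roll out the policy in the soccer simulator.
    total = 0.0
    for _ in range(episodes):
        obs = rng.normal(size=N_IN)
        total += forward(genome, obs).max()
    return total / episodes


def evolve(pop_size=50, generations=30, elite_frac=0.2, sigma=0.1):
    # Basic generational neuro-evolution: evaluate, keep elites, mutate copies.
    pop = rng.normal(size=(pop_size, GENOME_LEN))
    n_elite = int(pop_size * elite_frac)
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]         # best genomes survive
        parents = elite[rng.integers(n_elite, size=pop_size - n_elite)]
        children = parents + sigma * rng.normal(size=parents.shape)
        pop = np.vstack([elite, children])                 # elitism + mutation
    return pop[np.argmax([fitness(g) for g in pop])]


best = evolve()
print("best genome fitness:", fitness(best))

In the coevolutionary setting the abstract describes, each subtask of the decision tree would have its own population evolved in this manner, with all populations evaluated together in shared episodes rather than sequentially in isolated training environments.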

Citation (APA)

Whiteson, S., Kohl, N., Miikkulainen, R., & Stone, P. (2003). Evolving keepaway soccer players through task decomposition. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2723, 356–368. https://doi.org/10.1007/3-540-45105-6_41
