Multi-agent task division learning in hide-and-seek games

Abstract

This paper addresses the problem of territory division in Hide-and-Seek games. To achieve efficient seeking with multiple seekers, the seekers must agree on searching their own territories and learn to visit good hiding places first, so that the expected time to find the hider is minimized. We propose a learning model using Reinforcement Learning in a hierarchical learning structure: the elemental tasks of planning a path to each hiding place are learnt in the lower layer, and the composite task of finding the optimal visiting sequence is learnt in the higher layer. The proposed approach is evaluated on a set of different maps and converges to the optimal solution. © 2012 Springer-Verlag.
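The two-layer structure described in the abstract can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's implementation: the lower layer uses tabular Q-learning (with random exploring starts) to learn a path policy toward each hiding place on a toy 1-D corridor, and the higher layer then selects the visiting sequence that minimizes total travel. Here the sequence is found by exhaustive enumeration as a stand-in for the paper's higher-layer learning; all names, parameters, and the corridor environment are hypothetical.

```python
import random
from itertools import permutations

def train_path_policy(goal, length, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Lower layer (sketch): tabular Q-learning of a path policy toward `goal`
    on a 1-D corridor of `length` cells. Actions are -1 (left) and +1 (right)."""
    q = {(s, a): 0.0 for s in range(length) for a in (-1, 1)}
    for _ in range(episodes):
        s = random.randrange(length)          # exploring starts
        for _ in range(4 * length):
            if s == goal:
                break
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:                             # greedy with random tie-breaking
                m = max(q[(s, b)] for b in (-1, 1))
                a = random.choice([b for b in (-1, 1) if q[(s, b)] == m])
            s2 = min(max(s + a, 0), length - 1)
            r = 1.0 if s2 == goal else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in (-1, 1)) - q[(s, a)])
            s = s2
    return q

def path_cost(q, start, goal, length):
    """Steps taken by the greedy lower-layer policy (a proxy for path length)."""
    s, steps = start, 0
    while s != goal and steps < 4 * length:
        s = min(max(s + max((-1, 1), key=lambda a: q[(s, a)]), 0), length - 1)
        steps += 1
    return steps

def best_visit_order(costs, places, start=0):
    """Higher layer (sketch): choose the sequence of hiding places with the
    smallest total travel cost. Enumeration stands in for the RL layer here."""
    def tour_cost(order):
        total, pos = 0, start
        for p in order:
            total += costs[(pos, p)]
            pos = p
        return total
    return min(permutations(places), key=tour_cost)
```

As a usage sketch: train one lower-layer policy per hiding place, derive pairwise travel costs from the greedy policies, then let the higher layer order the visits — mirroring the elemental/composite task split in the abstract.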

APA

Gunady, M. K., Gomaa, W., & Takeuchi, I. (2012). Multi-agent task division learning in hide-and-seek games. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7557 LNAI, pp. 256–265). https://doi.org/10.1007/978-3-642-33185-5_29
