Evolutionary learning of goal priorities in a real-time strategy game


Abstract

We present a drive-based agent capable of playing the real-time strategy computer game StarCraft. Success at this task requires the ability to engage in autonomous, goal-directed behaviour, as well as techniques to manage the problem of potential goal conflicts. To address this, we show how a case-injected genetic algorithm can be used to learn goal priority profiles for use in goal management. This is achieved by learning how goals might be re-prioritised under certain operating conditions, and how priority profiles can be used to dynamically guide high-level strategies. Our dynamic system shows greatly improved results over a version equipped with static knowledge, and over a version that only partially exploits the space of learned strategies. However, our work raises questions about what a system must know about its own design in order to best exploit its own competences. Copyright © IARIA, 2011.
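The core idea of a case-injected genetic algorithm is to seed part of the population with "cases" (solutions remembered from earlier, similar problems) rather than starting entirely from random individuals. The sketch below illustrates this for a vector of goal-priority weights; the goal names, fitness function, and all parameters are illustrative assumptions, not the paper's actual implementation, which evaluates profiles by playing StarCraft.

```python
import random

# Hypothetical goal set; the paper's actual goals are not listed in the abstract.
GOALS = ["attack", "defend", "expand", "scout"]

def random_profile():
    """A priority profile: one weight in [0, 1] per goal."""
    return [random.random() for _ in GOALS]

def fitness(profile):
    # Stand-in for a game evaluation: distance to an arbitrary
    # illustrative optimum. A real system would score the profile
    # by playing (or simulating) StarCraft matches.
    target = [0.9, 0.5, 0.7, 0.3]
    return -sum((p - t) ** 2 for p, t in zip(profile, target))

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(profile, rate=0.1):
    return [min(1.0, max(0.0, p + random.gauss(0, 0.1)))
            if random.random() < rate else p
            for p in profile]

def evolve(generations=50, pop_size=20, cases=None):
    # Case injection: replace part of the initial random population
    # with profiles remembered from earlier, similar problems.
    population = [random_profile() for _ in range(pop_size)]
    if cases:
        population[:len(cases)] = [list(c) for c in cases]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]  # elitist selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve(cases=[[0.8, 0.4, 0.6, 0.2]])
```

Because the top half of the population survives each generation unchanged, an injected case that outperforms the random individuals is retained, so the search can only improve on it. In the paper's setting the learned profiles then drive how goals are re-prioritised under different operating conditions.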

Young, J., & Hawes, N. (2011). Evolutionary learning of goal priorities in a real-time strategy game. In ACHI 2011 - 4th International Conference on Advances in Computer-Human Interactions (pp. 1–4).
