As we develop non-player characters for complex, dynamic game environments, predefining task-specific rules and rewards, or environment-specific motivation signals, becomes less feasible. In such environments, more believable non-player characters require computational processes that enable a character to focus attention on the relevant portion of a complex environment and to be curious about changes in that environment. Previous chapters showed how reinforcement learning algorithms can automatically generate behaviours; in this chapter we show how the concepts of curiosity, motivation, and attention focus combine to achieve a curious learning agent. We present an agent approach that characterises all information outside the agent as the environment, allowing us to conceptualise and model the environment from the agent's perspective. This approach distinguishes between the learner and the environment. After characterising the environment, we then look inside the agent at the motivation process.
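As an illustration of the general idea, the sketch below pairs a standard tabular Q-learning update with an intrinsic curiosity reward. The count-based novelty signal (reward decaying as a state is revisited) is an assumption chosen for brevity, not the chapter's own motivation model; the class and method names are likewise hypothetical.

```python
import random
from collections import defaultdict


class CuriousAgent:
    """Tabular Q-learner driven by an intrinsic novelty reward
    rather than a predefined task reward (illustrative sketch)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)       # (state, action) -> value
        self.visits = defaultdict(int)    # state -> visit count
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def curiosity_reward(self, state):
        # Novelty fades with familiarity: reward = 1 / sqrt(visit count).
        # (A stand-in for a richer interest/novelty model.)
        self.visits[state] += 1
        return 1.0 / self.visits[state] ** 0.5

    def act(self, state):
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, next_state):
        # The reward is generated internally from novelty, not by the task.
        reward = self.curiosity_reward(next_state)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
        return reward
```

Because the reward is generated inside the agent from what it observes, the same learner can be dropped into different environments without redefining a reward function; novel states attract attention until repeated visits make them familiar.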
Merrick, K. E., & Maher, M. L. (2009). Curiosity, Motivation and Attention Focus. In Motivated Reinforcement Learning (pp. 91–120). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-89187-1_5