Exploration strategies for homeostatic agents

Abstract

This paper evaluates two new exploration strategies for artificial animals, called animats. Animats are homeostatic agents whose objective is to keep their internal variables as close to their optimal levels as possible: steps toward the optimum are rewarded and steps away from it are punished. By using reinforcement learning for exploration and decision making, the animats can weigh predetermined optimal and acceptable levels against their current internal levels, giving them greater flexibility in exploration and better chances of survival. The resulting strategies are evaluated in a range of environments and shown to outperform standard reinforcement learning, in which internal variables are not taken into account.
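
To make the reward principle described above concrete, here is a minimal sketch of a homeostatic reward signal: the reward is the reduction in total deviation of the internal variables from their setpoints after an action. This is an illustrative assumption for exposition, not the paper's actual algorithm; the function and parameter names (homeostatic_reward, internal_vars, setpoints, action_effects) are hypothetical.

    import numpy as np

    def homeostatic_reward(internal_vars, setpoints, action_effects):
        """Reward = reduction in total deviation from the setpoints.

        Positive when an action moves the internal variables closer to
        their optimal levels, negative when it moves them away.
        """
        internal_vars = np.asarray(internal_vars, dtype=float)
        setpoints = np.asarray(setpoints, dtype=float)
        after_vars = internal_vars + np.asarray(action_effects, dtype=float)

        deviation_before = np.abs(internal_vars - setpoints).sum()
        deviation_after = np.abs(after_vars - setpoints).sum()
        return deviation_before - deviation_after

    # Example: one internal variable (e.g. energy) below its setpoint;
    # an action that raises it toward the setpoint yields positive reward.
    print(homeostatic_reward(internal_vars=[0.4], setpoints=[1.0],
                             action_effects=[0.3]))  # 0.3 -> rewarded

Such a signal can then be fed to any standard reinforcement learning update, which is what allows the agent to condition its exploration on how far its internal state is from the acceptable range.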

Citation (APA)

Andersson, P., Strandman, A., & Strannegård, C. (2019). Exploration strategies for homeostatic agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11654 LNAI, pp. 178–187). Springer Verlag. https://doi.org/10.1007/978-3-030-27005-6_18
