Unsupervised learning of reflexive and action-based affordances to model adaptive navigational behavior


Abstract

Here we introduce a cognitive model capable of modeling a variety of behavioral domains and apply it to a navigational task. We used place cells as the sensory representation, such that the cells' place fields divided the environment into discrete states. The robot acquires knowledge of the environment by memorizing the sensory outcomes of its motor actions. This learning comprises a central process, which learns the probability of state-to-state transitions caused by motor actions, and a distal processing routine, which learns the extent to which these state-to-state transitions are caused by sensory-driven reflex behavior (obstacle avoidance). Navigational decision making integrates the centrally and distally learned environmental knowledge to select an action that leads to a goal state. Differentiating distal and central processing increases the behavioral accuracy of the selected actions and the ability to adapt behavior to a changed environment. We propose that the system can canonically be expanded to model other behaviors using alternative definitions of states and actions. The emphasis of this paper is to test this general cognitive model on a robot in a real-world environment. © 2010 Weiller, Läer, Engel and König.
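The scheme described in the abstract — tabulating state-to-state transition probabilities per action, flagging the fraction of transitions attributable to reflexes, and selecting actions that lead toward a goal state — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and method names, the reflex-fraction bookkeeping, and the value-iteration planner are all assumptions introduced here.

```python
from collections import defaultdict

class AffordanceModel:
    """Illustrative sketch of the learning scheme from the abstract.

    Central process: count observed (state, action, next_state) transitions
    and normalize them into transition probabilities.
    Distal process (sketched): track what fraction of each transition was
    caused by a sensory-driven reflex (e.g. obstacle avoidance).
    """

    def __init__(self, n_states, actions):
        self.n_states = n_states
        self.actions = actions
        self.counts = defaultdict(lambda: defaultdict(int))        # (s, a) -> {s': n}
        self.reflex_counts = defaultdict(lambda: defaultdict(int)) # (s, a) -> {s': n}

    def observe(self, s, a, s_next, reflex=False):
        """Memorize the sensory outcome s_next of taking action a in state s."""
        self.counts[(s, a)][s_next] += 1
        if reflex:
            self.reflex_counts[(s, a)][s_next] += 1

    def transition_prob(self, s, a, s_next):
        total = sum(self.counts[(s, a)].values())
        return self.counts[(s, a)][s_next] / total if total else 0.0

    def reflex_prob(self, s, a, s_next):
        """Fraction of the observed s -a-> s_next transitions that were reflex-driven."""
        n = self.counts[(s, a)][s_next]
        return self.reflex_counts[(s, a)][s_next] / n if n else 0.0

    def plan(self, start, goal, gamma=0.9, iters=50):
        """Select the action at `start` whose learned transitions lead toward `goal`
        (simple value iteration over the learned model; value 1 at the goal state)."""
        V = [0.0] * self.n_states
        V[goal] = 1.0
        for _ in range(iters):
            for s in range(self.n_states):
                if s == goal:
                    continue
                best = 0.0
                for a in self.actions:
                    total = sum(self.counts[(s, a)].values())
                    if total == 0:
                        continue
                    q = gamma * sum(n / total * V[sp]
                                    for sp, n in self.counts[(s, a)].items())
                    best = max(best, q)
                V[s] = best
        # Greedy action choice at the start state.
        best_a, best_q = None, -1.0
        for a in self.actions:
            total = sum(self.counts[(start, a)].values())
            if total == 0:
                continue
            q = sum(n / total * V[sp] for sp, n in self.counts[(start, a)].items())
            if q > best_q:
                best_a, best_q = a, q
        return best_a
```

For example, on a four-state corridor with `left`/`right` actions, feeding in the deterministic outcomes of each action lets `plan(0, 3)` select `right`. The paper's actual model additionally uses the distal reflex estimates during decision making; here they are only exposed via `reflex_prob`.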

Citation (APA)

Weiller, D., Läer, L., Engel, A. K., & König, P. (2010). Unsupervised learning of reflexive and action-based affordances to model adaptive navigational behavior. Frontiers in Neurorobotics, 4(MAY). https://doi.org/10.3389/fnbot.2010.00002
