Inferring adaptive goal-directed behavior within recurrent neural networks

Abstract

This paper shows that active-inference-based, flexible, adaptive goal-directed behavior can be generated by utilizing temporal gradients in a recurrent neural network (RNN). The RNN learns a dynamical sensorimotor forward model of a partially observable environment and then uses this model to execute goal-directed policy inference online. The internal neural activities encode the predictive state of the controlled entity. The active-inference process projects these activities into the future via the RNN's recurrences, following a tentative sequence of motor commands. This sequence is adapted by back-projecting the error between the forward-projected hypothetical states and the desired goal states onto the motor commands. As an example, we show that a trained RNN model can precisely control a multi-copter-like system. Moreover, we show that the RNN can plan hundreds of time steps ahead, unfolding non-linear imaginary paths around obstacles.
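The abstract's core loop — roll a tentative motor sequence forward through the learned model, then back-project the goal error onto the motor commands — can be sketched as follows. This is a minimal illustration, not the authors' implementation: a tiny untrained tanh RNN with random weights stands in for the paper's learned sensorimotor forward model, and all dimensions, weight scales, horizons, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the learned sensorimotor forward model: a small tanh RNN.
# In the paper this model would first be trained on sensorimotor data; here
# the random weights merely give us some nonlinear dynamics to plan through.
H, U_DIM = 8, 2                       # hidden size, motor-command dimension
W = rng.normal(0.0, 0.3, (H, H))      # recurrent weights
U = rng.normal(0.0, 0.3, (H, U_DIM))  # motor-input weights
C = rng.normal(0.0, 0.3, (2, H))      # readout: hidden state -> 2-D position

def rollout(h0, u_seq):
    """Project the hidden state into the future along a motor sequence."""
    h, hs = h0, []
    for u in u_seq:
        h = np.tanh(W @ h + U @ u)
        hs.append(h)
    return hs

def infer_actions(h0, goal, T=20, iters=1000, lr=0.05):
    """Adapt a tentative motor sequence by back-projecting the error between
    the forward-projected final state and the goal through the recurrences."""
    u_seq = np.zeros((T, U_DIM))       # tentative motor commands, initially zero
    for _ in range(iters):
        hs = rollout(h0, u_seq)
        err = C @ hs[-1] - goal        # error at the final predicted state
        lam = C.T @ (2.0 * err)        # gradient of the squared error w.r.t. h_T
        grads = np.zeros_like(u_seq)
        for t in range(T - 1, -1, -1):     # back-project through time
            pre = lam * (1.0 - hs[t] ** 2) # through the tanh nonlinearity
            grads[t] = U.T @ pre           # ...onto the motor command u_t
            lam = W.T @ pre                # ...and one step further back
        u_seq -= lr * grads            # adapt the motor sequence
    return u_seq

h0 = np.zeros(H)
goal = np.array([0.5, -0.3])
u_seq = infer_actions(h0, goal)
final_pos = C @ rollout(h0, u_seq)[-1]
print("distance to goal:", np.linalg.norm(final_pos - goal))
```

The sketch plans only 20 steps toward a single terminal goal, whereas the paper reports planning hundreds of steps ahead; a fuller active-inference treatment would also weight prediction errors by their precisions, which is omitted here for brevity.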

Citation (APA)

Otte, S., Schmitt, T., Friston, K., & Butz, M. V. (2017). Inferring adaptive goal-directed behavior within recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10613 LNCS, pp. 227–235). Springer Verlag. https://doi.org/10.1007/978-3-319-68600-4_27
