Acquisition of viewpoint transformation and action mappings via sequence to sequence imitative learning by deep neural networks

Abstract

We propose an imitative learning model that allows a robot to acquire the positional relation between a demonstrator and itself and to transform observed actions into its own actions. Providing robots with imitative capabilities allows us to teach them novel actions without resorting to trial-and-error approaches. Existing methods for imitative robot learning require mathematical formulations or dedicated conversion modules to translate positional relations between demonstrators and robots. The proposed model consists of two neural networks: a convolutional autoencoder (CAE) and a multiple-timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw camera images, and the MTRNN is trained to integrate sensory-motor information and to predict next states. We implemented this model on a robot and conducted sequence-to-sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations are formed in the hierarchical structure of the MTRNN. After training, we confirmed the model's capability to generate unlearned imitative patterns.
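
For concreteness, the sketch below (plain PyTorch, not the authors' implementation) illustrates how a CAE feature extractor and a two-timescale MTRNN predictor of the kind described in the abstract could be wired together. Layer sizes, time constants, joint dimensionality, image resolution, and the exact context-unit wiring are illustrative assumptions, not values from the paper.

# Minimal sketch: CAE for visual features + MTRNN for next-state prediction.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder; the bottleneck serves as the visual feature vector."""
    def __init__(self, feat_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, feat_dim), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        feat = self.encoder(img)
        return self.decoder(feat), feat

class MTRNN(nn.Module):
    """Leaky-integrator RNN with fast (small tau) and slow (large tau) context units."""
    def __init__(self, io_dim, fast_dim=60, slow_dim=20, tau_fast=2.0, tau_slow=30.0):
        super().__init__()
        self.fast_dim, self.slow_dim = fast_dim, slow_dim
        self.tau_fast, self.tau_slow = tau_fast, tau_slow
        # Simplified wiring: input and both context layers feed the fast units,
        # slow units see only the context layers.
        self.to_fast = nn.Linear(io_dim + fast_dim + slow_dim, fast_dim)
        self.to_slow = nn.Linear(fast_dim + slow_dim, slow_dim)
        self.readout = nn.Linear(fast_dim, io_dim)  # predicts next sensory-motor state

    def step(self, x, u_fast, u_slow):
        h_fast, h_slow = torch.tanh(u_fast), torch.tanh(u_slow)
        # Leaky integration: a larger time constant gives slower dynamics.
        u_fast = (1 - 1 / self.tau_fast) * u_fast + \
            (1 / self.tau_fast) * self.to_fast(torch.cat([x, h_fast, h_slow], dim=-1))
        u_slow = (1 - 1 / self.tau_slow) * u_slow + \
            (1 / self.tau_slow) * self.to_slow(torch.cat([h_fast, h_slow], dim=-1))
        return torch.tanh(self.readout(torch.tanh(u_fast))), u_fast, u_slow

    def forward(self, seq):                       # seq: (time, batch, io_dim)
        batch = seq.shape[1]
        u_fast = seq.new_zeros(batch, self.fast_dim)
        u_slow = seq.new_zeros(batch, self.slow_dim)
        preds = []
        for x in seq:                             # teacher forcing over the sequence
            y, u_fast, u_slow = self.step(x, u_fast, u_slow)
            preds.append(y)
        return torch.stack(preds)                 # one-step-ahead predictions

# Usage: concatenate CAE visual features with joint angles and train the MTRNN
# to predict the next step of the demonstrator/robot sequence (dummy data here).
cae, mtrnn = CAE(feat_dim=10), MTRNN(io_dim=10 + 8)    # 8 joint angles assumed
imgs = torch.rand(16, 3, 64, 64)
recon, feat = cae(imgs)
seq = torch.rand(50, 4, 18)                            # (time, batch, features)
pred = mtrnn(seq)
loss = nn.functional.mse_loss(pred[:-1], seq[1:])      # one-step prediction error

In such a setup the slow context units can capture sequence-level information (which action, which object, which viewpoint), while the fast units track step-by-step motor dynamics; this is the kind of hierarchical representation the abstract attributes to the MTRNN.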

Citation (APA)

Nakajo, R., Murata, S., Arie, H., & Ogata, T. (2018). Acquisition of viewpoint transformation and action mappings via sequence to sequence imitative learning by deep neural networks. Frontiers in Neurorobotics, 12. https://doi.org/10.3389/fnbot.2018.00046
