Learning of labeling room space for mobile robots based on visual motor experience


Abstract

A model was developed to allow a mobile robot to label the areas of a typical domestic room using raw sequential visual and motor data; no explicit location information was provided, and no maps were constructed. The model comprised a deep autoencoder and a recurrent neural network. The model was demonstrated to (1) learn to correctly label areas of different shapes and sizes, (2) adapt to changes in room shape and to rearrangement of items in the room, and (3) attribute different labels to the same area when it was approached from different angles. Analysis of the internal representations of the model showed that a topological structure corresponding to the room structure self-organized as the trajectory of the network's internal activations.
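The architecture described above — a deep autoencoder compressing raw visual input into a low-dimensional code, fed together with motor input into a recurrent network that emits an area label at each time step — can be sketched as follows. This is a minimal illustrative forward pass only: all dimensions, layer counts, and the random weights are hypothetical stand-ins, not the trained parameters or exact network sizes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): visual input, motor input,
# latent code, recurrent hidden state, and number of area labels.
VIS_DIM, MOT_DIM, LATENT, HIDDEN, LABELS = 64, 4, 16, 32, 5

# Autoencoder encoder (a single layer shown for brevity): compresses raw
# visual input into a low-dimensional code. Weights are random stand-ins.
W_enc = rng.standard_normal((LATENT, VIS_DIM)) * 0.1

def encode(visual):
    """Map a raw visual frame to its latent code."""
    return np.tanh(W_enc @ visual)

# Elman-style recurrent network: combines the visual code with the motor
# input and the previous hidden state, then predicts an area label.
W_in = rng.standard_normal((HIDDEN, LATENT + MOT_DIM)) * 0.1
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((LABELS, HIDDEN)) * 0.1

def step(code, motor, h):
    """One recurrent step: update hidden state, return it with a label."""
    h = np.tanh(W_in @ np.concatenate([code, motor]) + W_rec @ h)
    return h, int(np.argmax(W_out @ h))

# Run over a short random visuo-motor sequence. The sequence of hidden
# states is the internal-activation trajectory whose topology the paper
# analyzes against the room structure.
h = np.zeros(HIDDEN)
trajectory, labels = [], []
for t in range(10):
    h, label = step(encode(rng.standard_normal(VIS_DIM)),
                    rng.standard_normal(MOT_DIM), h)
    trajectory.append(h.copy())
    labels.append(label)
```

In the paper, training would shape these weights so that the hidden-state trajectory self-organizes a topology mirroring the room; here the random weights only demonstrate the data flow.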

Citation (APA)

Yamada, T., Ito, S., Arie, H., & Ogata, T. (2017). Learning of labeling room space for mobile robots based on visual motor experience. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10613 LNCS, pp. 35–42). Springer Verlag. https://doi.org/10.1007/978-3-319-68600-4_5
