Situated learning of visual robot behaviors

Abstract

This paper proposes a new robot learning framework for acquiring scenario-specific autonomous behaviors by demonstration. We extract visual features from the demonstrated behavior examples in an indoor environment and transfer them onto an underlying set of scenario-aware robot behaviors. Demonstrations are performed with an omnidirectional camera, and training instances in different indoor scenarios are registered. The features that distinguish the environment are identified and used to classify the traversed scenarios. Once the scenario is identified, a behavior model specific to that scenario is learned by means of an artificial neural network. The generalization ability of the behavior model is evaluated on seen and unseen data. For comparison, the behaviors attained with a monolithic general-purpose model are evaluated and its generalization ability is compared against that of the scenario-specific models. The experimental results on the mobile robot indicate that the acquired behaviors are robust and generalize meaningful actions beyond the specific situations presented during training. © 2011 Springer-Verlag.
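The abstract describes a two-stage pipeline: classify the current indoor scenario from visual features, then dispatch to a scenario-specific neural-network behavior model. The sketch below illustrates that structure only; the feature extraction, scenario labels, network sizes, and command representation (v, omega) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the situated-learning pipeline from the abstract:
# 1) classify the traversed scenario from omnidirectional-image features,
# 2) query a behavior model trained only on that scenario's demonstrations.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

SCENARIOS = ["corridor", "hall", "doorway"]   # assumed scenario set

rng = np.random.default_rng(0)

# Placeholder training data: rows are visual feature vectors extracted from
# demonstrated runs; y_scn are scenario labels, y_cmd are (v, omega) commands.
X = rng.normal(size=(300, 16))
y_scn = rng.integers(0, len(SCENARIOS), size=300)
y_cmd = rng.normal(size=(300, 2))

# Stage 1: scenario classifier trained on the distinguishing features.
scenario_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0).fit(X, y_scn)

# Stage 2: one behavior model per scenario, trained only on that scenario's
# demonstrations -- the "situated" alternative to a monolithic model.
behavior_models = {}
for s in range(len(SCENARIOS)):
    mask = y_scn == s
    behavior_models[s] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0).fit(X[mask], y_cmd[mask])

def act(features):
    """Classify the scenario, then query that scenario's behavior model."""
    s = int(scenario_clf.predict(features.reshape(1, -1))[0])
    v, omega = behavior_models[s].predict(features.reshape(1, -1))[0]
    return SCENARIOS[s], v, omega

print(act(rng.normal(size=16)))
```

The monolithic baseline mentioned in the abstract would correspond to a single regressor fit on all demonstrations regardless of scenario, which is what the per-scenario models are compared against.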

Citation (APA)

Narayanan, K. K., Posada, L. F., Hoffmann, F., & Bertram, T. (2011). Situated learning of visual robot behaviors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7101 LNAI, pp. 172–182). https://doi.org/10.1007/978-3-642-25486-4_18
