Towards a spatial model for humanoid social robots

Abstract

This paper presents an approach to endow a humanoid robot with the capability of learning new objects and recognizing them in an unstructured environment. A new object is learnt whenever an unrecognized one is found within a certain (small) distance from the robot's head. Recognized objects are mapped to an ego-centric frame of reference, which, together with a simple short-term memory mechanism, makes this mapping persistent. The robot thus remains aware of known objects even when they are temporarily outside its field of view, providing a primary spatial model of the environment (as far as known objects are concerned). SIFT features are used not only to recognize previously learnt objects but also to estimate their distance from the robot (depth perception). The experiments were carried out on the iCub humanoid robot. This capability operates together with the iCub's low-level attention system: recognized objects elicit salience, attracting the robot's attention so that it gazes at each of them in turn. We claim that the presented approach is a contribution towards linking a bottom-up attention system with top-down cognitive information. © 2009 Springer Berlin Heidelberg.
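
The sketch below illustrates, in Python with OpenCV, the kind of pipeline the abstract describes: learning an object from its SIFT descriptors, recognizing it later by descriptor matching, and keeping recognized objects in a simple short-term memory so they persist while out of view. This is not the authors' implementation; the thresholds (RATIO, MIN_MATCHES, MEMORY_TTL), the ObjectMemory class, and the use of OpenCV's SIFT and brute-force matcher are illustrative assumptions.

    # Minimal sketch (not the authors' code): SIFT-based object learning and
    # recognition with a simple short-term memory of recognized objects.
    # Thresholds and class names below are illustrative assumptions.
    import time
    import cv2

    RATIO = 0.75        # Lowe's ratio-test threshold (assumed value)
    MIN_MATCHES = 12    # good matches required to accept a recognition (assumed)
    MEMORY_TTL = 5.0    # seconds an unseen object persists in memory (assumed)

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    class ObjectMemory:
        """Short-term ego-centric map: recognized objects stay listed for a
        while even after they leave the field of view."""
        def __init__(self):
            self._entries = {}  # name -> (egocentric position, last-seen time)

        def update(self, name, position):
            self._entries[name] = (position, time.time())

        def known_objects(self):
            now = time.time()
            # forget entries not refreshed within MEMORY_TTL
            self._entries = {n: (p, t) for n, (p, t) in self._entries.items()
                             if now - t < MEMORY_TTL}
            return {n: p for n, (p, t) in self._entries.items()}

    def learn_object(name, gray_image, models):
        """Store the SIFT descriptors of a new (close-by, unrecognized) object."""
        _, descriptors = sift.detectAndCompute(gray_image, None)
        models[name] = descriptors

    def recognize(gray_image, models):
        """Return the name of the best-matching learnt object, or None."""
        _, descriptors = sift.detectAndCompute(gray_image, None)
        if descriptors is None:
            return None
        best_name, best_count = None, 0
        for name, model_desc in models.items():
            if model_desc is None or len(model_desc) < 2:
                continue
            good = 0
            for pair in matcher.knnMatch(descriptors, model_desc, k=2):
                if len(pair) == 2 and pair[0].distance < RATIO * pair[1].distance:
                    good += 1
            if good > best_count:
                best_name, best_count = name, good
        return best_name if best_count >= MIN_MATCHES else None

The paper also estimates object distance from the SIFT features themselves; that step is omitted here, since the abstract does not specify the method used.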

Cite

APA

Figueira, D., Lopes, M., Ventura, R., & Ruesch, J. (2009). Towards a spatial model for humanoid social robots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5816 LNAI, pp. 287–298). https://doi.org/10.1007/978-3-642-04686-5_24
