On environmental model-based visual perception for humanoids


Abstract

In this article an autonomous visual perception framework for humanoids is presented. This model-based framework exploits the available knowledge and the context acquired during global localization in order to overcome the limitations of purely data-driven approaches. Reasoning for perception and the proprioceptive components are the key elements for solving complex visual assertion queries with proficient performance. An experimental evaluation with the humanoid robot ARMAR-IIIa is presented. © 2009 Springer-Verlag Berlin Heidelberg.

Citation (APA)

Gonzalez-Aguirre, D., Wieland, S., Asfour, T., & Dillmann, R. (2009). On environmental model-based visual perception for humanoids. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5856 LNCS, pp. 901–909). https://doi.org/10.1007/978-3-642-10268-4_106
