In this article, an autonomous visual perception framework for humanoids is presented. This model-based framework exploits the available knowledge and the context acquired during global localization in order to overcome the limitations of purely data-driven approaches. Reasoning for perception and the proprioceptive components are the key elements in solving complex visual assertion queries with proficient performance. An experimental evaluation with the humanoid robot ARMAR-IIIa is presented. © 2009 Springer-Verlag Berlin Heidelberg.
CITATION STYLE
Gonzalez-Aguirre, D., Wieland, S., Asfour, T., & Dillmann, R. (2009). On environmental model-based visual perception for humanoids. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5856 LNCS, pp. 901–909). https://doi.org/10.1007/978-3-642-10268-4_106