Abstracting visual percepts to learn concepts

Abstract

Efficiently identifying properties of its environment is an essential ability for a mobile robot that needs to interact with humans. Successful approaches to providing robots with this ability rely on ad hoc perceptual representations supplied by AI designers. Instead, our goal is to endow autonomous mobile robots (in our experiments, a Pioneer 2DX) with a perceptual system that can efficiently adapt itself to ease the learning task required to anchor symbols. Our approach is in line with meta-learning algorithms that iteratively change representations so as to discover one that is well fitted to the task. The architecture we propose may be seen as a combination of the two widely used approaches to feature selection: the wrapper model and the filter model. Experiments using the PLIC system to identify the presence of humans and fire extinguishers show the interest of such an approach, which dynamically abstracts a well-fitted image description depending on the concept to be learned.
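
The paper itself does not include code; as a rough illustration only, the following Python sketch shows one generic way a filter stage (a cheap, learner-independent relevance score) can be combined with a wrapper stage (greedy forward selection scored by cross-validating the target learner). The function names, the mutual-information score, the decision-tree learner, and the synthetic data are all assumptions made for illustration, not the PLIC implementation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    def hybrid_filter_wrapper(X, y, n_filter=10, cv=5):
        """Filter stage: keep the n_filter features with the highest
        mutual information with the labels (cheap, learner-independent).
        Wrapper stage: greedy forward selection among the survivors,
        scored by the cross-validated accuracy of the actual learner."""
        # --- Filter stage: rank features by mutual information ---
        scores = mutual_info_classif(X, y, random_state=0)
        candidates = list(np.argsort(scores)[::-1][:n_filter])

        # --- Wrapper stage: greedy forward selection ---
        learner = DecisionTreeClassifier(random_state=0)
        selected, best_score = [], 0.0
        improved = True
        while improved and candidates:
            improved = False
            for f in candidates:
                trial = selected + [f]
                score = cross_val_score(learner, X[:, trial], y, cv=cv).mean()
                if score > best_score:
                    best_score, best_feature = score, f
                    improved = True
            if improved:
                selected.append(best_feature)
                candidates.remove(best_feature)
        return selected, best_score

    if __name__ == "__main__":
        # Synthetic stand-in for image-derived features
        X, y = make_classification(n_samples=300, n_features=30,
                                   n_informative=5, random_state=0)
        subset, acc = hybrid_filter_wrapper(X, y)
        print(f"selected features: {subset}, CV accuracy: {acc:.3f}")

The design point the sketch captures is the one named in the abstract: the filter pass prunes the search space cheaply before the expensive wrapper pass, so the learner-in-the-loop evaluation only runs on a small pool of candidate features.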

Citation (APA)

Zucker, J. D., Bredeche, N., & Saitta, L. (2002). Abstracting visual percepts to learn concepts. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2371, pp. 256–273). Springer Verlag. https://doi.org/10.1007/3-540-45622-8_19
