Action-driven perception for a humanoid

Abstract

We present active object categorization experiments with a real humanoid robot. For this purpose, the training algorithm of a recurrent neural network with parametric bias (RNNPB) has been extended with adaptive learning rates, a modification that increases training speed. Using this new training algorithm, we conducted three object categorization experiments. While holding different objects in its hand, the robot executes a motor sequence that induces multi-modal sensory changes. During learning, these high-dimensional perceptions are 'engraved' in the network; simultaneously, low-dimensional parametric bias (PB) values emerge unsupervised. The geometric relations of these PB vectors can then be exploited to infer relations between the original high-dimensional time series characterizing different objects. Even sensations belonging to unknown objects can be discriminated reliably from known (learned) ones and kept apart from each other. Additionally, we show that the network tolerates noisy sensory signals very well. © Springer-Verlag Berlin Heidelberg 2013.
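The mechanism the abstract describes can be sketched in a minimal NumPy toy model: an Elman-style recurrent network whose hidden layer also receives a low-dimensional PB vector, trained by backpropagation through time, where the weights and each sequence's PB vector are updated jointly so that PB values self-organize per "object". The network sizes, the synthetic sensory signals, and the delta-bar-delta-style adaptive learning rate rule below are illustrative assumptions, not the paper's actual architecture or rate schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_pb, T = 3, 12, 2, 20

def make_seq(freq):
    # toy "sensation": three phase-shifted sinusoids per object (illustrative)
    t = np.arange(T + 1)[:, None]
    return np.sin(freq * t + np.array([0.0, 1.0, 2.0]))

seqs = [make_seq(0.3), make_seq(0.9)]          # two toy "objects"
pbs = [np.zeros(n_pb) for _ in seqs]           # one PB vector per object

p = {
    "W_in":  rng.normal(0, 0.3, (n_hid, n_in)),
    "W_rec": rng.normal(0, 0.3, (n_hid, n_hid)),
    "W_pb":  rng.normal(0, 0.3, (n_hid, n_pb)),
    "W_out": rng.normal(0, 0.3, (n_in, n_hid)),
    "b_h":   np.zeros(n_hid),
    "b_o":   np.zeros(n_in),
}
lr = {k: 0.005 for k in p}                     # one adaptive rate per parameter group
prev = {k: np.zeros_like(v) for k, v in p.items()}

def bptt(seq, pb):
    """One forward/backward pass: predict x[t+1] from x[t], state and PB."""
    hs, ys, loss = [np.zeros(n_hid)], [], 0.0
    for t in range(T):
        h = np.tanh(p["W_in"] @ seq[t] + p["W_rec"] @ hs[-1]
                    + p["W_pb"] @ pb + p["b_h"])
        y = p["W_out"] @ h + p["b_o"]
        hs.append(h); ys.append(y)
        loss += 0.5 * np.sum((y - seq[t + 1]) ** 2)
    g = {k: np.zeros_like(v) for k, v in p.items()}
    g_pb, dh_next = np.zeros(n_pb), np.zeros(n_hid)
    for t in reversed(range(T)):
        dy = ys[t] - seq[t + 1]
        g["W_out"] += np.outer(dy, hs[t + 1]); g["b_o"] += dy
        dz = (p["W_out"].T @ dy + dh_next) * (1 - hs[t + 1] ** 2)
        g["W_in"]  += np.outer(dz, seq[t])
        g["W_rec"] += np.outer(dz, hs[t])
        g["W_pb"]  += np.outer(dz, pb)
        g["b_h"]   += dz
        g_pb += p["W_pb"].T @ dz               # gradient w.r.t. the PB vector
        dh_next = p["W_rec"].T @ dz
    return loss, g, g_pb

for epoch in range(400):
    for i, seq in enumerate(seqs):
        loss, g, g_pb = bptt(seq, pbs[i])
        for k in p:
            np.clip(g[k], -5, 5, out=g[k])     # simple clipping for stability
            # adaptive rate: grow on gradient-sign agreement, shrink otherwise
            agree = np.sum(g[k] * prev[k])
            lr[k] = min(lr[k] * 1.05, 0.02) if agree > 0 else max(lr[k] * 0.7, 1e-4)
            p[k] -= lr[k] * g[k]
            prev[k] = g[k]
        pbs[i] -= 0.02 * np.clip(g_pb, -5, 5)  # PB vectors self-organize per object

# after training, the two objects occupy distinct points in PB space;
# distances in this low-dimensional space can stand in for relations
# between the original high-dimensional time series
print("PB distance between objects:", np.linalg.norm(pbs[0] - pbs[1]))
```

Because each object's PB vector is trained against the shared weights, the geometric separation of the PB points mirrors how different the underlying sensory sequences are, which is the property the paper exploits for categorizing known and unknown objects.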

Citation (APA)

Kleesiek, J., Badde, S., Wermter, S., & Engel, A. K. (2013). Action-driven perception for a humanoid. In Communications in Computer and Information Science (Vol. 358, pp. 83–99). Springer Verlag. https://doi.org/10.1007/978-3-642-36907-0_6
