In this paper, we will show that different kinds of interactive behaviors can emerge according to the kind of proprioceptive function available in a given sensori-motor system. We will study three different examples. In the first one, an internal proprioceptive signal is available for learning the visuo-motor coordination between an arm and a camera. An imitation behavior can emerge when the robot's eye focuses on the hand of the experimenter instead of its own hand. The imitative behavior results from the error minimization between the visual signal and the proprioceptive signal. In the second example, we will show that similar modifications of the robot's initial dynamics allow it to learn some of the space-time properties of more complex behaviors in the form of a sequence of sensori-motor associations. In the third example, a robot head has to recognize the facial expression of the human caregiver, yet the robot has no visual feedback of its own facial expression. The human's expressive resonance allows the robot to select the visual features relevant for a particular facial expression. As a result, after a few minutes of interaction, the robot can imitate the facial expression of the human partner. We will show that the different proprioceptive signals used in these examples can be seen as bootstrap mechanisms for more complex interactions. Taking these systems as a crude model of the human, we will propose that such mechanisms play an important role in the process of individuation. © 2010 Springer-Verlag Berlin Heidelberg.
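The first mechanism described above — imitation emerging from minimizing the discrepancy between what the eye sees and what proprioception reports — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a hypothetical one-joint arm of unit length whose hand position is given by a forward model, and drives the joint angle by gradient descent on the visuo-proprioceptive error. When the "visual target" is the experimenter's hand rather than the robot's own, the same error-minimization loop produces an imitative movement.

```python
import math

def imitate(target, theta=0.0, lr=0.5, steps=200):
    """Drive a one-joint arm so the hand predicted by proprioception
    matches the visually perceived target position.

    target : (x, y) position seen by the camera (hypothetical input)
    theta  : initial joint angle (proprioceptive state)
    Minimizes E = 0.5 * ||hand(theta) - target||^2 by gradient descent.
    """
    tx, ty = target
    L = 1.0  # assumed arm-segment length
    for _ in range(steps):
        # forward model: proprioceptive prediction of the hand position
        hx, hy = L * math.cos(theta), L * math.sin(theta)
        # dE/dtheta, chaining the error through the forward model
        grad = (hx - tx) * (-L * math.sin(theta)) + (hy - ty) * (L * math.cos(theta))
        theta -= lr * grad
    return theta
```

If the target lies on the arm's reachable circle, the loop converges to the joint angle that places the robot's hand on the observed hand, i.e. an imitative posture, without any explicit "imitation" module.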
CITATION STYLE
Lagarde, M., Andry, P., Gaussier, P., Boucenna, S., & Hafemeister, L. (2010). Proprioception and imitation: On the road to agent individuation. In Studies in Computational Intelligence (Vol. 264, pp. 43–63). https://doi.org/10.1007/978-3-642-05181-4_3