One of the central themes in autonomous robot research concerns the question of how visual images of body movements by others can be interpreted and related to one's own body movements, and to language describing these body movements. The discovery of mirror neurons has shown that there are brain circuits which become active both in the perception and the re-enactment of bodily gestures, although it is so far unclear how these circuits can form, i.e. how neurons become mirror neurons. We report here further progress with our robot experiments in which a group of autonomous robots play language games in order to coordinate their visual, motor and cognitive body image. We have shown that the right kind of semiotic dynamics can lead to the self-organisation of a successful communication system with which robots can ask each other to perform certain actions. The main contribution of this paper is to show that if the robot has the capacity to 'imagine' the behavior of its own body through self-simulation, it is better able to guess what action corresponds to a visual image produced by another robot, and thus to guess the meaning of an unknown word. This leads to a significant speed-up in the way individual agents are able to coordinate visual categories, motor behaviors and language. © 2007 alifexi.org.
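The core claim can be illustrated with a toy sketch (this is not the authors' model, and all names here — `Agent`, `interpret`, the action list — are illustrative assumptions): two agents play a naming game over a small set of actions; a hearer who meets an unknown word either guesses its meaning at random, or, standing in for self-simulation, re-enacts candidate actions and adopts the one that matches the observed behavior, which removes guessing errors and speeds up convergence.

```python
import random

# Hypothetical repertoire of body actions the agents can name.
ACTIONS = ["raise-arm", "wave", "bow"]

class Agent:
    def __init__(self, can_simulate):
        self.lexicon = {}               # word -> action
        self.can_simulate = can_simulate

    def word_for(self, action):
        """Return the agent's word for an action, inventing one if needed."""
        for w, a in self.lexicon.items():
            if a == action:
                return w
        w = "w%d" % random.randrange(10**6)
        self.lexicon[w] = action
        return w

    def interpret(self, word, observed_action):
        """Map a heard word to an action, adopting a meaning if unknown."""
        if word in self.lexicon:
            return self.lexicon[word]
        # Unknown word: a self-simulating hearer re-enacts candidate
        # actions and keeps the one matching the observed image (modeled
        # here as adopting the correct action directly); otherwise it
        # guesses at random and keeps that guess, right or wrong.
        guess = observed_action if self.can_simulate else random.choice(ACTIONS)
        self.lexicon[word] = guess
        return guess

def play(agents, rounds=500):
    """Run naming-game rounds; return the fraction of successful games."""
    successes = 0
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        action = random.choice(ACTIONS)
        word = speaker.word_for(action)
        if hearer.interpret(word, action) == action:
            successes += 1
    return successes / rounds
```

Comparing `play([Agent(True), Agent(True)])` against `play([Agent(False), Agent(False)])` shows the simulating pair reaching full communicative success immediately, while the guessing pair gets stuck with whatever mappings it first adopted; the real experiments use richer repair and alignment dynamics, but the direction of the effect is the same.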
CITATION STYLE
Steels, L., & Spranger, M. (2008). Can body language shape body image? In Artificial Life XI: Proceedings of the 11th International Conference on the Simulation and Synthesis of Living Systems, ALIFE 2008 (pp. 577–584).