The goal of this research is to provide a real-time, adaptive spoken language interface between humans and a humanoid robot. The system should learn new grammatical constructions in real time and use them immediately, either later in the same interactive session or in a subsequent one. To achieve this, we use a recurrent neural network of 500 neurons, an echo state network with leaky-integrator neurons [1]. The model processes sentences as grammatical constructions: the semantic words (nouns and verbs) are extracted and stored in working memory, while the grammatical words (prepositions, auxiliary verbs, etc.) serve as inputs to the network. The trained network's outputs code the role (predicate, agent, object/location) that each semantic word takes. In the final output, the stored semantic words are then mapped onto their respective roles. The model thus learns the mappings between the grammatical structure of sentences and their meanings.
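The construction-based processing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function-word list, the slot-marker naming, and the role labels handed to the binding step are all assumptions, and the role assignment that the trained network would produce is mocked with a fixed list.

```python
# Sketch of construction-based sentence processing, assuming a small
# hand-picked function-word list and mocked network role outputs.
FUNCTION_WORDS = {"the", "a", "was", "is", "by", "to", "on"}

def to_construction(sentence):
    """Replace semantic words with slot markers (SW1, SW2, ...) and
    store them in working memory; function words stay in place and
    form the input sequence fed to the recurrent network."""
    construction, working_memory = [], []
    for word in sentence.lower().split():
        if word in FUNCTION_WORDS:
            construction.append(word)
        else:
            working_memory.append(word)
            construction.append(f"SW{len(working_memory)}")
    return construction, working_memory

def bind_roles(working_memory, roles):
    """Map each stored semantic word onto the role the network
    assigned to its slot (predicate, agent, object/location)."""
    return {role: word for role, word in zip(roles, working_memory)}

construction, memory = to_construction("the ball was pushed by john")
# construction: ['the', 'SW1', 'was', 'SW2', 'by', 'SW3']
# memory: ['ball', 'pushed', 'john']

# These roles stand in for the trained readout's per-slot prediction.
meaning = bind_roles(memory, ["object", "predicate", "agent"])
# meaning: {'object': 'ball', 'predicate': 'pushed', 'agent': 'john'}
```

Because the same slot markers recur across sentences, a novel construction such as "SW1 was SW2 by SW3" can be learned once and then reused for any new semantic words placed in those slots.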
Hinaut, X., Petit, M., & Dominey, P. F. (2012). Online Language Learning to Perform and Describe Actions for Human-Robot Interaction. In J. Szufnarowska (Ed.), Proceedings of the Post-Graduate Conference on Robotics and Development of Cognition (p. 59). Lausanne, Switzerland. https://doi.org/10.2390/biecoll-robotdoc2012-12