Meanings of language expressions are constructed not only from words grounded in real-world referents, but also from words such as "not" that participate in meaning construction as logical operators. This study proposes a connectionist method for learning and internally representing functions that handle both word groups, and for grounding sentences built from them in corresponding behaviors solely through experience with the raw sequential data of an imposed task. In the experiment, a robot controlled by a recurrent neural network is required to ground imperative positive and negative sentences, given as word sequences, in corresponding goal-oriented behavior. Analysis of the internal representations reveals that the network fulfills this requirement by extracting the XOR problems implicitly contained in the target sequences and solving them by learning, in a self-organizing manner, to represent the logical operations in its nonlinear dynamics.
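The XOR structure mentioned above arises because negation flips the mapping from a commanded behavior to the behavior that should be performed: the correct action is the exclusive-or of the sentence's polarity and its content. The paper's own model is a hierarchical RNN trained on raw sequences; purely as a minimal sketch of this logical core (not the authors' architecture), a tiny feedforward network with one hidden layer can be trained on the XOR truth table, where the first input bit stands for sentence polarity (0 = positive, 1 = negated) and the second for the commanded behavior:

```python
import numpy as np

# Illustrative XOR task: input = (polarity, commanded behavior),
# target = behavior actually to be performed (flipped when negated).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: squared-error gradient through both sigmoids
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())  # one prediction per truth-table row
```

A single-layer network cannot represent XOR, which is why the hidden layer (and, in the paper, the recurrent nonlinear dynamics) is essential for combining negation with sentence content.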
Yamada, T., Murata, S., Arie, H., & Ogata, T. (2016). Dynamical linking of positive and negative sentences to goal-oriented robot behavior by hierarchical RNN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9886 LNCS, pp. 339–346). Springer Verlag. https://doi.org/10.1007/978-3-319-44778-0_40