Both a self-learning architecture (embedded structure) and explicit or implicit teaching from other agents (an environmental design issue) are necessary not only for learning a single behavior but, more importantly, for lifetime behavior learning. This paper presents a method by which a robot comes to understand unfamiliar behavior shown by others through collaboration between behavior acquisition and recognition of observed behavior, in which the state value plays an important role not only in behavior acquisition (reinforcement learning) but also in behavior recognition (observation). That is, state-value updates can be accelerated by observation, without real trials and errors, while the learned values enrich the recognition system, since recognition is based on estimating the state value of the observed behavior. The validity of the proposed method is shown by applying it to a dynamic environment in which two robots play soccer. © 2008 Springer-Verlag Berlin Heidelberg.
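The coupling the abstract describes can be illustrated with a minimal sketch (not the paper's actual algorithm): a tabular state-value function is learned by TD(0) — driven either by the robot's own trials or by observed transitions — and the same value function is then reused to score an observed state sequence for recognition. The state space, reward layout, and the ascent-based recognition score below are invented for illustration.

```python
# Hypothetical sketch: one value function serves both acquisition
# (TD learning) and recognition (does the observed trajectory climb
# this behavior's values toward its goal?).

class BehaviorValue:
    def __init__(self, n_states, gamma=0.9, alpha=0.1):
        self.v = [0.0] * n_states  # tabular state values
        self.gamma = gamma         # discount factor
        self.alpha = alpha         # learning rate

    def update(self, s, r, s_next, done=False):
        # TD(0): the same rule can be driven by the learner's own
        # trials or by observed transitions, so observation speeds
        # up value learning without extra real trials.
        target = r if done else r + self.gamma * self.v[s_next]
        self.v[s] += self.alpha * (target - self.v[s])

    def recognition_score(self, states):
        # Fraction of observed steps that do not descend this
        # behavior's value function -- a crude proxy for "the
        # observed agent is pursuing this behavior's goal".
        ups = sum(self.v[b] >= self.v[a]
                  for a, b in zip(states, states[1:]))
        return ups / max(len(states) - 1, 1)


bv = BehaviorValue(n_states=5)
demo = [0, 1, 2, 3, 4]             # demonstrated path to goal state 4
for _ in range(50):                # learn from repeated observation
    for a, b in zip(demo, demo[1:]):
        bv.update(a, 1.0 if b == 4 else 0.0, b, done=(b == 4))

print(bv.recognition_score([0, 1, 2, 3, 4]))   # matched behavior: 0.75
print(bv.recognition_score([4, 3, 2, 1, 0]))   # mismatched behavior: 0.25
```

The matched trajectory scores higher because the learned values increase monotonically along the demonstrated path, so the same value table that drives action selection also discriminates which behavior is being observed.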
CITATION STYLE
Takahashi, Y., Tamura, Y., & Asada, M. (2008). Mutual development of behavior acquisition and recognition based on value system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5040 LNAI, pp. 291–300). https://doi.org/10.1007/978-3-540-69134-1_29