Developmental Learning of Cooperative Robot Skills: A Hierarchical Multi-Agent Architecture

  • Karigiannis J
  • Rekatsinas T
  • Tzafestas C

Abstract

New methodologies, architectures, and, more generally, frameworks that improve the design of intelligent robots attract significant attention from the research community, and self-organization, intrinsic behaviors, effective learning, and skill transfer in robotic systems have been investigated extensively. This chapter presents a new framework for a developmental skill learning process by introducing a hierarchical multi-agent architecture. More specifically, the proposed methodology applies reinforcement learning (RL) techniques in a fuzzified state-space, leading to a collaborative control scheme among agents acting in a continuous space, which enables the multi-agent system to learn, over time, how to perform sequences of continuous actions cooperatively without any prior task model. By organizing the agents in the nested architecture proposed in this work, a problem-specific, recursive knowledge-acquisition process is obtained. The agents correspond to independent degrees of freedom (DoF) of the system and gain experience over the task they collaboratively perform by continuously exploring and exploiting their state-to-action mapping space. Two problem settings are studied through numerical experiments, one related to dexterous manipulation and one, simulated, concerning cooperative mobile robots. The first setting concerns redundant and dexterous robot manipulation tasks and the problem of autonomously developing control skills within them. A simulated redundant four-DoF planar kinematic chain first develops the skill of accurately reaching a specified target position; a simulated three-finger manipulation example, in which each finger comprises 4 DoF and performs a quasi-static grasp, is subsequently presented. In the second setting, the same theoretical framework is adapted to two mobile robots performing a collaborative box-pushing task, in which the robots actively cooperate to jointly push an object on a plane to a specified goal location. Here the actuated wheels of the mobile robots are considered the independent agents that must build up cooperative skills over time for the robots to demonstrate intelligent behavior. Our goal in this experimental study is to evaluate both the proposed hierarchical multi-agent architecture and the methodological control framework. Such a hierarchical multi-agent approach is envisioned to scale to the control of kinematically more complex robotic systems, comprising multiple DoF and redundancies in open or closed kinematic chains, particularly dexterous robot manipulators and complex biologically inspired robot locomotion systems.
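
The chapter's methods are not reproduced in this record; the sketch below is only a rough illustration, under stated assumptions, of the kind of scheme the abstract describes: one Q-learning agent per degree of freedom, operating on a fuzzified (triangular-membership) state-space and sharing a global task reward, applied here to a 4-DoF planar reaching task. The class and function names, membership-function shapes, action sets, reward, and hyperparameters are all hypothetical choices, and the nested, hierarchical organization of the agents discussed in the chapter is omitted for brevity.

# Minimal sketch (not the chapter's implementation): one fuzzy Q-learning agent
# per degree of freedom, cooperating through a shared task reward. All names,
# membership-function shapes, and hyperparameters are illustrative assumptions.
import numpy as np

class FuzzyQAgent:
    """Q-learning over a fuzzified 1-D state with a small discrete action set."""

    def __init__(self, state_range, n_sets=7, actions=(-0.05, 0.0, 0.05),
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.centers = np.linspace(*state_range, n_sets)   # triangular MF centers
        self.width = self.centers[1] - self.centers[0]
        self.actions = np.array(actions)                    # joint increments (rad)
        self.q = np.zeros((n_sets, len(actions)))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def memberships(self, s):
        # Normalized triangular membership degrees of state s in each fuzzy set.
        mu = np.maximum(0.0, 1.0 - np.abs(s - self.centers) / self.width)
        return mu / (mu.sum() + 1e-12)

    def act(self, s):
        # Epsilon-greedy choice over the membership-weighted Q-values.
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.actions))
        return int(np.argmax(self.memberships(s) @ self.q))

    def update(self, s, a, reward, s_next):
        # Fuzzy Q-learning update: the TD error is spread over activated fuzzy sets.
        mu, mu_next = self.memberships(s), self.memberships(s_next)
        target = reward + self.gamma * np.max(mu_next @ self.q)
        td = target - (mu @ self.q)[a]
        self.q[:, a] += self.alpha * td * mu


# Usage sketch: a 4-DoF planar chain reaching a target; each joint is one agent,
# and all agents share the same global reward (negative end-effector error).
def forward_kinematics(joints, link=0.25):
    angles = np.cumsum(joints)
    return np.array([np.sum(link * np.cos(angles)), np.sum(link * np.sin(angles))])

agents = [FuzzyQAgent(state_range=(-np.pi, np.pi)) for _ in range(4)]
joints, target = np.zeros(4), np.array([0.6, 0.4])
for step in range(200):
    actions = [agent.act(q) for agent, q in zip(agents, joints)]
    new_joints = joints + np.array([agent.actions[a] for agent, a in zip(agents, actions)])
    reward = -np.linalg.norm(forward_kinematics(new_joints) - target)
    for agent, q, a, q_next in zip(agents, joints, actions, new_joints):
        agent.update(q, a, reward, q_next)
    joints = new_joints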

Cite

Citation style: APA

Karigiannis, J. N., Rekatsinas, T., & Tzafestas, C. S. (2011). Developmental Learning of Cooperative Robot Skills: A Hierarchical Multi-Agent Architecture. In Perception-Action Cycle (pp. 497–538). Springer New York. https://doi.org/10.1007/978-1-4419-1452-1_16
