Fully anticipating conditions at disaster sites is difficult due to the many as-yet-unrecognized factors they present. There is therefore a need for autonomous disaster relief robots that can learn from the conditions they encounter and take independent action. Reinforcement learning is one way for robots to acquire appropriate behavior in new environments. In the present study, we present the results of a disaster relief simulation in which multiple autonomous robots operate as a multi-agent system. To make reinforcement learning acquire action rules efficiently, we divided the task into several sub-tasks. We propose an approach in which cooperative action emerges by giving each agent a different reward, encouraging the agents to play different roles. We investigated how the autonomous agents determined appropriate action rules, examined the influence of providing separate rewards to different agents, and compared the values of various actions across learning situations.
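The role-differentiation idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: two agents run independent tabular Q-learning in a hypothetical toy world, and each receives a reward in a different location (the "searcher"/"carrier" role names and all numeric settings are assumptions for illustration). Because the rewards differ, the learned greedy policies diverge.

```python
import random

STATES = range(4)     # toy 1-D world with 4 cells
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    # Clamp movement to the world boundaries.
    return min(max(state + action, 0), 3)

def reward(agent_id, state):
    # Role-specific rewards (illustrative): agent 0 ("searcher") is
    # rewarded at cell 0, agent 1 ("carrier") is rewarded at cell 3.
    if agent_id == 0:
        return 1.0 if state == 0 else 0.0
    return 1.0 if state == 3 else 0.0

def train(agent_id, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # Standard tabular Q-learning with epsilon-greedy exploration.
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        for _ in range(10):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = step(s, a)
            r = reward(agent_id, s2)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
            s = s2
    return q

def greedy(q, s):
    return max(ACTIONS, key=lambda a: q[(s, a)])

q0, q1 = train(0), train(1)
# From a middle cell the two policies diverge: agent 0 heads left
# toward its reward at cell 0, agent 1 heads right toward cell 3.
print(greedy(q0, 1), greedy(q1, 1))
```

The same mechanism scales to the rescue setting in the abstract: identical learning rules, but per-agent reward functions, are enough to push agents into complementary roles.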
Citation:
Xie, M., Murata, M., & Sato, S. (2017). Acquisition of Cooperative Action by Rescue Agents with Distributed Roles (pp. 483–493). https://doi.org/10.1007/978-3-319-49049-6_35