Hybrid of reinforcement and imitation learning for human-like agents


Abstract

Reinforcement learning methods achieve performance superior to humans in a wide range of complex tasks and uncertain environments. However, high performance is not the sole metric for practical use, such as in a game AI or autonomous driving. A highly efficient agent acts greedily and selfishly, which is inconvenient for the humans around it; hence the demand for human-like agents. Imitation learning reproduces the behavior of a human expert and builds a human-like agent, but its performance is limited by the expert's. In this study, we propose a training scheme that mixes reinforcement and imitation learning to construct an agent that is both human-like and efficient, for discrete and continuous action-space problems. The proposed hybrid agent achieves higher performance than a strict imitation-learning agent and exhibits more human-like behavior, as measured by a human sensitivity test.
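The abstract does not spell out the training objective. One common way to mix the two signals is a weighted sum of a policy-gradient term and a behavior-cloning term; the sketch below illustrates that idea for a discrete action space. All function names and the mixing coefficient `lam` are hypothetical, not taken from the paper.

```python
import math

def softmax(logits):
    """Convert raw action scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def rl_loss(logits, action, advantage):
    """Policy-gradient surrogate: -log pi(a|s) * advantage."""
    probs = softmax(logits)
    return -math.log(probs[action]) * advantage

def bc_loss(logits, expert_action):
    """Behavior cloning: cross-entropy against the expert's action."""
    probs = softmax(logits)
    return -math.log(probs[expert_action])

def hybrid_loss(logits, action, advantage, expert_action, lam=0.5):
    """Convex mix of the RL and imitation terms.

    lam = 0 recovers pure reinforcement learning;
    lam = 1 recovers strict imitation learning.
    """
    return ((1.0 - lam) * rl_loss(logits, action, advantage)
            + lam * bc_loss(logits, expert_action))
```

Sweeping `lam` between 0 and 1 trades efficiency against human-likeness, which is the trade-off the abstract describes.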

Citation (APA)

Dossa, R. F. J., Lian, X., Nomoto, H., Matsubara, T., & Uehara, K. (2020). Hybrid of reinforcement and imitation learning for human-like agents. IEICE Transactions on Information and Systems, E103D(9), 1960–1970. https://doi.org/10.1587/transinf.2019EDP7298
