Interactive learning of continuous actions from corrective advice communicated by humans

Abstract

An interactive learning framework is proposed that allows non-expert humans to shape a policy through corrective advice, given as a binary signal in the action domain of the robot/agent. One of the most innovative features of the proposed framework, COACH (COrrective Advice Communicated by Humans), is a mechanism that adaptively adjusts the effect that human feedback has on a given action, taking past feedback into account. The performance of COACH is compared with that of TAMER (Teaching an Agent Manually via Evaluative Reinforcement), ACTAMER (Actor-Critic TAMER), and an autonomous agent trained with SARSA(λ) on two reinforcement learning problems: ball dribbling and Cart-Pole balancing. COACH outperforms the other learning frameworks in the reported experiments. In addition, the results show that COACH successfully transfers human knowledge to agents with continuous actions, making it a complementary approach to TAMER, which is suited to teaching in discrete action domains.
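The abstract describes COACH only at a high level. The following is a minimal, hypothetical Python sketch of a COACH-style corrective update, not the authors' implementation: it assumes a linear policy over hand-crafted state features, a binary human signal h in {-1, +1} meaning "increase"/"decrease" the current action, and a simple secondary model of expected feedback used to adapt the correction size. All names and the specific adaptation rule are illustrative assumptions.

import numpy as np

class CoachLikeLearner:
    """Hypothetical sketch of a COACH-style learner (linear policy over features)."""

    def __init__(self, n_features, base_error=0.1, adapt_rate=0.05):
        self.w = np.zeros(n_features)    # policy weights: action = w . f(s)
        self.w_h = np.zeros(n_features)  # model predicting the expected human feedback
        self.base_error = base_error     # nominal magnitude of a single correction
        self.adapt_rate = adapt_rate     # learning rate of the feedback model

    def action(self, features):
        """Continuous action proposed by the current policy."""
        return float(np.dot(self.w, features))

    def advise(self, features, h):
        """Apply one binary corrective signal h in {-1, +1}."""
        predicted_h = np.dot(self.w_h, features)
        if np.sign(predicted_h) == h:
            # Advice agrees with past feedback in similar states: larger correction.
            magnitude = self.base_error * (1.0 + abs(predicted_h))
        else:
            # Advice contradicts past feedback: smaller, bounded correction.
            magnitude = self.base_error * max(1.0 - abs(predicted_h), 0.1)
        # Shift the policy output in the advised direction.
        self.w += magnitude * h * features
        # Update the feedback model toward the observed signal.
        self.w_h += self.adapt_rate * (h - predicted_h) * features

# Illustrative usage:
# learner = CoachLikeLearner(n_features=3)
# s = np.array([0.2, -0.5, 1.0])
# a = learner.action(s)   # agent proposes a continuous action
# learner.advise(s, +1)   # human says "increase the action"

In this sketch, repeated consistent advice in similar states grows the correction applied to the policy, while contradictory advice shrinks it, which is one plausible reading of the adaptive feedback mechanism mentioned in the abstract.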

Citation (APA)

Celemin, C., & Ruiz-Del-Solar, J. (2015). Interactive learning of continuous actions from corrective advice communicated by humans. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9513, pp. 16–27). Springer Verlag. https://doi.org/10.1007/978-3-319-29339-4_2
