The option-critic architecture


Abstract

Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging. We tackle this problem in the framework of options [Sutton, Precup & Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework.
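The architecture described above can be illustrated with a minimal tabular sketch: softmax intra-option policies and sigmoid termination functions are updated by their respective gradient rules, while an intra-option Q-learning critic supplies the values. The chain environment, hyperparameters, and two-option setup here are illustrative assumptions, not taken from the paper's experiments.

```python
import numpy as np

# Minimal tabular option-critic sketch on a 5-state chain (goal = rightmost state).
# Environment, learning rates, and option count are assumptions for illustration.
rng = np.random.default_rng(0)
n_states, n_actions, n_options = 5, 2, 2  # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.25, 0.99, 0.1

theta = np.zeros((n_options, n_states, n_actions))  # intra-option policy params (softmax)
vartheta = np.zeros((n_options, n_states))          # termination params (sigmoid)
Q_U = np.zeros((n_states, n_options, n_actions))    # value of (state, option, action)
Q_Omega = np.zeros((n_states, n_options))           # value of (state, option)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def env_step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

def pick_option(s):
    if rng.random() < eps:
        return int(rng.integers(n_options))
    return int(Q_Omega[s].argmax())

for episode in range(300):
    s, omega = 0, pick_option(0)
    for t in range(100):  # step cap keeps early random episodes bounded
        pi = softmax(theta[omega, s])
        a = int(rng.choice(n_actions, p=pi))
        s2, r, done = env_step(s, a)

        # Critic: intra-option Q-learning with target U(omega, s') that mixes
        # continuing with option omega and terminating into the best option.
        beta = sigmoid(vartheta[omega, s2])
        U = (1 - beta) * Q_Omega[s2, omega] + beta * Q_Omega[s2].max()
        target = r + (0.0 if done else gamma * U)
        Q_U[s, omega, a] += alpha * (target - Q_U[s, omega, a])
        Q_Omega[s, omega] += alpha * (target - Q_Omega[s, omega])

        # Actor 1: intra-option policy gradient (softmax score function x Q_U).
        grad_log = -pi
        grad_log[a] += 1.0
        theta[omega, s] += alpha * grad_log * Q_U[s, omega, a]

        # Actor 2: termination gradient, descending on the advantage of the
        # current option at s' (terminate more when the option is suboptimal).
        advantage = Q_Omega[s2, omega] - Q_Omega[s2].max()
        vartheta[omega, s2] -= alpha * beta * (1 - beta) * advantage

        if done:
            break
        if rng.random() < beta:  # option terminates; pick a new one
            omega = pick_option(s2)
        s = s2
```

All three components (intra-option policies, terminations, and the policy over options via the critic) are learned jointly from the environment reward alone, with no subgoals or pseudo-rewards, matching the abstract's description at a toy scale.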

Citation (APA)

Bacon, P. L., Harb, J., & Precup, D. (2017). The option-critic architecture. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 1726–1734). AAAI Press. https://doi.org/10.1609/aaai.v31i1.10916
