Model-Free Deep Inverse Reinforcement Learning by Logistic Regression

Abstract

This paper proposes model-free deep inverse reinforcement learning to find nonlinear reward function structures. We formulate inverse reinforcement learning as a problem of density ratio estimation and show that, under the framework of linearly solvable Markov decision processes, the log of the ratio between an optimal state transition probability and a baseline one is given by part of the reward and the difference of the value functions. The log density ratio is efficiently estimated by binomial logistic regression, in which the classifier is constructed from the reward and state value functions. The classifier discriminates between samples drawn from the optimal state transition probability and those drawn from the baseline one. The estimated state value function is then used to initialize part of the deep neural network for forward reinforcement learning. The proposed deep forward and inverse reinforcement learning is applied to two benchmark games: Atari 2600 and Reversi. Simulation results show that our method reaches the best performance substantially faster than the standard combination of forward and inverse reinforcement learning, as well as behavior cloning.
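The core trick in the abstract, estimating a log density ratio from the logit of a binary classifier, can be illustrated with a toy sketch. This is not the paper's code: instead of state-transition samples and a reward/value parameterization, it uses two 1-D Gaussians whose true log ratio is known analytically, and fits a plain logistic regression by gradient descent. With equal class sizes, the trained classifier's logit approximates log p(x)/q(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for samples from the "optimal" and "baseline" distributions.
# p(x) = N(0.5, 1), q(x) = N(0, 1); the true log ratio is 0.5*x - 0.125.
n = 20000
x_p = rng.normal(0.5, 1.0, n)  # labeled 1
x_q = rng.normal(0.0, 1.0, n)  # labeled 0

# Features [x, 1]: for equal-variance Gaussians the log ratio is linear
# in x, so this logistic model is well specified.
X = np.concatenate([x_p, x_q])
y = np.concatenate([np.ones(n), np.zeros(n)])
Phi = np.stack([X, np.ones_like(X)], axis=1)

# Fit logistic regression by full-batch gradient descent on the
# binary cross-entropy loss.
w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(Phi @ w)))
    grad = Phi.T @ (p_hat - y) / len(y)
    w -= lr * grad

# With equal class sizes, the logit w[0]*x + w[1] estimates the
# log density ratio; compare to the analytic 0.5*x - 0.125.
print(w)  # expected near [0.5, -0.125]
```

In the paper's setting, the same estimator is applied to state-transition samples, with the logit parameterized through the reward and state value functions instead of a free linear model, so that fitting the classifier recovers those quantities directly.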

Citation (APA)
Uchibe, E. (2018). Model-Free Deep Inverse Reinforcement Learning by Logistic Regression. Neural Processing Letters, 47(3), 891–905. https://doi.org/10.1007/s11063-017-9702-7
