Inverse Reinforcement Learning with Explicit Policy Estimates

7 citations · 14 readers (Mendeley)
Abstract

Various methods for solving the inverse reinforcement learning (IRL) problem have been developed independently in machine learning and economics. In particular, the method of Maximum Causal Entropy IRL is based on the perspective of entropy maximization, while related advances in the field of economics instead assume the existence of unobserved action shocks to explain expert behavior (Nested Fixed Point Algorithm, Conditional Choice Probability method, Nested Pseudo-Likelihood Algorithm). In this work, we make previously unknown connections between these related methods from both fields. We achieve this by showing that they all belong to a class of optimization problems, characterized by a common form of the objective, the associated policy and the objective gradient. We demonstrate key computational and algorithmic differences which arise between the methods due to an approximation of the optimal soft value function, and describe how this leads to more efficient algorithms. Using insights which emerge from our study of this class of optimization problems, we identify various problem scenarios and investigate each method’s suitability for these problems.
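The abstract's common characterization — a shared objective, an associated policy, and a soft value function — can be illustrated with a minimal sketch of the soft (Maximum Causal Entropy) Bellman backup on a toy MDP. All names and values below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Hypothetical toy MDP: 3 states, 2 actions; reward and transitions are random.
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
R = rng.normal(size=(n_states, n_actions))                        # reward r(s, a)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
gamma = 0.9

# Soft value iteration (the backup underlying Maximum Causal Entropy IRL):
#   Q(s, a) = r(s, a) + gamma * E_{s'}[V(s')]
#   V(s)    = log sum_a exp(Q(s, a))    (soft maximum over actions)
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * P @ V                    # expectation over next states
    V_new = np.log(np.exp(Q).sum(axis=1))    # for stability, prefer scipy.special.logsumexp
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# The associated stochastic policy is pi(a | s) = exp(Q(s, a) - V(s)),
# which is a proper distribution over actions in each state.
pi = np.exp(Q - V[:, None])
```

Approximating `V` (e.g., replacing the fixed-point iteration with a one-step estimate built from observed choice frequencies, as in the Conditional Choice Probability approach) is the computational distinction the paper highlights between the methods.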

Citation (APA)

Sanghvi, N., Usami, S., Sharma, M., Groeger, J., & Kitani, K. (2021). Inverse Reinforcement Learning with Explicit Policy Estimates. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11A, pp. 9472–9480). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17141
