Fast inverse reinforcement learning with interval consistent graph for driving behavior prediction


Abstract

Maximum entropy inverse reinforcement learning (MaxEnt IRL) is an effective approach for learning the underlying rewards of demonstrated human behavior, but it is intractable in high-dimensional state spaces due to the exponential growth of computational cost. In recent years, a few works have successfully approximated MaxEnt IRL in large state spaces with graphs; however, the types of state space models they support are quite limited. In this work, we extend them to more generic large state space models with graphs in which the time interval consistency of Markov decision processes is guaranteed. We validate our proposed method in the context of driving behavior prediction. Experimental results using actual driving data confirm the superiority of our algorithm over other existing IRL frameworks in both prediction performance and computational cost.
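To make the setting concrete, the sketch below implements vanilla MaxEnt IRL (Ziebart et al.'s backward/forward recursions with gradient ascent on linear reward weights) on a tiny toy chain MDP. Everything here, including the MDP, the one-hot features, and the demonstration trajectory, is illustrative and not taken from the paper; the paper's contribution is precisely making this kind of computation scale via an interval consistent graph, which is not reproduced here.

```python
# Minimal MaxEnt IRL sketch on a toy 5-state chain MDP (illustrative only,
# not the paper's algorithm or data).
import numpy as np

n_states, n_actions, horizon = 5, 2, 10

# Deterministic transitions: action 0 moves left, action 1 moves right.
P = np.zeros((n_states, n_actions, n_states))
for s in range(n_states):
    P[s, 0, max(s - 1, 0)] = 1.0
    P[s, 1, min(s + 1, n_states - 1)] = 1.0

features = np.eye(n_states)                     # one-hot state features
demos = [[0, 1, 2, 3, 4, 4, 4, 4, 4, 4]]        # demo favors the right end

def soft_value_iteration(theta):
    """Backward pass: soft (log-sum-exp) Bellman backups give a
    stochastic MaxEnt policy for the current reward weights."""
    r = features @ theta
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states, n_actions))
    for t in reversed(range(horizon)):
        Q = r[:, None] + P @ V                  # (n_states, n_actions)
        V = np.logaddexp.reduce(Q, axis=1)
        policy[t] = np.exp(Q - V[:, None])
    return policy

def expected_svf(policy):
    """Forward pass: expected state-visitation frequencies under the policy."""
    d = np.zeros(n_states)
    d[0] = 1.0                                  # all demos start in state 0
    svf = d.copy()
    for t in range(horizon - 1):
        d = np.einsum("s,sa,sat->t", d, policy[t], P)
        svf += d
    return svf

# Gradient ascent: match demonstrated and expected feature counts.
emp_feat = np.mean([features[traj].sum(axis=0) for traj in demos], axis=0)
theta = np.zeros(n_states)
for _ in range(200):
    policy = soft_value_iteration(theta)
    theta += 0.1 * (emp_feat - expected_svf(policy) @ features)

r_learned = features @ theta                    # highest reward at state 4
```

The gradient (empirical feature counts minus expected feature counts) is exactly where the intractability the abstract mentions arises: the forward pass must sum over all reachable states at every time step, which the paper's interval consistent graph is designed to keep manageable.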


APA

Shimosaka, M., Sato, J., Takenaka, K., & Hitomi, K. (2017). Fast inverse reinforcement learning with interval consistent graph for driving behavior prediction. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 1532–1538). AAAI press. https://doi.org/10.1609/aaai.v31i1.10762
