Truncated approximate dynamic programming with task-dependent terminal value

Abstract

We propose a new class of computationally fast algorithms for finding a close-to-optimal policy for Markov Decision Processes (MDPs) with a large finite horizon T. The main idea is that instead of planning until the time horizon T, we plan only up to a truncated horizon H ≪ T and use an estimate of the true optimal value function as the terminal value. Our approach to finding the terminal value function is to learn a mapping from an MDP to its value function by solving many similar MDPs during a training phase and fitting a regression estimator. We analyze the method by providing an error propagation theorem that shows the effect of various sources of error on the quality of the solution. We also empirically validate this approach in a real-world application, designing an energy management system for Hybrid Electric Vehicles, with promising results.
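
To make the idea concrete, here is a minimal, hypothetical Python sketch of backward induction over a truncated horizon with an estimated terminal value plugged in at the truncation point. The tabular setup and the names (truncated_dp, terminal_value) are illustrative assumptions, not the paper's implementation; in the paper the terminal value comes from a regression estimator trained on many similar MDPs, which the zero vector below merely stands in for.

```python
import numpy as np

def truncated_dp(P, R, H, terminal_value):
    """Backward induction over a truncated horizon H (assumes H >= 1).

    P: transition tensor, shape (A, S, S); P[a, s, s'] = Pr(s' | s, a).
    R: reward matrix, shape (A, S); R[a, s] = r(s, a).
    terminal_value: length-S estimate of the optimal value at the
        truncation point (in the paper, the output of a regression
        estimator fit on similar MDPs during a training phase).
    Returns the greedy first-step policy and the H-step value estimate.
    """
    V = np.asarray(terminal_value, dtype=float)
    for _ in range(H):             # plan only H steps instead of the full T
        Q = R + P @ V              # Q[a, s] = r(s, a) + E[V(s') | s, a]
        V = Q.max(axis=0)          # Bellman optimality backup
    return Q.argmax(axis=0), V     # greedy action per state, value estimates

# Toy usage on a random 3-state, 2-action MDP with a zero terminal estimate.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(2, 3))   # (A, S, S), rows sum to 1
R = rng.uniform(size=(2, 3))                 # (A, S)
policy, V = truncated_dp(P, R, H=5, terminal_value=np.zeros(3))
print(policy, V)
```

The better the terminal estimate approximates the true optimal tail value, the smaller the loss incurred by truncating the horizon; the paper's error propagation theorem quantifies this trade-off.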

Citation (APA)

Farahmand, A. M., Nikovski, D. N., Igarashi, Y., & Konaka, H. (2016). Truncated approximate dynamic programming with task-dependent terminal value. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 3123–3129). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10397
