Approximate dynamic programming


Abstract

This chapter contains a brief review of dynamic programming in continuous time and space. In particular, traditional dynamic programming algorithms such as policy iteration, value iteration, and actor-critic methods are presented in the context of continuous-time optimal control. The role of the optimal value function as a Lyapunov function is explained to facilitate online closed-loop optimal control. This chapter also highlights the problems and the limitations of existing techniques, thereby motivating the developments in this book. The chapter concludes with some historical remarks and a brief classification of the available dynamic programming techniques.
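As a point of reference for the algorithms the abstract names, the sketch below shows value iteration on a discrete Markov decision process. This is an illustrative assumption, not material from the chapter (which treats continuous time and space); the toy MDP, function names, and parameters are all hypothetical.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration on a finite MDP (illustrative sketch).

    P[a] is the (n_states x n_states) transition matrix for action a;
    R[a] is the length-n_states reward vector for action a.
    Returns the optimal value function V and a greedy policy.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup:
        # V(s) <- max_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ]
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy two-state, two-action MDP (made up for illustration)
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
     np.array([[0.5, 0.5], [0.1, 0.9]])]   # transitions under action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
V, pi = value_iteration(P, R)
```

The discount factor gamma < 1 makes the Bellman backup a contraction, so the loop converges; the continuous-time setting reviewed in the chapter replaces this backup with a Hamilton–Jacobi–Bellman equation.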

Citation (APA)

Kamalapurkar, R., Walters, P., Rosenfeld, J., & Dixon, W. (2018). Approximate dynamic programming. In Communications and Control Engineering (pp. 17–42). Springer International Publishing. https://doi.org/10.1007/978-3-319-78384-0_2
