An accelerated value/policy iteration scheme for optimal control problems and games


Abstract

We present an accelerated algorithm for the solution of static Hamilton-Jacobi-Bellman equations related to optimal control problems and differential games. The new scheme combines the advantages of value iteration and policy iteration methods by means of an efficient coupling. The method starts with a value iteration phase on a coarse mesh and then switches to a policy iteration procedure over a finer mesh once a fixed error threshold is reached. We present numerical tests assessing the performance of the scheme.
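The two-phase coupling described above can be sketched on a toy problem. The following is a minimal illustration, not the paper's method or test cases: it assumes a 1D infinite-horizon discounted problem with dynamics x' = a, running cost x², three admissible controls, and arbitrary grid sizes and threshold. Value iteration runs on a coarse grid until the sup-norm residual drops below the threshold, the result is interpolated onto a finer grid, and policy iteration (exact policy evaluation via a linear solve, then greedy improvement) finishes the job there.

```python
import numpy as np

def interp_matrix(xq, grid):
    """Row-stochastic matrix P such that (P @ v)[i] linearly interpolates v at xq[i]."""
    n = len(grid)
    P = np.zeros((len(xq), n))
    idx = np.clip(np.searchsorted(grid, xq) - 1, 0, n - 2)
    w = np.clip((xq - grid[idx]) / (grid[idx + 1] - grid[idx]), 0.0, 1.0)
    rows = np.arange(len(xq))
    P[rows, idx] = 1.0 - w
    P[rows, idx + 1] = w
    return P

def solve_hjb(n_coarse=21, n_fine=81, dt=0.1, eps_coarse=1e-2):
    # Illustrative problem data (assumed, not from the paper):
    controls = np.array([-1.0, 0.0, 1.0])    # admissible control values
    gamma = np.exp(-dt)                      # discount factor per time step

    def transition_matrices(grid):
        # One interpolation matrix per control: next state x + dt*a, clipped to the domain.
        return [interp_matrix(np.clip(grid + dt * a, grid[0], grid[-1]), grid)
                for a in controls]

    def running_cost(grid):
        return dt * grid**2                  # stage cost ell(x) = x^2

    # --- Phase 1: value iteration on the coarse mesh ---
    xc = np.linspace(-1.0, 1.0, n_coarse)
    Pc, cc = transition_matrices(xc), running_cost(xc)
    v = np.zeros(n_coarse)
    while True:
        q = np.stack([cc + gamma * (P @ v) for P in Pc])   # one row of Q-values per control
        v_new = q.min(axis=0)
        if np.max(np.abs(v_new - v)) < eps_coarse:
            v = v_new
            break
        v = v_new

    # --- Switch: interpolate the coarse value function onto the fine mesh ---
    xf = np.linspace(-1.0, 1.0, n_fine)
    v = interp_matrix(xf, xc) @ v

    # --- Phase 2: policy iteration on the fine mesh ---
    Pf, cf = transition_matrices(xf), running_cost(xf)
    policy = None
    for _ in range(50):
        q = np.stack([cf + gamma * (P @ v) for P in Pf])
        new_policy = q.argmin(axis=0)        # greedy policy improvement
        if policy is not None and np.array_equal(new_policy, policy):
            break                            # policy stable: optimal on this mesh
        policy = new_policy
        # Exact policy evaluation: solve (I - gamma * P_pi) v = c_pi.
        P_pi = np.stack([Pf[a][i] for i, a in enumerate(policy)])
        v = np.linalg.solve(np.eye(n_fine) - gamma * P_pi, cf)
    return xf, v

xf, v = solve_hjb()
```

The warm start matters: the coarse value iteration is cheap and supplies a good initial guess, so the (per-iteration expensive, but quadratically convergent) policy iteration on the fine mesh stabilizes in very few sweeps.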

Citation (APA)

Alla, A., Falcone, M., & Kalise, D. (2015). An accelerated value/policy iteration scheme for optimal control problems and games. Lecture Notes in Computational Science and Engineering, 103, 489–497. https://doi.org/10.1007/978-3-319-10705-9_48
