Policy iteration for Hamilton–Jacobi–Bellman equations with control constraints

Abstract

Policy iteration is a widely used technique for solving the Hamilton–Jacobi–Bellman (HJB) equation, which arises in nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, for the HJB equations that arise in both deterministic and stochastic control. The linear equations in each iteration step are solved by an implicit upwind scheme. Numerical examples illustrate the solution of the HJB equation with control constraints, and comparisons with the unconstrained case are presented.
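
The paper's algorithms are not reproduced on this page. As a rough illustration of the method the abstract describes (policy iteration with a box control constraint, where each policy-evaluation step is a linear equation solved by an implicit upwind scheme), the sketch below treats a simple 1-D deterministic model problem. The dynamics, running cost, discount rate, control bound, grid, and all parameter values are illustrative assumptions, not taken from the article.

```python
# A minimal sketch (NOT the authors' implementation): policy iteration for a
# 1-D deterministic, infinite-horizon discounted problem
#     lam * v(x) = min_{|u| <= umax} [ u * v'(x) + x^2 + u^2 ],
# with v' discretized by an upwind difference and the linear policy-evaluation
# step solved implicitly as a sparse system. Model and parameters are assumed.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, L = 201, 2.0                  # grid points, domain [-L, L] (assumed)
x = np.linspace(-L, L, N)
h = x[1] - x[0]
lam, umax = 0.5, 0.4             # discount rate, control bound (assumed)

u = np.zeros(N)                  # initial feasible policy
for it in range(50):
    # ---- policy evaluation: solve (lam*I - u * D_upwind) v = x^2 + u^2 ----
    rows, cols, vals = [], [], []
    for i in range(N):
        # upwind neighbour follows the sign of the drift u[i];
        # at the boundary we fall back to the interior one-sided difference
        fwd = (u[i] > 0 and i < N - 1) or i == 0
        j = i + 1 if fwd else i - 1
        a = abs(u[i]) / h        # upwinding makes the system an M-matrix
        rows += [i, i]
        cols += [i, j]
        vals += [lam + a, -a]
    A = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))
    v = spla.spsolve(A, x**2 + u**2)

    # ---- policy improvement with the control constraint enforced ----
    # argmin_u [ u*v' + u^2 ] over |u| <= umax  =>  u = clip(-v'/2, -umax, umax)
    dv = np.gradient(v, h)       # central differences suffice for the argmin
    u_new = np.clip(-0.5 * dv, -umax, umax)
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = u_new

print(f"converged in {it + 1} iterations; v(0) ~ {v[np.argmin(np.abs(x))]:.4f}")
```

In this toy setting the constraint enters only through the pointwise projection (clipping) in the improvement step, while each evaluation step remains a linear solve, which is the structural feature of constrained policy iteration that the abstract highlights.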

Citation (APA)

Kundu, S., & Kunisch, K. (2024). Policy iteration for Hamilton–Jacobi–Bellman equations with control constraints. Computational Optimization and Applications, 87(3), 785–809. https://doi.org/10.1007/s10589-021-00278-3
