Deterministic state-constrained optimal control problems without controllability assumptions

23 citations · 6 Mendeley readers
Abstract

In the present paper, we consider nonlinear optimal control problems with constraints on the state of the system. We are interested in characterizing the value function without any controllability assumption. In the unconstrained case, the value function can be characterized by means of a Hamilton-Jacobi-Bellman (HJB) equation, which expresses the behavior of the value function along trajectories arriving at or starting from any position x. In the constrained case, when no controllability assumption is made, the HJB equation may have several solutions. Our first result identifies the precise information that must be added to the HJB equation in order to obtain a characterization of the value function. This result is very general and holds even when the dynamics is discontinuous and the state-constraint set is not smooth. We also study stability results for relaxed or penalized control problems. © EDP Sciences, SMAI, 2010.
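As a toy illustration of the penalization idea mentioned in the abstract (a minimal sketch of our own, not the paper's scheme): for the 1D dynamics x' = u with u ∈ {−1, +1}, target x = 0, and state constraint x ∈ [0, 1], a dynamic-programming iteration can approximate the constrained minimum-time value function by assigning a large penalty to controls that would leave the constraint set. All grid sizes and the penalty constant `PEN` are illustrative choices.

```python
import numpy as np

# Toy penalized dynamic-programming iteration for a 1D minimum-time
# problem with state constraint K = [0, 1] (illustrative only).
# Dynamics: x' = u, u in {-1, +1}; target: x = 0.
N = 101                       # grid points on [0, 1]
xs = np.linspace(0.0, 1.0, N)
dt = xs[1] - xs[0]            # time step matched to the grid spacing
PEN = 1e6                     # penalty approximating the state constraint

v = np.full(N, PEN)           # value function, initialized to the penalty
v[0] = 0.0                    # zero cost at the target x = 0

for _ in range(N):            # N sweeps suffice to propagate from the target
    w = v.copy()
    for i in range(1, N):
        left = v[i - 1]                          # u = -1 stays inside [0, 1]
        right = v[i + 1] if i + 1 < N else PEN   # u = +1 exits K at x = 1
        w[i] = dt + min(left, right)             # Bellman update
    v = w

# For this toy problem the value function is v(x) = x (travel time to 0).
print(round(v[-1], 6))
```

Here the penalty plays the role of the constraint: at x = 1 the control u = +1 would leave K and is effectively excluded, without any controllability assumption on the boundary.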

Citation (APA)
Bokanowski, O., Forcadel, N., & Zidani, H. (2011). Deterministic state-constrained optimal control problems without controllability assumptions. ESAIM - Control, Optimisation and Calculus of Variations, 17(4), 995–1015. https://doi.org/10.1051/cocv/2010030
