Hamilton-Jacobi-Bellman equation for a time-optimal control problem in the space of probability measures


Abstract

In this paper we formulate a time-optimal control problem in the space of probability measures endowed with the Wasserstein metric, as a natural generalization of the corresponding classical problem in Rd in which the controlled dynamics is given by a differential inclusion. The main motivation is to model situations in which we have only probabilistic knowledge of the initial state. In particular, we first prove a Dynamic Programming Principle, and we then give a Hamilton-Jacobi-Bellman equation in the space of probability measures which is solved, in a suitable viscosity sense, by a generalization of the minimum time function.
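The ambient space of the problem is the set of probability measures metrized by the Wasserstein distance. As a purely illustrative aside (not taken from the paper), the following Python sketch computes the 1-Wasserstein distance between two discrete measures on the real line with SciPy, and checks it against the one-dimensional closed form; the sample points are made up for the example, and SciPy/NumPy availability is assumed.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two discrete probability measures on R, given as equally
# weighted samples (hypothetical data for illustration).
mu = np.array([0.0, 1.0, 3.0])
nu = np.array([5.0, 6.0, 8.0])

# SciPy's 1-Wasserstein (earth mover's) distance between the
# empirical measures supported on mu and nu.
d = wasserstein_distance(mu, nu)

# In one dimension, W1 equals the L1 distance between quantile
# functions; for equally weighted samples this reduces to the
# mean absolute difference of the sorted values.
d_closed_form = np.mean(np.abs(np.sort(mu) - np.sort(nu)))

print(d, d_closed_form)  # both 5.0
```

In higher dimensions no such closed form exists and the distance is obtained by solving an optimal transport problem, which is the setting the paper's generalized minimum time function lives in.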

Citation (APA)

Cavagnari, G., Marigonda, A., & Orlandi, G. (2016). Hamilton-Jacobi-Bellman equation for a time-optimal control problem in the space of probability measures. In IFIP Advances in Information and Communication Technology (Vol. 494, pp. 200–208). Springer New York LLC. https://doi.org/10.1007/978-3-319-55795-3_18
