Optimal Control for Diffusion Processes


Abstract

This chapter deals with completely observable stochastic control problems for diffusion processes described by SDEs. The decision maker chooses an optimal decision from among all possible ones to achieve the goal. Namely, for a control process, its response evolves according to a (controlled) SDE, and a payoff on a finite time interval is given. The controller wants to minimize (or maximize) the payoff by choosing an appropriate control process from among all possible ones. Here we consider three types of control processes:

1. (ℱt)-progressively measurable processes.
2. Brownian-adapted processes.
3. Feedback controls.

To analyze these problems, we mainly use the dynamic programming principle (DPP) for the value function. The remainder of this chapter is organized as follows. Section 2.1 presents the formulation of the control problems and basic properties of value functions, as preliminaries for later sections. Section 2.2 focuses on the DPP. Although the DPP is known as a two-stage optimization method, we formulate it by using a semigroup and characterize the value function via that semigroup. In Sect. 2.3, we deal with verification theorems, which give recipes for finding optimal Markovian policies. Section 2.4 considers a class of Merton-type optimal investment models as an application of the preceding results.
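To illustrate the setup the abstract describes — a response evolving under a controlled SDE, a payoff over a finite time interval, and a feedback control — here is a minimal Monte Carlo sketch. It uses an assumed one-dimensional linear-quadratic example (dXt = ut dt + dWt, with running cost ut² and terminal cost XT²), not any specific model from the chapter, and simulates paths by Euler–Maruyama to estimate the expected payoff for a given feedback policy.

```python
import numpy as np

def simulate_payoff(policy, x0=1.0, T=1.0, n_steps=200, n_paths=5000, seed=0):
    """Estimate J(policy) = E[ integral_0^T u_t^2 dt + X_T^2 ] by Monte Carlo,
    where dX_t = u_t dt + dW_t and u_t = policy(t, X_t) is a feedback control.
    (Illustrative linear-quadratic example; not the chapter's model.)"""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)          # all paths start at x0
    cost = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        u = policy(t, x)              # feedback: control depends on current state
        cost += u**2 * dt             # accumulate running cost
        # Euler-Maruyama step for the controlled SDE
        x = x + u * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    cost += x**2                      # terminal cost
    return cost.mean()

# Compare the zero control with a simple stabilizing feedback policy.
j_zero = simulate_payoff(lambda t, x: np.zeros_like(x))
j_feedback = simulate_payoff(lambda t, x: -x)   # push the state toward 0
```

For this toy problem, the feedback policy u = -x yields a strictly smaller expected payoff than the zero control, which is the kind of comparison the verification theorems of Sect. 2.3 make rigorous: they certify when a candidate Markovian (feedback) policy is in fact optimal.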

Cite

Nisio, M. (2015). Optimal Control for Diffusion Processes. In Probability Theory and Stochastic Modelling (Vol. 72, pp. 31–78). Springer Nature. https://doi.org/10.1007/978-4-431-55123-2_2
