Stochastic Control for Jump Diffusions

  • Shi J

Abstract

Suppose that we have the opportunity to control the solution to an SDE by dynamically selecting the infinitesimal drift and variance in some optimal way, so as to maximize the expected reward that accumulates over a finite horizon [0, t]. In particular, let K be a compact subset of ℝ^m corresponding to the set of available controls. For each u ∈ K, there is an infinitesimal drift {µ(x, u) : x ∈ ℝ^d} and infinitesimal variance {σ(x, u) : x ∈ ℝ^d} that one can use to control the dynamics of the system. If control u is selected in state x at time s (0 ≤ s ≤ t), then reward accumulates at rate r(s, x, u). In addition, there is a reward q(x) for terminating at time t in state x. Suppose that U = {U(s) : 0 ≤ s ≤ t} is a K-valued process adapted to the Brownian motion {B(s) : s ≥ 0}; we interpret U(s) as the value of the control selected at time s. Then the state {X(s) : 0 ≤ s ≤ t} evolves according to the stochastic equation

dX(s) = µ(X(s), U(s)) ds + σ(X(s), U(s)) dB(s),   (8.1)

and the total expected reward accumulated over [0, t] is then given by

E[ ∫₀ᵗ r(s, X(s), U(s)) ds + q(X(t)) ].   (8.2)

Our goal is now to determine U*, the maximizer of (8.2) over the class of all adapted policies {U(s) : 0 ≤ s ≤ t}. We have encountered this type of control problem (earlier in the course) in the discrete-time setting. So, we proceed by studying the optimal "cost-to-go" function V : [0, t] × ℝ^d → ℝ given by

V(s, x) = sup over adapted U of E[ ∫ₛᵗ r(v, X(v), U(v)) dv + q(X(t)) | X(s) = x ].
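The objective (8.2) can be estimated for any fixed policy by simulating the controlled dynamics (8.1). Below is a minimal sketch, not from the source: it uses an Euler–Maruyama discretization with Monte Carlo averaging for the scalar case (d = m = 1), and the function name `simulate_reward` and all of the example coefficients are hypothetical choices for illustration.

```python
import numpy as np

def simulate_reward(mu, sigma, r, q, policy, x0, t, n_steps, n_paths, rng=None):
    """Monte Carlo estimate of the expected reward (8.2) under a fixed
    Markov policy u = policy(s, x), using Euler-Maruyama for (8.1)."""
    rng = np.random.default_rng(rng)
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)   # X(0) = x0 on every path
    total = np.zeros(n_paths)
    for k in range(n_steps):
        s = k * dt
        u = policy(s, x)                            # control U(s)
        total += r(s, x, u) * dt                    # running reward r(s, X, U) ds
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
        x += mu(x, u) * dt + sigma(x, u) * dB       # Euler-Maruyama step for (8.1)
    total += q(x)                                   # terminal reward q(X(t))
    return total.mean()

# Illustrative linear-quadratic-style instance (all coefficients assumed):
# drift u, unit variance, running reward -(x^2 + u^2), terminal reward -x^2,
# and the proportional feedback policy u = -x.
est = simulate_reward(
    mu=lambda x, u: u,
    sigma=lambda x, u: np.ones_like(x),
    r=lambda s, x, u: -(x**2 + u**2),
    q=lambda x: -x**2,
    policy=lambda s, x: -x,
    x0=0.0, t=1.0, n_steps=200, n_paths=5000, rng=0)
```

Comparing such estimates across candidate policies gives a crude way to check the optimality suggested by the cost-to-go analysis, at the price of discretization and sampling error.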

Citation (APA)

Shi, J. (2012). Stochastic Control for Jump Diffusions. In Stochastic Modeling and Control. InTech. https://doi.org/10.5772/45719
