A Neural Network-Based Policy Iteration Algorithm with Global H2-Superlinear Convergence for Stochastic Games on Domains

Abstract

In this work, we propose a class of numerical schemes for solving semilinear Hamilton–Jacobi–Bellman–Isaacs (HJBI) boundary value problems, which arise naturally from exit time problems of diffusion processes with controlled drift. We exploit policy iteration to reduce the semilinear problem to a sequence of linear Dirichlet problems, which are subsequently approximated by a multilayer feedforward neural network ansatz. We establish that the numerical solutions converge globally in the H2-norm and further demonstrate that this convergence is superlinear by interpreting the algorithm as an inexact Newton iteration for the HJBI equation. Moreover, we construct the optimal feedback controls from the numerical value functions and deduce their convergence. The numerical schemes and convergence results are then extended to oblique derivative boundary conditions. Numerical experiments on the stochastic Zermelo navigation problem are presented to illustrate the theoretical results and to demonstrate the effectiveness of the method.
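
As a concrete illustration of the scheme outlined above, the following PyTorch snippet (not the authors' code) applies policy iteration with a feedforward ansatz to a model problem on the unit square, -Δu + max_a min_b [a ∂_x u + b ∂_y u + c0 u - f^{ab}] = 0 with u = 0 on the boundary. Each outer step freezes the two controls by a pointwise max-min of the Hamiltonian at interior collocation points; the resulting linear Dirichlet problem is then solved approximately by minimising a least-squares PDE residual plus a boundary penalty. The control sets, coefficients, cost f_run, boundary data g_dir, loss weights and the simultaneous update of both controls are illustrative assumptions and simplifications of the formulation analysed in the paper.

```python
import torch

torch.manual_seed(0)

# Finite control sets and constant zeroth-order coefficient (illustrative assumptions;
# the paper treats general compact control sets and variable coefficients).
A = [-1.0, 0.0, 1.0]
B = [-1.0, 1.0]
c0 = 1.0


def f_run(a, b, x):
    """Running cost f^{ab}(x) for the model problem (purely illustrative)."""
    return 1.0 + 0.5 * a * b + 0.1 * x[:, 0] * x[:, 1]


def g_dir(x):
    """Dirichlet data u = g on the boundary of (0, 1)^2 (illustrative: g = 0)."""
    return torch.zeros(x.shape[0])


# Multilayer feedforward ansatz for the value function.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)


def u_du_lap(x):
    """Network value, gradient and Laplacian at the points x via autograd."""
    u = net(x).squeeze(-1)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = sum(torch.autograd.grad(du[:, i].sum(), x, create_graph=True)[0][:, i]
              for i in range(x.shape[1]))
    return u, du, lap


# Collocation points in the interior and on the four sides of the unit square.
x_int = torch.rand(1024, 2, requires_grad=True)
t = torch.rand(256, 1)
x_bdy = torch.cat([torch.cat([t, torch.zeros_like(t)], 1),
                   torch.cat([t, torch.ones_like(t)], 1),
                   torch.cat([torch.zeros_like(t), t], 1),
                   torch.cat([torch.ones_like(t), t], 1)], 0)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
n = x_int.shape[0]

for k in range(8):                                   # outer policy iteration
    # Policy improvement: pointwise max over A of min over B of the Hamiltonian
    #   H^{ab}(x) = a u_x + b u_y + c0 u - f^{ab}(x).
    u, du, _ = u_du_lap(x_int)
    u, du = u.detach(), du.detach()
    H = torch.stack([torch.stack([a * du[:, 0] + b * du[:, 1] + c0 * u
                                  - f_run(a, b, x_int).detach()
                                  for b in B]) for a in A])        # (|A|, |B|, n)
    Hmin, jb_all = H.min(dim=1)                      # inner minimisation over B
    ia = Hmin.argmax(dim=0)                          # outer maximisation over A
    jb = jb_all.gather(0, ia.unsqueeze(0)).squeeze(0)
    a_sel, b_sel = torch.tensor(A)[ia], torch.tensor(B)[jb]        # frozen controls
    f_all = torch.stack([torch.stack([f_run(a, b, x_int).detach() for b in B])
                         for a in A])
    f_sel = f_all[ia, jb, torch.arange(n)]

    # Policy evaluation: train the ansatz on the frozen *linear* Dirichlet problem
    #   -lap(u) + a u_x + b u_y + c0 u = f^{ab} in (0,1)^2,  u = g on the boundary,
    # via a least-squares residual with a boundary penalty (a simplification of the
    # loss functionals analysed in the paper).
    for _ in range(300):
        opt.zero_grad()
        u, du, lap = u_du_lap(x_int)
        res = -lap + a_sel * du[:, 0] + b_sel * du[:, 1] + c0 * u - f_sel
        bc = net(x_bdy).squeeze(-1) - g_dir(x_bdy)
        loss = res.pow(2).mean() + 100.0 * bc.pow(2).mean()
        loss.backward()
        opt.step()
    print(f"policy iteration {k}: evaluation loss {loss.item():.3e}")
```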

Citation (APA)

Ito, K., Reisinger, C., & Zhang, Y. (2021). A Neural Network-Based Policy Iteration Algorithm with Global H2-Superlinear Convergence for Stochastic Games on Domains. Foundations of Computational Mathematics, 21(2), 331–374. https://doi.org/10.1007/s10208-020-09460-1
