Discrete-Time Nonlinear Stochastic Optimal Control Problem Based on Stochastic Approximation Approach

  • Kek S. L.
  • Sim S. Y.
  • Leong W. J.
  • Teo K. L.


In this paper, a computational approach is proposed for solving the discrete-time nonlinear optimal control problem disturbed by a sequence of random noises. Since the exact solution of such an optimal control problem cannot be obtained, estimating the state dynamics is required. Here, it is assumed that the output can be measured from the real plant process. In our approach, state mean propagation is applied to construct a linear model-based optimal control problem whose model output is measurable. On this basis, an output error, which accounts for the differences between the real output and the model output, is defined. This output error is then minimized by applying the stochastic approximation approach. During the computation procedure, the stochastic gradient is established so that the optimal solution of the model used can be updated iteratively. Once convergence is achieved, the iterative solution approximates the true optimal solution of the original optimal control problem, in spite of model-reality differences. For illustration, an example of a continuous stirred-tank reactor problem is studied, and the results demonstrate the applicability of the proposed approach. Hence, the efficiency of the proposed approach is shown.
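The iterative scheme described in the abstract, minimizing an output error between the real plant and a linear model via a stochastic gradient, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the plant dynamics, the linear model matrices, the SPSA-style finite-difference gradient estimate, and the decaying gain sequence are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear "real plant" disturbed by random noise:
# x_{k+1} = f(x_k, u_k) + w_k, with measurable output y_k = h(x_k) + v_k.
def plant_step(x, u):
    return 0.9 * x + 0.1 * np.tanh(x) + u + 0.01 * rng.standard_normal()

def plant_output(x):
    return x + 0.01 * rng.standard_normal()

# Linear model used for control design (illustrative state mean propagation).
A, B = 0.9, 1.0

def rollout_output_error(u_seq, x0=1.0):
    """Sum of squared differences between the real output and the model output."""
    x_real, x_model, err = x0, x0, 0.0
    for u in u_seq:
        x_real = plant_step(x_real, u)
        x_model = A * x_model + B * u
        err += (plant_output(x_real) - x_model) ** 2
    return err

# Stochastic approximation (Robbins-Monro-style) update of the control sequence:
# a simultaneous-perturbation gradient estimate with decaying gain a_i = a0/(i+1).
N, a0, c = 20, 0.05, 1e-2
u = np.zeros(N)                                # initial control sequence
for i in range(200):
    delta = rng.choice([-1.0, 1.0], size=N)    # random perturbation direction
    g = (rollout_output_error(u + c * delta)
         - rollout_output_error(u - c * delta)) / (2 * c) * delta
    u -= a0 / (i + 1) * g                      # stochastic gradient step
```

The decaying gain sequence is what makes the iteration a stochastic approximation: each gradient estimate is noisy, but the shrinking step sizes average the noise out so the iterates settle toward a minimizer of the expected output error.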




Kek, S. L., Sim, S. Y., Leong, W. J., & Teo, K. L. (2018). Discrete-Time Nonlinear Stochastic Optimal Control Problem Based on Stochastic Approximation Approach. Advances in Pure Mathematics, 8(3), 232–244. https://doi.org/10.4236/apm.2018.83012
