For nonlinear dynamical systems, the optimal control problem generally requires solving a partial differential equation, the Hamilton–Jacobi–Bellman (HJB) equation, whose analytical solution generally cannot be obtained. Meanwhile, the demand for optimal control keeps increasing, driven by goals such as saving energy, shortening transient time, and minimizing error accumulation. Consequently, a variety of methods have been reported that solve the problem approximately, yielding so-called near-optimal control, although their technical details differ. This research direction has seen great progress in recent years, yet a timely review of these methods is still missing. This chapter serves as a brief survey of existing methods in this direction.
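For context, the HJB equation mentioned above can be sketched in its standard infinite-horizon form for a control-affine system; the notation below is the conventional one, not taken from the chapter itself:

```latex
% Standard infinite-horizon HJB equation (conventional notation; an
% illustrative sketch, not the chapter's own formulation).
% System: \dot{x} = f(x) + g(x)u,  cost: J = \int_0^\infty \big( Q(x) + u^\top R u \big)\,dt
\begin{aligned}
0 &= \min_{u} \Big[\, Q(x) + u^\top R u
      + \nabla V(x)^\top \big( f(x) + g(x)\,u \big) \Big], \\
u^*(x) &= -\tfrac{1}{2}\, R^{-1} g(x)^\top \nabla V(x),
\end{aligned}
```

where $V(x)$ is the optimal value function. Because $V$ enters nonlinearly through $\nabla V$, a closed-form solution exists only in special cases (e.g., the linear-quadratic case), which motivates the approximate, near-optimal methods surveyed here.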
Citation: Zhang, Y., Li, S., & Zhou, X. (2020). A survey of near-optimal control of nonlinear systems. In Studies in Systems, Decision and Control (Vol. 265, pp. 1–20). Springer International Publishing. https://doi.org/10.1007/978-3-030-33384-3_1