A survey of near-optimal control of nonlinear systems


Abstract

For nonlinear dynamical systems, the optimal control problem generally requires solving a partial differential equation called the Hamilton–Jacobi–Bellman equation, whose analytical solution generally cannot be obtained. However, the demand for optimal control keeps increasing, with goals such as saving energy, reducing transient time, and minimizing error accumulation. Consequently, methods have been reported that approximately solve the problem, leading to so-called near-optimal control, although their technical details differ. This research direction has seen great progress in recent years, but a timely review is still missing. This chapter serves as a brief survey of existing methods in this direction.
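For context, the Hamilton–Jacobi–Bellman equation mentioned above can be stated in a standard form. The notation below is a common textbook formulation and is assumed here, not taken from the chapter: for dynamics $\dot{x} = f(x, u)$ and an infinite-horizon cost $J = \int_0^\infty r(x(t), u(t))\,\mathrm{d}t$, the optimal value function $V(x)$ satisfies

$$
0 = \min_{u} \left[ \, r(x, u) + \nabla V(x)^{\top} f(x, u) \, \right],
$$

a nonlinear first-order PDE in $V$. Because closed-form solutions exist only in special cases (e.g., the linear-quadratic case, where $V$ is quadratic and the HJB equation reduces to an algebraic Riccati equation), near-optimal methods instead approximate $V$ or the optimal policy, which is the theme of the survey.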

Citation (APA)

Zhang, Y., Li, S., & Zhou, X. (2020). A survey of near-optimal control of nonlinear systems. In Studies in Systems, Decision and Control (Vol. 265, pp. 1–20). Springer International Publishing. https://doi.org/10.1007/978-3-030-33384-3_1
