Linear-Like Policy Iteration Based Optimal Control for Continuous-Time Nonlinear Systems



Abstract

We propose a novel strategy to construct optimal controllers for continuous-time nonlinear systems by means of linear-like techniques, provided that the optimal value function is differentiable and quadratic-like. This assumption covers a wide range of cases and holds locally around an equilibrium under mild assumptions. The proposed strategy does not require solving the Hamilton-Jacobi-Bellman equation, i.e., a nonlinear partial differential equation, which is known to be hard or impossible to solve. Instead, the Hamilton-Jacobi-Bellman equation is replaced with an easily solvable state-dependent Lyapunov matrix equation. We exploit a linear-like factorization of the underlying nonlinear system together with a policy-iteration algorithm to yield a linear-like policy iteration for nonlinear systems. The proposed control strategy solves optimal nonlinear control problems in an asymptotically exact, yet still linear-like, manner. We prove optimality of the resulting solution and illustrate the results via four examples.
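To give a concrete sense of the mechanism the abstract describes, the sketch below shows the classical linear special case: Kleinman's policy iteration for the LQR problem, where each policy-evaluation step solves a Lyapunov matrix equation instead of the nonlinear (algebraic Riccati) equation. This is not the authors' nonlinear algorithm — their contribution extends this idea via a state-dependent factorization — but it illustrates the evaluate/improve loop and the role of the Lyapunov equation. The system matrices and function name here are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    """Kleinman policy iteration for continuous-time LQR.

    Each iteration replaces the nonlinear Riccati equation with a
    *linear* Lyapunov equation for the current (stabilizing) gain K:
        (A - B K)^T P + P (A - B K) = -(Q + K^T R K)   (evaluation)
        K <- R^{-1} B^T P                              (improvement)
    Starting from a stabilizing K0, the iterates converge to the
    optimal P and K of the associated Riccati equation.
    """
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # Policy evaluation: solve the Lyapunov equation for P.
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement: one-step minimization of the Hamiltonian.
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Illustrative example (assumed data): A is Hurwitz, so K0 = 0 is stabilizing.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
P, K = policy_iteration_lqr(A, B, Q, R, K0=np.zeros((1, 2)))
```

In the paper's linear-like setting, the constant pair (A, B) is replaced by a state-dependent factorization of the nonlinear dynamics, so that each policy-evaluation step solves a state-dependent Lyapunov matrix equation of the same form.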

Citation (APA)

Tahirovic, A., & Astolfi, A. (2023). Linear-Like Policy Iteration Based Optimal Control for Continuous-Time Nonlinear Systems. IEEE Transactions on Automatic Control, 68(10), 5837–5849. https://doi.org/10.1109/TAC.2022.3226671
