Actor-Critic Reinforcement Learning Control of Non-Strict Feedback Nonaffine Dynamic Systems

Abstract

Most existing work on actor-critic reinforcement learning control (ARLC) focuses on continuous affine systems or discrete nonaffine systems. In this paper, I propose a new ARLC method for continuous nonaffine dynamic systems subject to unknown dynamics and external disturbances. A new input-to-state stable system is developed to establish an augmented dynamic system, from which a strict-feedback affine model, convenient for control design, is further obtained via a model transformation approach. The Nussbaum function is combined with a fuzzy approximator to devise an actor network whose tracking performance is further enhanced by strengthening signals generated by a fuzzy critic network. The stability of the closed-loop control system is guaranteed by Lyapunov synthesis. Finally, comparative simulation results are presented to verify the design.
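To make the actor-critic structure described above more concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of a fuzzy-approximation actor-critic tracking loop for a simple scalar nonaffine plant. The plant, the Gaussian fuzzy basis, the adaptation laws, and all gains are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration only: a fuzzy (Gaussian-basis) actor-critic
# tracking loop on a toy scalar nonaffine plant. The plant model, update
# laws, and gains below are assumptions, not the paper's design.

def fuzzy_basis(x, centers, width=0.5):
    """Normalized Gaussian fuzzy basis functions evaluated at x."""
    phi = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return phi / (np.sum(phi) + 1e-8)

centers = np.linspace(-2.0, 2.0, 9)   # fuzzy membership centers (assumed)
Wa = np.zeros_like(centers)           # actor network weights
Wc = np.zeros_like(centers)           # critic network weights
ka, kc, k_e = 2.0, 1.0, 3.0           # adaptation / feedback gains (assumed)
dt = 0.01

x = 0.0
for k in range(5000):
    t = k * dt
    xd = np.sin(t)                    # reference trajectory
    e = x - xd                        # tracking error
    phi = fuzzy_basis(e, centers)

    # Critic: produces a strengthening (reinforcement) signal from the error.
    r = Wc @ phi

    # Actor: error feedback plus fuzzy compensation shaped by the critic signal.
    u = -k_e * e + Wa @ phi

    # Gradient-like adaptation laws (illustrative only).
    Wa += dt * (-ka * phi * (e + r))
    Wc += dt * (-kc * phi * e)

    # Toy nonaffine plant with a small disturbance:
    # x_dot = 0.1*x + u + 0.2*sin(u) + 0.05*sin(2t)
    x += dt * (0.1 * x + u + 0.2 * np.sin(u) + 0.05 * np.sin(2 * t))
```

In this kind of scheme the critic's output augments the actor's adaptation signal so that tracking errors are penalized more strongly; the actual paper additionally handles the unknown control direction via a Nussbaum function, which is omitted here for brevity.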

Citation (APA)

Bu, X. (2019). Actor-Critic Reinforcement Learning Control of Non-Strict Feedback Nonaffine Dynamic Systems. IEEE Access, 7, 65569–65578. https://doi.org/10.1109/ACCESS.2019.2917141
