Reinforcement Learning-Based Near Optimization for Continuous-Time Markov Jump Singularly Perturbed Systems

Abstract

This brief studies the design of a suboptimal controller for continuous-time Markov jump singularly perturbed systems with partially unknown dynamics. Using the fast and slow decomposition technique, the original Markov jump singularly perturbed system is decomposed, as a new attempt, into fast and slow subsystems. On this basis, an offline parallel Kleinman algorithm and an online parallel integral reinforcement learning algorithm are presented to handle the two subsystems, respectively. The controllers obtained by these two algorithms are then combined to design a suboptimal controller for the original system. Furthermore, the suboptimality of the proposed controller is analyzed. Finally, an electric circuit model example illustrates the applicability of the proposed method.
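For context, the classical Kleinman algorithm referenced in the abstract is a policy-iteration scheme for the continuous-time algebraic Riccati equation: starting from a stabilizing gain, it alternately solves a Lyapunov equation for the cost matrix and updates the gain. The sketch below is a generic illustration of that standard iteration (for a plain linear system, not the paper's parallel or Markov jump variant); the system matrices and the Lyapunov solver are illustrative assumptions.

```python
import numpy as np

def lyap(A, Q):
    # Solve A^T P + P A + Q = 0 via the Kronecker/vec identity
    # (row-major vec: vec(A^T P) = kron(A^T, I) vec(P), vec(P A) = kron(I, A^T) vec(P)).
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    return np.linalg.solve(M, -Q.flatten()).reshape(n, n)

def kleinman(A, B, Q, R, K0, iters=30):
    # Kleinman policy iteration for the continuous-time ARE.
    # K0 must stabilize A - B K0; each step solves a Lyapunov equation
    # for the cost matrix P and refines the feedback gain K.
    K = K0
    for _ in range(iters):
        Ak = A - B @ K                     # closed-loop matrix under current policy
        P = lyap(Ak, Q + K.T @ R @ K)      # policy evaluation
        K = np.linalg.solve(R, B.T @ P)    # policy improvement
    return P, K
```

For a double integrator with Q = I and R = 1, the iteration converges to the known ARE solution P = [[sqrt(3), 1], [1, sqrt(3)]] with gain K = [1, sqrt(3)].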

Citation (APA)

Wang, J., Peng, C., Park, J. H., Shen, H., & Shi, K. (2023). Reinforcement Learning-Based Near Optimization for Continuous-Time Markov Jump Singularly Perturbed Systems. IEEE Transactions on Circuits and Systems II: Express Briefs, 70(6), 2026–2030. https://doi.org/10.1109/TCSII.2022.3233790
