H∞ Control for Discrete-Time Multi-Player Systems via Off-Policy Q-Learning


Abstract

This paper presents a novel off-policy game Q-learning algorithm to solve the H∞ control problem for discrete-time linear multi-player systems with completely unknown system dynamics. The primary contribution of this paper is that the Q-learning strategy in the proposed algorithm is implemented via off-policy policy iteration rather than on-policy learning, since off-policy learning has several well-known advantages over on-policy learning. All players cooperate to minimize a common performance index while counteracting the disturbance, which tries to maximize that index; the players ultimately reach the Nash equilibrium of the game, at which the disturbance attenuation condition is satisfied. To find the Nash equilibrium solution, the H∞ control problem is first transformed into an optimal control problem. An off-policy Q-learning algorithm is then developed within the standard adaptive dynamic programming (ADP) and game-theoretic architecture, so that the control policies of all players can be learned using only measured data. More importantly, a rigorous proof is given that the solution obtained by the proposed off-policy game Q-learning algorithm is unbiased with respect to the Nash equilibrium. Comparative simulation results are provided to verify the effectiveness and demonstrate the advantages of the proposed method.

Citation (APA)
Li, J., & Xiao, Z. (2020). H∞ Control for Discrete-Time Multi-Player Systems via Off-Policy Q-Learning. IEEE Access, 8, 28831–28846. https://doi.org/10.1109/ACCESS.2020.2970760
