Output Resilient Containment Control of Heterogeneous Systems with Active Leaders Using Reinforcement Learning under Attack Inputs

Abstract

This paper proposes an optimal solution to the distributed output containment control problem of heterogeneous multi-agent systems (MASs) with unknown active leaders under attack inputs, using data-based off-policy reinforcement learning (RL). The control input of each leader is assumed to be bounded and non-zero, and the followers are vulnerable to attack signals in real-world applications. First, distributed observers are designed so that the states and outputs of the observers fall into the convex hull formed by the leaders. The output containment problem is then converted into an H∞ tracking problem by minimizing a value function; solving the optimal H∞ tracking problem for each follower yields algebraic Riccati equations (AREs), which are computed by a data-based off-policy RL algorithm without using the agents' dynamics. Finally, the effectiveness of the algorithm is verified by a simulation example.
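To make the per-follower H∞ tracking step concrete, the following is a minimal, simplified sketch of solving the associated game-type ARE by model-based policy iteration. It is illustrative only: the paper's contribution is a data-based off-policy RL algorithm that solves the same equation without knowledge of the system matrices, and all matrices and parameter values below (A, B, D, Q, R, gamma) are hypothetical placeholders.

```python
# Simplified, model-based policy-iteration sketch for the H-infinity ARE
#   A'P + P A + Q - P B R^{-1} B' P + (1/gamma^2) P D D' P = 0
# NOTE: the paper solves this equation with a data-based off-policy RL
# algorithm that does NOT require A, B, D; this model-based loop is only
# an illustration of the underlying fixed point.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hinf_policy_iteration(A, B, D, Q, R, gamma, iters=50, tol=1e-8):
    """Iterate on a control gain K and a worst-case disturbance gain L so
    that P approaches the stabilizing solution of the game ARE."""
    n = A.shape[0]
    K = np.zeros((B.shape[1], n))   # initial control gain (assumed stabilizing)
    L = np.zeros((D.shape[1], n))   # initial disturbance (attack) gain
    P_prev = np.zeros((n, n))
    for _ in range(iters):
        Ac = A - B @ K + D @ L                       # closed-loop matrix under (K, L)
        Qc = Q + K.T @ R @ K - gamma**2 * (L.T @ L)  # stage cost under (K, L)
        # Policy evaluation: solve Ac' P + P Ac + Qc = 0 (continuous Lyapunov eq.)
        P = solve_continuous_lyapunov(Ac.T, -Qc)
        # Policy improvement for both players
        K = np.linalg.solve(R, B.T @ P)
        L = (1.0 / gamma**2) * (D.T @ P)
        if np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K, L

# Hypothetical second-order follower dynamics, for illustration only.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.2]])     # channel through which the attack input enters
P, K, L = hinf_policy_iteration(A, B, D, Q=np.eye(2), R=np.eye(1), gamma=5.0)
print("P =", P)
```

In the data-based off-policy setting of the paper, the policy-evaluation step above is replaced by least-squares equations built from measured state and input data collected under a behavior policy, which is what removes the need for the model matrices.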

Citation (APA)
Li, Q., Xia, L., & Song, R. (2019). Output Resilient Containment Control of Heterogeneous Systems with Active Leaders Using Reinforcement Learning under Attack Inputs. IEEE Access, 7, 162219–162228. https://doi.org/10.1109/ACCESS.2019.2947558
