Leader selection problem for stochastically forced consensus networks based on matrix differentiation

Abstract

The leader selection problem refers to determining a predefined number of agents as leaders so as to minimize the mean-square deviation from consensus in stochastically forced networks. The original leader selection problem is formulated as a non-convex optimization problem involving matrix variables. By relaxing the constraints, a convex optimization model can be obtained. By introducing a chain rule of matrix differentiation, we obtain the gradient of the cost function, which consists of matrix variables. We develop a “revisited projected gradient method” (RPGM) and a “probabilistic projected gradient method” (PPGM) to solve the convex and non-convex optimization problems, respectively, and establish the convergence of both methods. For the convex optimization model, the global optimal solution is achieved by RPGM, while for the original non-convex optimization model, a suboptimal solution is achieved by PPGM. Simulation results on both synthetic and real-life networks are provided to show the effectiveness of RPGM and PPGM. This work will deepen the understanding of leader selection problems and enable applications in various real-life distributed control problems.
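To make the projected-gradient idea sketched in the abstract concrete, the following is a minimal illustrative sketch in Python. It assumes a cost of the form J(x) = trace((L + diag(x))^{-1}) on a relaxed leader-indicator vector x in [0, 1]^n with sum(x) = k, a common stand-in for the mean-square deviation from consensus in this literature; the paper's exact cost, constraints, and the RPGM/PPGM update rules may differ. All function names below are hypothetical.

```python
import numpy as np

def project_capped_simplex(y, k, tol=1e-9, max_iter=100):
    """Euclidean projection of y onto {x : 0 <= x <= 1, sum(x) = k},
    computed by bisection on the scalar dual variable tau."""
    lo, hi = y.min() - 1.0, y.max()
    tau = 0.5 * (lo + hi)
    for _ in range(max_iter):
        tau = 0.5 * (lo + hi)
        x = np.clip(y - tau, 0.0, 1.0)
        s = x.sum()
        if abs(s - k) < tol:
            break
        if s > k:      # sum too large -> raise tau
            lo = tau
        else:          # sum too small -> lower tau
            hi = tau
    return np.clip(y - tau, 0.0, 1.0)

def grad_J(L, x):
    """Gradient of the assumed cost J(x) = trace((L + diag(x))^{-1}):
    dJ/dx_i = -[(L + diag(x))^{-2}]_{ii}."""
    M_inv = np.linalg.inv(L + np.diag(x))
    return -np.einsum("ij,ji->i", M_inv, M_inv)

def projected_gradient(L, k, step=0.1, iters=200):
    """Plain projected-gradient descent on the relaxed leader-indicator vector."""
    n = L.shape[0]
    x = np.full(n, k / n)                     # uniform feasible starting point
    for _ in range(iters):
        x = project_capped_simplex(x - step * grad_J(L, x), k)
    return x

# Usage: path graph on 6 nodes, k = 2 leaders.
A = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A                          # graph Laplacian
x_relaxed = projected_gradient(L, k=2)
leaders = np.argsort(x_relaxed)[-2:]   # simple rounding: keep the k largest scores
print(x_relaxed, leaders)
```

Rounding the relaxed solution by keeping the k largest entries is only one heuristic for recovering a binary leader set; the paper's PPGM addresses the non-convex (binary) formulation directly.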

Citation (APA)

Gao, L., Zhao, G., Li, G., & Yang, Z. (2017). Leader selection problem for stochastically forced consensus networks based on matrix differentiation. Physica A: Statistical Mechanics and Its Applications, 469, 799–812. https://doi.org/10.1016/j.physa.2016.11.111
