Game-theoretic learning in distributed control

Abstract

In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy and transportation systems. One approach to controlling such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components' incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.
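
To make the utility-design-plus-learning-dynamics recipe concrete, here is a minimal sketch, not taken from the chapter, of best-response dynamics in a two-resource congestion game. Congestion games are a standard example of the potential games the chapter's game classes cover, and in them asynchronous best-response updates are known to reach a pure Nash equilibrium. All names and parameters below (NUM_AGENTS, the negative-congestion utility, the round-robin update order) are illustrative assumptions, not the authors' construction.

```python
# Sketch: best-response learning in a two-resource congestion game.
# Each agent's utility is the negative of the load on its chosen
# resource (a local quantity), and agents update one at a time.
# Because this is a potential game, the loop below terminates at a
# pure Nash equilibrium.

import random

NUM_AGENTS = 6
RESOURCES = [0, 1]  # two identical resources

def load(choices, resource):
    """Number of agents currently using `resource`."""
    return sum(1 for c in choices if c == resource)

def utility(choices, agent, resource):
    """Negative congestion the agent would experience on `resource`,
    counting itself, given everyone else's current choices."""
    others = load(choices, resource) - (1 if choices[agent] == resource else 0)
    return -(others + 1)

def best_response(choices, agent):
    """Resource maximizing the agent's utility against the others."""
    return max(RESOURCES, key=lambda r: utility(choices, agent, r))

random.seed(0)
choices = [random.choice(RESOURCES) for _ in range(NUM_AGENTS)]

changed = True
while changed:  # stop when no agent can improve by deviating
    changed = False
    for agent in range(NUM_AGENTS):
        br = best_response(choices, agent)
        if br != choices[agent]:
            choices[agent] = br
            changed = True

print("Equilibrium allocation:", choices)
```

Running the sketch balances the load at three agents per resource, an equilibrium that here happens to coincide with the efficient allocation. In general the two need not coincide, and quantifying that gap, for example via the price of anarchy, is the role of the measures of distributed efficiency the abstract mentions.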

Citation
Marden, J. R., & Shamma, J. S. (2018). Game-theoretic learning in distributed control. In Handbook of Dynamic Game Theory (pp. 511–546). Springer International Publishing. https://doi.org/10.1007/978-3-319-44374-4_9
