Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games


Abstract

We study the application of multi-agent reinforcement learning to game-theoretic problems. In particular, we are interested in coalition formation problems and their variants, such as hedonic coalition formation games (also called hedonic games), matching (a common type of hedonic game), and coalition formation for task allocation. We consider decentralized multi-agent systems in which autonomous agents inhabit an environment without any prior knowledge of the other agents or of the system. We also consider spatial formulations of these problems, which most of the coalition formation literature avoids because they significantly increase computational complexity. We propose novel decentralized heuristic learning and multi-agent reinforcement learning (MARL) approaches to train agents, and we evaluate them with game-theoretic criteria such as optimality, stability, and indices like the Shapley value.
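
As one concrete illustration of the evaluation criteria named above, the following is a minimal sketch (not taken from the paper) of computing the Shapley value exactly for a small characteristic-function game by averaging each agent's marginal contribution over all orders in which the grand coalition could be assembled; the value function v and the three-agent setup are hypothetical examples.

    from itertools import permutations

    def shapley_values(players, v):
        # Average each player's marginal contribution over every join order.
        orders = list(permutations(players))
        totals = {p: 0.0 for p in players}
        for order in orders:
            coalition = frozenset()
            for p in order:
                joined = coalition | {p}
                totals[p] += v(joined) - v(coalition)
                coalition = joined
        return {p: totals[p] / len(orders) for p in players}

    # Hypothetical 3-agent game: any coalition of two or more agents is worth 1.
    def v(coalition):
        return 1.0 if len(coalition) >= 2 else 0.0

    print(shapley_values(("a", "b", "c"), v))
    # By symmetry, each agent receives 1/3.

This exact enumeration is factorial in the number of agents, which is one reason approximate, decentralized learning approaches are attractive for larger coalition formation problems.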

Citation (APA)

Taywade, K. (2021). Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 18, pp. 15738–15739). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i18.17866
