Temporal Induced Self-Play for Stochastic Bayesian Games

Abstract

One practical requirement in solving dynamic games is to ensure that the players play well from any decision point onward. Existing efforts to satisfy this requirement focus on equilibrium refinement, but the scalability and applicability of existing techniques are limited. In this paper, we propose Temporal-Induced Self-Play (TISP), a novel reinforcement-learning-based framework for finding strategies with decent performance from any decision point onward. TISP uses belief-space representation, backward induction, policy learning, and non-parametric approximation. Building upon TISP, we design a policy-gradient-based algorithm, TISP-PG. We prove that TISP-based algorithms can find approximate Perfect Bayesian Equilibria in zero-sum one-sided stochastic Bayesian games with finite horizon. We test TISP-based algorithms in various games, including finitely repeated security games and a grid-world game. The results show that TISP-PG is more scalable than existing mathematical-programming-based methods and significantly outperforms other learning-based methods.
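
The paper itself gives the formal construction, but the high-level recipe named in the abstract (backward induction over a belief-space representation, with non-parametric approximation between sampled belief points) can be illustrated with a toy sketch. The Python snippet below is an illustrative assumption, not the authors' implementation: it runs the backward pass on a hypothetical two-type, zero-sum, one-sided Bayesian repeated matrix game, substitutes an exhaustive grid search over mixed strategies for policy-gradient learning, and uses nearest-neighbour lookup as the non-parametric value approximation. All names, payoffs, and parameters are made up for illustration.

import numpy as np

HORIZON = 3                                # finite horizon T
BELIEF_GRID = np.linspace(0.0, 1.0, 11)    # sampled belief points (P of type 0)
STRAT_GRID = np.linspace(0.0, 1.0, 5)      # coarse grid over mixed strategies

# Stage payoff to the informed (maximizing) player: PAYOFF[type][a1][a2].
PAYOFF = np.array([
    [[1.0, -1.0], [-1.0, 1.0]],            # type 0: matching pennies
    [[-1.0, 1.0], [1.0, -1.0]],            # type 1: payoffs flipped
])

def stage_value(b, x0, x1, y):
    # Expected stage payoff at belief b, where x0/x1 are the per-type
    # probabilities of action 0 and y is the uninformed player's probability.
    val = 0.0
    for typ, (bt, xt) in enumerate(((b, x0), (1.0 - b, x1))):
        for a1, pa1 in ((0, xt), (1, 1.0 - xt)):
            for a2, pa2 in ((0, y), (1, 1.0 - y)):
                val += bt * pa1 * pa2 * PAYOFF[typ][a1][a2]
    return val

def nn_value(v_next, b):
    # Non-parametric (nearest-neighbour) approximation of the next-step value.
    return v_next[np.abs(BELIEF_GRID - b).argmin()]

# Backward induction: solve one belief-indexed subgame per sampled belief
# point, from the last time step back to the first. values[T] = 0 (terminal).
values = np.zeros((HORIZON + 1, len(BELIEF_GRID)))
for t in range(HORIZON - 1, -1, -1):
    for i, b in enumerate(BELIEF_GRID):
        best = -np.inf
        for x0 in STRAT_GRID:
            for x1 in STRAT_GRID:
                # Bayes-update the belief after each observable action a1 and
                # back up the continuation value from the t+1 value table.
                cont = 0.0
                for a1 in (0, 1):
                    p0 = x0 if a1 == 0 else 1.0 - x0
                    p1 = x1 if a1 == 0 else 1.0 - x1
                    pa = b * p0 + (1.0 - b) * p1
                    if pa > 0.0:
                        cont += pa * nn_value(values[t + 1], b * p0 / pa)
                # The uninformed player minimizes; continuation here does not
                # depend on y, so only the stage payoff is minimized over.
                worst = min(stage_value(b, x0, x1, y) for y in STRAT_GRID)
                best = max(best, worst + cont)
        values[t][i] = best

print("Approximate game values at t=0 over the belief grid:")
print(np.round(values[0], 3))

In the full method as described in the abstract, the grid-search stage solve at each sampled belief point would instead be policy-gradient self-play (TISP-PG), and the policies and values learned at the sampled points would be combined non-parametrically at execution time to play from an arbitrary belief.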

Citation (APA)

Chen, W., Zhou, Z., Wu, Y., & Fang, F. (2021). Temporal Induced Self-Play for Stochastic Bayesian Games. In IJCAI International Joint Conference on Artificial Intelligence (pp. 96–103). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/14
