A core question in multi-agent systems is understanding the motivations for an agent's actions based on its observed behavior. Inverse reinforcement learning provides a framework for extracting utility functions from observed agent behavior, casting the problem as finding domain parameters that induce such behavior in rational decision makers. We show how to extend inverse reinforcement learning to multi-agent settings efficiently and scalably by reducing the multi-agent problem to N single-agent problems while still satisfying rationality conditions such as strong rationality. However, we observe that naively learned rewards tend to lack insightful structure, which causes them to produce undesirable behavior when optimized in games whose players differ from those encountered during training. We further investigate the conditions under which rewards or utility functions can be precisely identified, in problem domains such as normal-form and Markov games, as well as auctions, where we show that we can learn reward functions that generalize properly to new settings.
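To make the reduction concrete, the sketch below shows one way the decomposition could look in a two-player normal-form game. It assumes (our assumption for illustration, not necessarily the paper's exact model) maximum-entropy / quantal-response rationality: each player plays a softmax over expected payoffs given the opponent's observed mixed strategy. Fixing the opponent's empirical strategy turns each player's problem into an ordinary single-agent inversion, and inverting the softmax recovers expected payoffs only up to an additive constant, the usual identifiability limit in inverse reinforcement learning.

```python
import numpy as np

# Hedged, self-contained sketch of the multi-agent-to-single-agent reduction
# described in the abstract, under an assumed quantal-response rationality
# model. Nothing here is the authors' implementation.

rng = np.random.default_rng(0)

# Ground-truth payoff matrices (row player A, column player B).
A = np.array([[3.0, 0.0], [5.0, 1.0]])
B = np.array([[3.0, 5.0], [0.0, 1.0]])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Compute a quantal-response fixed point by iterating softmax responses.
pA = np.ones(2) / 2
pB = np.ones(2) / 2
for _ in range(200):
    pA = softmax(A @ pB)    # row player's softmax response to pB
    pB = softmax(B.T @ pA)  # column player's softmax response to pA

# "Observed behavior": joint actions sampled from the equilibrium strategies.
n = 100_000
a = rng.choice(2, size=n, p=pA)
b = rng.choice(2, size=n, p=pB)

# Per-player single-agent step: fix the opponent's empirical strategy and
# invert the softmax. log(action frequency) recovers each player's expected
# payoffs up to an additive constant.
freq_a = np.bincount(a, minlength=2) / n
freq_b = np.bincount(b, minlength=2) / n
uA_hat = np.log(freq_a)
uB_hat = np.log(freq_b)

print("player A expected payoffs (true):   ", A @ pB)
print("player A recovered (up to constant):", uA_hat)
print("player B expected payoffs (true):   ", B.T @ pA)
print("player B recovered (up to constant):", uB_hat)
```

Running the sketch shows the recovered values matching the true expected payoffs after a constant shift, which is exactly the identifiability caveat the abstract's generalization results address.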
Fu, J., Tacchetti, A., Perolat, J., & Bachrach, Y. (2021). Evaluating strategic structures in multi-agent inverse reinforcement learning. Journal of Artificial Intelligence Research, 71, 925–951. https://doi.org/10.1613/jair.1.12594