Learn your opponent’s strategy (in polynomial time)!

Abstract

Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about those with whom they share the same world. This paper examines interactions among agents from a game-theoretic perspective. In this context, learning has been assumed to be a means of reaching equilibrium. We analyze the complexity of this learning process. We start with a restricted two-agent model, in which agents are represented by finite automata and one of the agents plays a fixed strategy. We show that even with these restrictions, the learning process may require exponential time. We then suggest a criterion of simplicity that induces a class of automata learnable in polynomial time.
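To make the model concrete, here is a minimal sketch of what "an agent represented by a finite automaton playing a fixed strategy" can look like in a repeated game. The Moore-machine encoding, the Python representation, and the Tit-for-Tat example are illustrative assumptions for this note, not the construction used in the paper.

```python
# Illustrative sketch (not the paper's construction): a repeated-game strategy
# modeled as a deterministic finite automaton (Moore machine). Each state emits
# the action the agent plays; transitions are driven by the opponent's last action.

from dataclasses import dataclass


@dataclass
class StrategyAutomaton:
    states: list   # state names
    start: str     # initial state
    output: dict   # state -> action played while in that state
    delta: dict    # (state, opponent_action) -> next state

    def play(self, opponent_actions):
        """Return this automaton's action sequence against a given opponent history."""
        state, actions = self.start, []
        for opp in opponent_actions:
            actions.append(self.output[state])
            state = self.delta[(state, opp)]
        return actions


# Tit-for-Tat in the iterated Prisoner's Dilemma as a two-state automaton:
# start by cooperating, then mirror the opponent's previous move.
tit_for_tat = StrategyAutomaton(
    states=["C", "D"],
    start="C",
    output={"C": "cooperate", "D": "defect"},
    delta={("C", "cooperate"): "C", ("C", "defect"): "D",
           ("D", "cooperate"): "C", ("D", "defect"): "D"},
)

if __name__ == "__main__":
    # Observing responses to chosen action sequences is the kind of interaction
    # a learning agent can use to infer the opponent's fixed strategy.
    print(tit_for_tat.play(["cooperate", "defect", "defect", "cooperate"]))
    # -> ['cooperate', 'cooperate', 'defect', 'defect']
```

The learning question studied in the paper is how much such interaction is needed to identify the hidden automaton; the example above only shows the object being learned, not the learning algorithm.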

Cite

APA

Mor, Y., Goldman, C. V., & Rosenschein, J. S. (1996). Learn your opponent’s strategy (in polynomial time)! In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1042, pp. 165–176). Springer Verlag. https://doi.org/10.1007/3-540-60923-7_26
