A general class of no-regret learning algorithms and game-theoretic equilibria

Abstract

A general class of no-regret learning algorithms, called φ-no-regret learning algorithms, is defined, spanning the spectrum from no-internal-regret learning to no-external-regret learning, and beyond. φ describes the set of strategies to which the play of a learning algorithm is compared: a learning algorithm satisfies φ-no-regret iff no regret is experienced for playing as the algorithm prescribes, rather than playing according to any of the transformations of the algorithm's play prescribed by elements of φ. Analogously, a class of game-theoretic equilibria, called φ-equilibria, is defined, and it is shown that the empirical distribution of play of φ-no-regret algorithms converges to the set of φ-equilibria. Perhaps surprisingly, the strongest no-regret algorithms in this class are no-internal-regret algorithms. Thus, the tightest game-theoretic solution concept to which φ-no-regret algorithms (provably) converge is correlated equilibrium. In particular, Nash equilibrium is not a necessary outcome of learning via any φ-no-regret learning algorithm.
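
As a concrete illustration (a sketch, not taken from the paper), the following Python snippet computes the empirical φ-regret of a play history: the largest average gain that could have been obtained by applying a single transformation in φ to every realized action. The names empirical_phi_regret, external_phi, and internal_phi are illustrative choices rather than the authors' notation; constant transformations recover external regret, and single-action swaps recover internal regret.

# Minimal sketch (illustrative, not from the paper): empirical phi-regret of a
# play history, given per-step reward functions and a set phi of transformations.
from typing import Callable, Dict, List

Action = int
Transformation = Callable[[Action], Action]
Rewards = Dict[Action, float]  # reward of each action at one time step


def empirical_phi_regret(plays: List[Action],
                         rewards: List[Rewards],
                         phi: List[Transformation]) -> float:
    """Largest average gain from applying one f in phi to every realized play."""
    T = len(plays)
    realized = sum(rewards[t][plays[t]] for t in range(T))
    return max((sum(rewards[t][f(plays[t])] for t in range(T)) - realized) / T
               for f in phi)


def external_phi(actions: List[Action]) -> List[Transformation]:
    # Constant transformations ("always play a"): phi-regret = external regret.
    return [lambda x, a=a: a for a in actions]


def internal_phi(actions: List[Action]) -> List[Transformation]:
    # Single-action swaps ("play b whenever you played a"): internal regret.
    return [lambda x, a=a, b=b: (b if x == a else x)
            for a in actions for b in actions if a != b]


if __name__ == "__main__":
    actions = [0, 1, 2]
    plays = [0, 1, 0, 1]
    rewards = [{0: 0.0, 1: 0.0, 2: 1.0},
               {0: 0.0, 1: 1.0, 2: 0.0},
               {0: 0.0, 1: 0.0, 2: 1.0},
               {0: 0.0, 1: 1.0, 2: 0.0}]
    print(empirical_phi_regret(plays, rewards, external_phi(actions)))  # 0.0
    print(empirical_phi_regret(plays, rewards, internal_phi(actions)))  # 0.5

In this small example no constant action beats the realized play (external regret 0), yet swapping every play of action 0 for action 2 would have helped (internal regret 0.5), matching the abstract's observation that no-internal-regret is the stronger requirement.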

Citation (APA)

Greenwald, A., & Jafari, A. (2003). A general class of no-regret learning algorithms and game-theoretic equilibria. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2777, pp. 2–12). Springer Verlag. https://doi.org/10.1007/978-3-540-45167-9_2
