Fast Convergence of Regularized Learning in Games

Abstract

We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal-form games. When each player in a game uses an algorithm from our class, their individual regret decays at $O(T^{-3/4})$, while the sum of utilities converges to an approximate optimum at $O(T^{-1})$, an improvement upon the worst-case $O(T^{-1/2})$ rates. We show a black-box reduction for any algorithm in the class to achieve $\tilde{O}(T^{-1/2})$ rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of [Rakhlin and Sridharan 2013] and [Daskalakis et al. 2014], who only analyzed two-player zero-sum games for specific algorithms.
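A canonical member of this algorithm class is Optimistic Hedge: ordinary exponential weights, except that the most recent utility vector is counted a second time as a prediction of the next round's utilities (the recency bias the abstract refers to). The following is a minimal Python sketch of two Optimistic Hedge players in a toy zero-sum game; the payoff matrix, step size, and all names are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Illustrative 2x2 matching-pennies payoff matrix for the row player;
    # the column player's utility is the negation (zero-sum game).
    A = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])

    T = 10_000
    eta = 0.1  # step size; the paper tunes this as a function of T

    def hedge_weights(cum_u, last_u, eta):
        """Optimistic Hedge update: exponential weights on the cumulative
        utilities plus the last round's utilities counted once more,
        i.e. last_u serves as a prediction of the next round."""
        z = eta * (cum_u + last_u)
        z -= z.max()                  # subtract max for numerical stability
        w = np.exp(z)
        return w / w.sum()

    cum1 = np.zeros(2); last1 = np.zeros(2); real1 = 0.0
    cum2 = np.zeros(2); last2 = np.zeros(2)

    for t in range(T):
        x = hedge_weights(cum1, last1, eta)   # row player's mixed strategy
        y = hedge_weights(cum2, last2, eta)   # column player's mixed strategy
        u1 = A @ y                            # row player's expected utilities
        u2 = -(A.T @ x)                       # column player's expected utilities
        cum1 += u1; last1 = u1; real1 += x @ u1
        cum2 += u2; last2 = u2

    # Average regret of the row player: best fixed action in hindsight
    # versus the utility actually collected.
    print((cum1.max() - real1) / T)

When every player runs an update of this form, the abstract's faster rates apply: individual regret decays at $O(T^{-3/4})$ and the sum of utilities converges at $O(T^{-1})$, versus the $O(T^{-1/2})$ guarantee of generic no-regret dynamics.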

Authors

  • Vasilis Syrgkanis

  • Alekh Agarwal

  • Haipeng Luo

  • Robert E. Schapire
