Delay-Tolerant Online Convex Optimization: Unified Analysis and Adaptive-Gradient Algorithms

35 citations of this article · 23 Mendeley readers

Abstract

We present a unified, black-box-style method for developing and analyzing online convex optimization (OCO) algorithms for full-information online learning in delayed-feedback environments. Our new, simplified analysis enables us to substantially improve upon previous work and to solve a number of open problems from the literature. Specifically, we develop and analyze asynchronous AdaGrad-style algorithms from the Follow-The-Regularized-Leader (FTRL) and Mirror-Descent family that, unlike previous works, can handle projections and adapt both to the gradients and the delays, without relying on either strong convexity or smoothness of the objective function, or data sparsity. Our unified framework builds on a natural reduction from delayed-feedback to standard (non-delayed) online learning. This reduction, together with recent unification results for OCO algorithms, allows us to analyze the regret of generic FTRL and Mirror-Descent algorithms in the delayed-feedback setting in a unified manner using standard proof techniques. In addition, the reduction is exact and can be used to obtain both upper and lower bounds on the regret in the delayed-feedback setting.
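The reduction sketched in the abstract lends itself to a short illustration. The Python snippet below is a minimal sketch, not the paper's pseudocode: a base diagonal-AdaGrad learner with projection is wrapped so that each delayed gradient is fed to it only upon arrival, while the wrapper always plays the base learner's current point. The class names, the L2-ball feasible set, the random linear losses, and the delay model are all illustrative assumptions, not taken from the paper.

import numpy as np

class AdaGrad:
    """Diagonal AdaGrad with projection onto an L2 ball (base OCO learner).
    This is a standard textbook variant, used here only as a stand-in base."""
    def __init__(self, dim, radius=1.0, eta=1.0, eps=1e-8):
        self.x = np.zeros(dim)
        self.g2 = np.zeros(dim)      # running sum of squared gradient coordinates
        self.radius, self.eta, self.eps = radius, eta, eps

    def predict(self):
        return self.x.copy()

    def update(self, grad):
        self.g2 += grad ** 2
        self.x -= self.eta * grad / np.sqrt(self.g2 + self.eps)
        norm = np.linalg.norm(self.x)
        if norm > self.radius:       # projection keeps the iterate feasible
            self.x *= self.radius / norm

class DelayedFeedbackWrapper:
    """Hypothetical wrapper illustrating the reduction idea: play the base
    learner's current point every round; apply each gradient to the base
    learner only in the round it actually arrives."""
    def __init__(self, base):
        self.base = base
        self.pending = {}            # arrival round -> list of gradients

    def predict(self, t):
        for g in self.pending.pop(t, []):   # process feedback arriving now
            self.base.update(g)
        return self.base.predict()

    def observe(self, t, delay, grad):
        # Gradient of the round-t loss becomes available after `delay` rounds.
        self.pending.setdefault(t + 1 + delay, []).append(grad)

# Illustrative usage: random linear losses with random delays in 5 dimensions.
rng = np.random.default_rng(0)
learner = DelayedFeedbackWrapper(AdaGrad(dim=5))
for t in range(100):
    x_t = learner.predict(t)
    grad = rng.normal(size=5)        # stand-in for the loss gradient at x_t
    learner.observe(t, delay=int(rng.integers(0, 4)), grad=grad)

With zero delays the wrapper reduces exactly to the base learner, which is the intuition behind analyzing the delayed setting through the regret of the non-delayed algorithm.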

Cite

CITATION STYLE

APA

Joulani, P., György, A., & Szepesvári, C. (2016). Delay-tolerant online convex optimization: Unified analysis and adaptive-gradient algorithms. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 1744–1750). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10320
