We resolve the min-max complexity of distributed stochastic convex optimization (up to a log factor) in the intermittent communication setting, where M machines work in parallel over the course of R rounds of communication to optimize the objective, and during each round of communication, each machine may sequentially compute K stochastic gradient estimates. We present a novel lower bound with a matching upper bound that establishes an optimal algorithm.
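To make the intermittent communication setting concrete, here is a minimal sketch, not the paper's optimal algorithm: it simulates M machines that each take K sequential stochastic gradient steps per round and average their iterates at each of R communication rounds (i.e., Local SGD as an illustrative baseline), on a toy one-dimensional quadratic with artificial gradient noise. All function names and hyperparameters here are illustrative assumptions.

```python
import random

def stochastic_grad(x, noise_std=0.1):
    # Noisy gradient of the toy convex objective f(x) = x^2 / 2,
    # whose exact gradient is x. Noise models the stochastic oracle.
    return x + random.gauss(0.0, noise_std)

def local_sgd(M=4, R=10, K=5, lr=0.1, x0=5.0, seed=0):
    # Intermittent communication: M machines work in parallel over
    # R rounds; within a round each machine takes K sequential
    # stochastic gradient steps, then all iterates are averaged.
    random.seed(seed)
    x = x0  # shared iterate after each communication round
    for _ in range(R):
        local_iterates = []
        for _ in range(M):
            xm = x  # each machine starts from the shared iterate
            for _ in range(K):
                xm -= lr * stochastic_grad(xm)
            local_iterates.append(xm)
        x = sum(local_iterates) / M  # communication: average
    return x

print(local_sgd())  # final iterate, near the optimum x* = 0
```

Each run uses M*K*R stochastic gradient evaluations but only R rounds of communication, which is exactly the trade-off whose min-max complexity the paper characterizes.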
Woodworth, B., Bullins, B., Shamir, O., & Srebro, N. (2021). The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. In Proceedings of Machine Learning Research (Vol. 134, pp. 4386–4437). ML Research Press.