Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task. When working with massive data, it is desirable to perform stochastic optimization in parallel. Unfortunately, many existing stochastic optimization algorithms cannot be parallelized efficiently. In this paper we show that one can rewrite the regularized risk minimization problem as an equivalent saddle-point problem, and propose an efficient distributed stochastic optimization (DSO) algorithm. We prove the algorithm’s rate of convergence; remarkably, our analysis shows that the algorithm scales almost linearly with the number of processors. Empirical evaluations also verify that the proposed algorithm is competitive with other parallel, general-purpose stochastic and batch optimization algorithms for regularized risk minimization.
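The abstract does not spell out the saddle-point rewriting, but a standard route to such an equivalence is Fenchel duality. Writing the regularized risk with regularizer Ω, per-example losses ℓ_i (labels absorbed), training points x_1, …, x_m, and ℓ_i* the convex conjugate of ℓ_i (this notation is our assumption, not taken from the paper), one has, for closed convex losses,

    \min_{w}\; \lambda\,\Omega(w) + \frac{1}{m}\sum_{i=1}^{m} \ell_i(\langle w, x_i \rangle)
    \;=\; \min_{w}\,\max_{\alpha \in \mathbb{R}^{m}}\; \lambda\,\Omega(w)
        + \frac{1}{m}\sum_{i=1}^{m} \bigl[ \alpha_i \langle w, x_i \rangle - \ell_i^{*}(\alpha_i) \bigr],

since ℓ_i(u) = sup_α (αu − ℓ_i*(α)). The right-hand side is a convex-concave saddle-point problem with one dual variable α_i per training example.

As a concrete serial illustration (explicitly not the paper's DSO algorithm), the following Python/NumPy sketch runs stochastic primal-dual updates on this saddle function for squared loss ℓ_i(u) = (u − y_i)²/2, whose conjugate is ℓ_i*(α) = α²/2 + α y_i, with an L2 regularizer; all names and step sizes are illustrative assumptions:

    import numpy as np

    def stochastic_saddle_point(X, y, lam=0.1, eta=0.01, epochs=20, seed=0):
        # Serial stochastic primal-dual updates on the saddle objective
        #   (lam/2)||w||^2 + (1/m) sum_i [alpha_i <w, x_i> - alpha_i^2/2 - alpha_i y_i]
        # for squared loss; an illustrative sketch, not the paper's DSO scheme.
        rng = np.random.default_rng(seed)
        m, d = X.shape
        w = np.zeros(d)       # primal variables (model weights)
        alpha = np.zeros(m)   # dual variables, one per training example
        for _ in range(epochs):
            for i in rng.permutation(m):
                g_w = lam * w + alpha[i] * X[i]   # stochastic gradient in w (descend)
                g_a = X[i] @ w - alpha[i] - y[i]  # gradient in alpha_i (ascend)
                w -= eta * g_w
                alpha[i] += eta * g_a
        return w

    # Example usage on synthetic data:
    # X = np.random.randn(200, 5); w_true = np.arange(5.0)
    # y = X @ w_true + 0.1 * np.random.randn(200)
    # w_hat = stochastic_saddle_point(X, y)

One appeal of the saddle-point form for parallelism is that each stochastic update touches only one dual coordinate alpha[i] and the features present in x_i, so updates on disjoint blocks of examples and features do not conflict; as we understand it, exploiting such block structure across processors is what underlies the near-linear scaling claimed in the abstract.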
Citation: Matsushima, S., Yun, H., Zhang, X., & Vishwanathan, S. V. N. (2017). Distributed Stochastic Optimization of Regularized Risk via Saddle-Point Problem. In Lecture Notes in Computer Science (Vol. 10534 LNAI, pp. 460–476). Springer. https://doi.org/10.1007/978-3-319-71249-9_28