Abstract
Algorithms with fast convergence, small memory footprints, and low per-iteration complexity are particularly favorable for artificial intelligence applications. In this paper, we propose a doubly stochastic algorithm with a novel accelerating multi-momentum technique to solve large-scale empirical risk minimization problems arising in learning tasks. While enjoying a provably superior convergence rate, in each iteration the algorithm accesses only a mini-batch of samples and updates only a small block of variable coordinates, which substantially reduces the volume of memory references when both the sample size and the dimensionality are massive. Specifically, to obtain an ε-accurate solution, our algorithm requires only O(log(1/ε)/√ε) overall computation in the general convex case and O((n + √(nκ)) log(1/ε)) in the strongly convex case, where n is the sample size and κ is the condition number. Empirical studies on huge-scale datasets illustrate the efficiency of our method in practice.
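To make the doubly stochastic update pattern described above concrete, the sketch below applies it to a ridge-regularized least-squares objective: each iteration samples both a mini-batch of examples and a block of coordinates, then updates only that block. This is a minimal illustration under stated assumptions, not the paper's algorithm; a plain heavy-ball momentum term stands in for the authors' multi-momentum acceleration, and every name, step size, and batch/block size here is assumed for illustration.

```python
# Illustrative sketch of a doubly stochastic gradient step with a simple
# heavy-ball momentum term, on ridge-regularized least squares. All
# parameter values are assumptions chosen for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 500                      # sample size, dimensionality
A = rng.standard_normal((n, d)) / np.sqrt(d)
b = rng.standard_normal(n)
lam = 1e-3                            # ridge regularization weight

batch_size, block_size = 32, 50       # |mini-batch|, |coordinate block|
eta, beta = 0.1, 0.9                  # step size and momentum (assumed)

x = np.zeros(d)                       # iterate
v = np.zeros(d)                       # momentum buffer

for t in range(2000):
    # Doubly stochastic sampling: a mini-batch B of examples
    # and a block J of coordinates.
    B = rng.choice(n, size=batch_size, replace=False)
    J = rng.choice(d, size=block_size, replace=False)

    # Partial stochastic gradient of (1/2n)||Ax - b||^2 + (lam/2)||x||^2,
    # restricted to the sampled coordinate block J.
    residual = A[B] @ x - b[B]
    g_J = A[B][:, J].T @ residual / batch_size + lam * x[J]

    # Momentum update confined to block J.
    v[J] = beta * v[J] - eta * g_J
    x[J] += v[J]

loss = 0.5 * np.mean((A @ x - b) ** 2) + 0.5 * lam * x @ x
print(f"final objective: {loss:.4f}")
```

A careful implementation would maintain the product Ax incrementally so each iteration touches only O(|B|·|J|) entries, which is the source of the memory savings the abstract describes; the sketch recomputes the residual for brevity.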
Citation
Shen, Z., Qian, H., Mu, T., & Zhang, C. (2017). Accelerated doubly stochastic gradient algorithm for large-scale empirical risk minimization. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017) (pp. 2715–2721). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/378