Abstract
Distributed optimization has become an important research topic for handling the extremely large volumes of data available in Internet companies today. Additional machines make computation less expensive, but inter-machine communication becomes a prominent cost in the optimization process, so efficient optimization methods should reduce the amount of communication in order to achieve shorter overall running time. In this work, we exploit the advantages of the recently proposed, theoretically fast-convergent common-directions method, while tackling its main drawback of excessive memory and computational cost, to propose a limited-memory algorithm. The result is an efficient, linearly convergent optimization method for parallel/distributed optimization. We further discuss how our method can exploit the problem structure to efficiently train regularized empirical risk minimization (ERM) models. Experimental results show that our method outperforms state-of-the-art distributed optimization methods on ERM problems.
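To make the high-level idea concrete, the following is a rough illustrative sketch, not the paper's exact algorithm: a "limited-memory common directions" style update on a strongly convex quadratic f(w) = 0.5 wᵀAw − bᵀw. It keeps only the last m gradients as the direction matrix P (the limited-memory part) and, at each step, solves the small m-dimensional subproblem min over t of f(w + Pt), which reduces to the linear system (PᵀAP)t = −Pᵀ∇f(w). All names and parameters here are hypothetical choices for illustration.

```python
import numpy as np

def limited_memory_common_directions(A, b, m=5, iters=50, tol=1e-8):
    """Minimize f(w) = 0.5 w^T A w - b^T w over a sliding subspace of
    the last m gradient directions (illustrative sketch only)."""
    n = len(b)
    w = np.zeros(n)
    dirs = []  # ring buffer holding the last m gradients
    for _ in range(iters):
        g = A @ w - b  # gradient of f at w
        if np.linalg.norm(g) < tol:
            break
        dirs.append(g)
        if len(dirs) > m:
            dirs.pop(0)  # drop the oldest direction (limited memory)
        P = np.column_stack(dirs)
        # Small m x m subproblem for the combination weights t:
        #   min_t f(w + P t)  <=>  (P^T A P) t = -P^T g.
        # lstsq is used because P's columns can become nearly
        # dependent close to the optimum.
        H = P.T @ A @ P
        t = np.linalg.lstsq(H, -P.T @ g, rcond=None)[0]
        w = w + P @ t
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    A = M @ M.T + 20 * np.eye(20)  # symmetric positive definite
    b = rng.standard_normal(20)
    w = limited_memory_common_directions(A, b)
    print(np.linalg.norm(A @ w - b))  # residual norm, should be small
```

In a distributed ERM setting, the appeal of this scheme is that the expensive products with the data (here stand-ins via A) can be parallelized, while the subproblem over t is only m-dimensional, keeping per-iteration communication small.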
Lee, C. P., Wang, P. W., Chen, W., & Lin, C. J. (2017). Limited-memory common-directions method for distributed optimization and its application on empirical risk minimization. In Proceedings of the 17th SIAM International Conference on Data Mining, SDM 2017 (pp. 732–740). Society for Industrial and Applied Mathematics Publications. https://doi.org/10.1137/1.9781611974973.82