Parallel boosting with momentum


Abstract

We describe a new, simplified, and general analysis of a fusion of Nesterov's accelerated gradient with parallel coordinate descent. The resulting algorithm, which we call BOOM, for boosting with momentum, enjoys the merits of both techniques. Namely, BOOM retains the momentum and convergence properties of the accelerated gradient method while taking into account the curvature of the objective function. We describe a distributed implementation of BOOM which is suitable for massive high-dimensional datasets. We show experimentally that BOOM is especially effective in large-scale learning problems with rare yet informative features. © 2013 Springer-Verlag.
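The abstract's central idea, fusing Nesterov-style momentum with per-coordinate curvature scaling, can be illustrated with a toy sketch. This is not the authors' BOOM algorithm (the paper's distributed, boosting-specific construction is not reproduced here); it is only a minimal least-squares example, under the assumption that each coordinate's step size is scaled by a diagonal curvature bound while a standard Nesterov look-ahead supplies the momentum:

```python
import numpy as np

def momentum_coordinate_sketch(X, y, iters=3000):
    """Toy sketch (NOT the paper's BOOM algorithm): Nesterov-accelerated
    gradient descent for least squares, with per-coordinate step sizes
    derived from a diagonal curvature bound on the objective."""
    n, d = X.shape
    # Diagonal of the Hessian X^T X bounds curvature along each coordinate.
    # Scaling by d gives a safe majorizer: H <= d * diag(H) for PSD H.
    curvature = d * (X ** 2).sum(axis=0) + 1e-12
    w = np.zeros(d)       # current iterate
    w_prev = np.zeros(d)  # previous iterate, used by the momentum term
    for t in range(1, iters + 1):
        beta = (t - 1) / (t + 2)      # Nesterov momentum coefficient
        v = w + beta * (w - w_prev)   # look-ahead (momentum) point
        grad = X.T @ (X @ v - y)      # gradient at the look-ahead point
        w_prev = w
        w = v - grad / curvature      # coordinate-wise curvature-scaled step
    return w
```

The per-coordinate division by `curvature` stands in for the curvature-awareness the abstract attributes to coordinate descent, while the `beta`-weighted look-ahead supplies the accelerated-gradient momentum; the `d * diag(H)` majorization is one conservative way to keep such coordinate-wise steps stable.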

Citation (APA)

Mukherjee, I., Canini, K., Frongillo, R., & Singer, Y. (2013). Parallel boosting with momentum. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8190 LNAI, pp. 17–32). https://doi.org/10.1007/978-3-642-40994-3_2
