Parallel Optimization Techniques for Machine Learning

Citations: 0 · Mendeley readers: 3
Abstract

In this chapter we discuss higher-order methods for optimization problems in machine learning applications. We present the underlying theoretical background and detailed experimental results for each of these higher-order methods, along with an in-depth comparison against competing methods on real-world datasets. We show that, contrary to popular understanding, higher-order methods can achieve significantly better results than state-of-the-art competing methods in shorter wall-clock times, yielding orders-of-magnitude relative speedups on typical real-world datasets.
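To make the abstract's claim concrete, the following is a minimal sketch (not the chapter's actual code; the objective and step sizes are illustrative assumptions) contrasting a second-order Newton update, which scales the gradient by local curvature, with a fixed-step first-order update on a simple strongly convex one-dimensional objective. The iteration counts illustrate why curvature information can translate into far fewer steps.

```python
# Hypothetical smooth, strongly convex test objective with minimizer at x = 3.
def f(x):
    return (x - 3.0) ** 2 + 0.5 * (x - 3.0) ** 4

def grad(x):   # first derivative
    return 2.0 * (x - 3.0) + 2.0 * (x - 3.0) ** 3

def hess(x):   # second derivative -- the "higher-order" information
    return 2.0 + 6.0 * (x - 3.0) ** 2

def newton(x0, tol=1e-10, max_iter=100):
    """Newton's method: update scaled by the inverse Hessian."""
    x = x0
    for k in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return x, k
        x -= g / hess(x)          # curvature-scaled step
    return x, max_iter

def gradient_descent(x0, lr=0.02, tol=1e-10, max_iter=10_000):
    """Plain first-order method with a fixed (illustrative) step size."""
    x = x0
    for k in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return x, k
        x -= lr * g               # fixed-step update
    return x, max_iter

x_n, iters_n = newton(0.0)
x_g, iters_g = gradient_descent(0.0)
print(f"Newton:           x = {x_n:.10f} in {iters_n} iterations")
print(f"Gradient descent: x = {x_g:.10f} in {iters_g} iterations")
```

Both methods reach the same minimizer, but the Newton iteration converges quadratically near the solution, while the first-order method contracts only linearly; this mirrors, in miniature, the iteration-count gap the chapter reports at scale.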

Citation (APA)
Kylasa, S., Fang, C. H., Roosta, F., & Grama, A. (2020). Parallel Optimization Techniques for Machine Learning. In Modeling and Simulation in Science, Engineering and Technology (pp. 381–417). Birkhauser. https://doi.org/10.1007/978-3-030-43736-7_13
