In this chapter we discuss higher-order methods for optimization problems in machine learning. For each method, we present the underlying theoretical background and detailed experimental results, along with an in-depth comparison against competing methods on real-world datasets. We show that, contrary to popular belief, higher-order methods can achieve significantly better results than state-of-the-art competing methods in shorter wall-clock time, yielding orders-of-magnitude relative speedups on typical real-world datasets.
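The core idea behind higher-order methods can be illustrated with a toy example. The sketch below (my own illustration, not the chapter's actual algorithms, which involve Hessian-vector products and parallel solvers) compares a plain Newton iteration against fixed-step gradient descent on an assumed strongly convex test function f(x) = x² + eˣ; the function, starting point, and step size are all assumptions made for illustration:

```python
import math

def fprime(x):
    # Gradient of the assumed test function f(x) = x**2 + exp(x)
    return 2.0 * x + math.exp(x)

def fsecond(x):
    # Second derivative (curvature); strictly positive, so f is strongly convex
    return 2.0 + math.exp(x)

def newton(x0, tol=1e-10, max_iter=1000):
    """Second-order iteration: rescale the gradient by the local curvature."""
    x, it = x0, 0
    while abs(fprime(x)) > tol and it < max_iter:
        x -= fprime(x) / fsecond(x)
        it += 1
    return x, it

def gradient_descent(x0, lr=0.1, tol=1e-10, max_iter=100000):
    """First-order baseline with a fixed (hand-tuned) step size."""
    x, it = x0, 0
    while abs(fprime(x)) > tol and it < max_iter:
        x -= lr * fprime(x)
        it += 1
    return x, it

xn, n_newton = newton(2.0)
xg, n_gd = gradient_descent(2.0)
print(f"Newton: x*={xn:.6f} in {n_newton} iters; GD: x*={xg:.6f} in {n_gd} iters")
```

Both reach the same minimizer, but Newton's curvature-aware step needs far fewer iterations; the chapter's central claim is that, with the right parallel implementation, this iteration advantage can translate into wall-clock speedups as well.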
Citation:
Kylasa, S., Fang, C. H., Roosta, F., & Grama, A. (2020). Parallel Optimization Techniques for Machine Learning. In Modeling and Simulation in Science, Engineering and Technology (pp. 381–417). Birkhäuser. https://doi.org/10.1007/978-3-030-43736-7_13