Convergent Parallel Algorithms for Big Data Optimization Problems


Abstract

When dealing with big data problems, it is crucial to design methods able to decompose the original problem into smaller, more manageable pieces. Parallel methods reach a solution by concurrently working on different pieces distributed among the available agents, thus exploiting the computational power of multi-core processors to solve the problem efficiently. Beyond gradient-type methods, which can of course be easily parallelized but suffer from practical drawbacks, a convergent decomposition framework for the parallel optimization of (possibly non-convex) big data problems was recently proposed. This framework is very flexible and includes both fully parallel and fully sequential schemes, as well as virtually all possibilities in between. We illustrate the versatility of this parallel decomposition framework by specializing it to well-studied big data problems such as LASSO, logistic regression, and support vector machine training. We give implementation guidelines and numerical results showing that the proposed parallel algorithms work very well in practice.
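To make the decomposition idea concrete, here is a minimal sketch (not the chapter's exact algorithm) of block-coordinate soft-thresholding for LASSO, min 0.5·||Ax − b||² + λ||x||₁. The `subset` parameter is a hypothetical knob illustrating the framework's flexibility: updating every block per sweep gives a fully parallel Jacobi-type scheme, while updating one block at a time gives a fully sequential Gauss-Seidel-type scheme.

```python
def soft_threshold(z, t):
    """Proximal operator of t*|.|: scalar soft-thresholding."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_block_cd(A, b, lam, n_iter=50, subset=None):
    """Block-coordinate descent sketch for LASSO.

    subset=None updates all coordinates from the same residual
    (Jacobi-type, the fully parallel extreme of the framework);
    passing a single index per call mimics the sequential extreme.
    Convergence of the fully parallel sweep is only guaranteed under
    suitable conditions (e.g., weakly correlated columns).
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    # Precompute squared column norms ||A_j||^2 (assumed nonzero).
    col_sq = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        idx = range(n) if subset is None else subset
        # Residual r = A x - b, computed once per sweep so all
        # selected blocks can be updated concurrently.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(m)]
        for j in idx:  # in a real implementation, done in parallel
            g = sum(A[i][j] * r[i] for i in range(m))  # partial gradient
            z = x[j] - g / col_sq[j]
            x[j] = soft_threshold(z, lam / col_sq[j])
    return x
```

For instance, with an orthonormal design the method recovers the closed-form soft-thresholded solution: `lasso_block_cd([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.25], 0.5)` returns approximately `[0.5, 0.0]`.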

Citation (APA)

Sagratella, S. (2016). Convergent Parallel Algorithms for Big Data Optimization Problems. Studies in Big Data, 18, 461–474. https://doi.org/10.1007/978-3-319-30265-2_20
