Distributed cache strategies for machine learning classification tasks over cluster computing resources

Abstract

Scaling machine learning (ML) methods to learn from large datasets requires devising distributed data architectures and algorithms that support their iterative nature, in which the same data records are processed several times. Data caching becomes key to minimizing data transmission across iterations at each node and thus contributes to overall scalability. In this work we propose a two-level caching architecture (disk and memory) and benchmark different caching strategies in a distributed machine learning setup over a cluster with no shared memory. Our results strongly favour strategies where (1) datasets are partitioned and preloaded throughout the distributed memory of the cluster nodes, and (2) algorithms use data locality information to synchronize computations at each iteration. This supports the convergence towards models where "computing goes to data", as observed in other Big Data contexts, and allows us to align strategies for parallelizing ML algorithms and to configure computing infrastructures appropriately.
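The abstract does not spell out implementation details, but the two-level idea can be illustrated concretely. The sketch below is a minimal Python approximation of a memory-plus-disk cache for dataset partitions: on a miss at both levels a partition is fetched from the data source, spilled to local disk, and promoted to memory so that subsequent iterations over the same records avoid network transfer. The class name TwoLevelCache, the load_partition callback, and the eviction policy are hypothetical stand-ins, not the authors' architecture.

```python
import os
import pickle


class TwoLevelCache:
    """Illustrative two-level (memory + disk) cache for dataset partitions.

    A sketch only: the paper's actual APIs and eviction policies are not
    given in the abstract; `load_partition` is a hypothetical callback
    standing in for whatever fetches a partition over the network.
    """

    def __init__(self, disk_dir, load_partition, max_in_memory=4):
        self.disk_dir = disk_dir
        self.load_partition = load_partition  # hypothetical remote loader
        self.max_in_memory = max_in_memory
        self.memory = {}  # level 1: local RAM
        os.makedirs(disk_dir, exist_ok=True)

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, f"{key}.pkl")

    def get(self, key):
        # Level 1: memory hit avoids any I/O on repeated iterations.
        if key in self.memory:
            return self.memory[key]
        # Level 2: local disk hit avoids a network transfer.
        path = self._disk_path(key)
        if os.path.exists(path):
            with open(path, "rb") as f:
                partition = pickle.load(f)
        else:
            # Miss at both levels: fetch from the source and spill to disk.
            partition = self.load_partition(key)
            with open(path, "wb") as f:
                pickle.dump(partition, f)
        # Promote to memory, evicting the oldest entry if the level is full.
        if len(self.memory) >= self.max_in_memory:
            self.memory.pop(next(iter(self.memory)))
        self.memory[key] = partition
        return partition


# Usage: an iterative learner re-reads the same partitions every epoch,
# so only the first epoch pays the fetch cost.
cache = TwoLevelCache("/tmp/ml-cache", load_partition=lambda k: list(range(1000)))
for epoch in range(10):
    for key in ["part-0", "part-1"]:
        data = cache.get(key)  # remote fetch only on the first epoch
```

In the "computing goes to data" spirit the abstract describes, a scheduler would additionally assign each iteration's work to the node whose cache already holds the relevant partition, rather than moving partitions between nodes.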

Citation (APA)

Ovalle, J. E. A., Ramos-Pollan, R., & González, F. A. (2014). Distributed cache strategies for machine learning classification tasks over cluster computing resources. In Communications in Computer and Information Science (Vol. 485, pp. 43–53). Springer Verlag. https://doi.org/10.1007/978-3-662-45483-1_4