Scaling machine learning (ML) methods to learn from large datasets requires devising distributed data architectures and algorithms that support their iterative nature, in which the same data records are processed several times. Data caching becomes key to minimizing data transmission across iterations at each node and thus contributes to overall scalability. In this work we propose a two-level caching architecture (disk and memory) and benchmark different caching strategies in a distributed machine learning setup over a cluster with no shared memory. Our results strongly favour strategies where (1) datasets are partitioned and preloaded throughout the distributed memory of the cluster nodes, and (2) algorithms use data locality information to synchronize computations at each iteration. This supports the convergence towards models where “computing goes to data”, as observed in other Big Data contexts, and allows us to align strategies for parallelizing ML algorithms and to configure computing infrastructures appropriately.
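The two-level (memory and disk) caching idea can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual implementation: it assumes an LRU-managed in-memory tier of fixed capacity that evicts records to a disk tier, from which they are promoted back on access.

```python
import collections
import os
import pickle
import tempfile

class TwoLevelCache:
    """Illustrative two-level cache: a small LRU memory tier backed by disk.
    Hypothetical sketch; the paper's architecture is not reproduced here."""

    def __init__(self, mem_capacity=2):
        self.mem_capacity = mem_capacity
        self.mem = collections.OrderedDict()  # memory tier, LRU order
        self.disk_dir = tempfile.mkdtemp()    # disk tier (one file per record)

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, f"{key}.pkl")

    def put(self, key, value):
        self.mem[key] = value
        self.mem.move_to_end(key)             # mark as most recently used
        if len(self.mem) > self.mem_capacity:
            # Evict the least recently used record to the disk tier.
            old_key, old_val = self.mem.popitem(last=False)
            with open(self._disk_path(old_key), "wb") as f:
                pickle.dump(old_val, f)

    def get(self, key):
        if key in self.mem:                   # memory hit
            self.mem.move_to_end(key)
            return self.mem[key]
        path = self._disk_path(key)
        if os.path.exists(path):              # disk hit: promote to memory
            with open(path, "rb") as f:
                value = pickle.load(f)
            self.put(key, value)
            return value
        return None                           # miss
```

In an iterative ML job, each node would consult such a cache before fetching a record over the network, so repeated passes over the same partition hit local memory or disk instead of retransmitting data.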
Ovalle, J. E. A., Ramos-Pollan, R., & González, F. A. (2014). Distributed cache strategies for machine learning classification tasks over cluster computing resources. In Communications in Computer and Information Science (Vol. 485, pp. 43–53). Springer Verlag. https://doi.org/10.1007/978-3-662-45483-1_4