Sparkmach: A distributed data processing system based on automated machine learning for big data

Abstract

This work proposes a semi-automated analysis and modeling package for machine-learning problems. The library's goal is to reduce the number of steps in a traditional data science roadmap. To that end, Sparkmach applies machine-learning techniques to build baseline models for both classification and regression problems, covering exploratory data analysis, data preprocessing, feature engineering, and modeling. The project builds on Pymach, a similar library that addresses these steps for small and medium-sized datasets (about ten million rows and a few columns). Sparkmach's central contribution is to scale Pymach to large datasets using Apache Spark, a distributed engine for large-scale data processing, tackling several data science problems in a cluster environment. Despite its distributed design, Sparkmach can also be used in local environments while still benefiting from the distributed processing tools.

APA

Bravo-Rocca, G., Torres-Robatty, P., & Fiestas-Iquira, J. (2019). Sparkmach: A distributed data processing system based on automated machine learning for big data. In Communications in Computer and Information Science (Vol. 898, pp. 121–128). Springer Verlag. https://doi.org/10.1007/978-3-030-11680-4_13
