Distributed Machine Learning using HDFS and Apache Spark for Big Data Challenges


Abstract

Hadoop and Apache Spark have become popular frameworks for distributed big data processing. This research configures Hadoop and Spark to train and test machine learning models on big data in a distributed fashion using MLlib, specifically linear regression and multi-linear regression; an LSTM model implemented with an external library is also included for comparison. The experiments use three desktop machines to run tests on single-node and multi-node clusters. Three datasets serve as case studies: bitcoin (3,613,767 rows), gold price (5,585 rows), and housing price (23,613 rows). The distributed-computation tests allocate a uniform number of processor cores on all three machines and measure execution time as well as RMSE and MAPE values. In the single-node tests with MLlib (both linear and multi-linear regression), varying core utilization from 2 to 16 cores, all datasets perform best with 12 cores, with an execution time of 532.328 seconds. For the LSTM method, varying the core allocation yields no significant improvement and requires longer program execution times. In the two-node tests, optimal performance is achieved with 8 cores and an execution time of 924.711 seconds, while in the three-node tests the ideal configuration is 6 cores with an execution time of 881.495 seconds. In conclusion, distributed MLlib programs cannot be processed without HDFS, and the optimal core allocation depends on the number of nodes used and the size of the dataset.
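
The workflow summarized above (reading a dataset from HDFS, fitting an MLlib regression model, and measuring RMSE and MAPE) can be sketched in PySpark. This is a minimal illustration under stated assumptions, not the authors' actual code: the HDFS URI, file name, feature and label column names, and the 12-core setting are placeholders chosen for the sketch.

```python
# Minimal PySpark sketch: MLlib linear regression on a CSV read from HDFS,
# reporting RMSE and MAPE. All paths and column names below are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

spark = (
    SparkSession.builder
    .appName("distributed-linear-regression")
    # Per-executor core allocation; 12 cores was the reported single-node optimum.
    .config("spark.executor.cores", "12")
    .getOrCreate()
)

# Placeholder HDFS location; the namenode address and file name are illustrative.
df = spark.read.csv("hdfs://namenode:9000/data/gold_price.csv",
                    header=True, inferSchema=True)

# Assemble feature columns into a single vector column for MLlib
# (column names "open", "high", "low", "price" are assumptions).
assembler = VectorAssembler(inputCols=["open", "high", "low"], outputCol="features")
data = assembler.transform(df).select("features", "price")

train, test = data.randomSplit([0.8, 0.2], seed=42)

# Multi-linear regression in MLlib is LinearRegression with a multi-feature vector.
model = LinearRegression(featuresCol="features", labelCol="price").fit(train)
pred = model.transform(test)

# RMSE via MLlib's built-in regression evaluator.
rmse = RegressionEvaluator(labelCol="price", predictionCol="prediction",
                           metricName="rmse").evaluate(pred)

# MAPE computed manually (assumes no zero-valued labels).
mape = pred.select(
    F.mean(F.abs((F.col("price") - F.col("prediction")) / F.col("price"))).alias("mape")
).first()["mape"] * 100

print(f"RMSE = {rmse:.3f}, MAPE = {mape:.2f}%")
spark.stop()
```

When submitting such a job to a cluster, the per-executor core count is usually set at launch time, for example with spark-submit's --executor-cores option; that is the knob the core-allocation experiments vary across the 2 to 16 core range.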



Citation (APA)

Indirman, M. D. C., Wiriasto, G. W., & Irfan Akbar, L. A. S. (2023). Distributed Machine Learning using HDFS and Apache Spark for Big Data Challenges. In E3S Web of Conferences (Vol. 465). EDP Sciences. https://doi.org/10.1051/e3sconf/202346502058
