Reducing Stage Weight Estimation Error of Slow Task Detection in MapReduce Scheduling

Abstract

The Hadoop architecture mainly comprises Hadoop MapReduce and the Hadoop Distributed File System (HDFS) for processing big data sets. Distributed processing has been widely used for handling large-scale data sets. In recent years, the volume of data has been growing exponentially, and the scale of processing has grown with it. For this reason, the Hadoop architecture has attracted, and been adopted by, many cloud computing enterprises. MapReduce is a programming model created and used effectively by Google for performing computations on its large data sets. The LATE, SAMR, and ESAMR scheduling algorithms were all introduced to improve the speculative re-execution of slow tasks over Hadoop's default job scheduler. In our work, we propose replacing the k-means clustering used in the ESAMR algorithm for estimating a task's stage weights with a multilayer, feedforward, non-linear sigmoid perceptron model of an Artificial Neural Network, thereby improving the efficiency of the ESAMR algorithm.
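
A minimal sketch of the idea, under assumptions not stated in the abstract: a small multilayer, feedforward sigmoid perceptron (plain NumPy, no Hadoop integration) trained on historical job records to estimate a task's stage weights, the role that k-means clustering plays in ESAMR. The feature set, network size, and the four-stage weight vector (map, shuffle, sort, reduce) are illustrative choices, not the authors' implementation.

```python
# Sketch: multilayer feedforward sigmoid perceptron for stage weight estimation.
# Features, dimensions, and training data below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical historical data: each row of X holds job features
# (e.g. input size, map selectivity, cluster load); each row of Y holds
# observed stage weights (map, shuffle, sort, reduce) summing to 1.
X = rng.random((200, 3))
Y = rng.dirichlet(alpha=[2.0, 1.0, 1.0, 2.0], size=200)

# Two-layer network: 3 inputs -> 8 hidden sigmoid units -> 4 sigmoid outputs.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 4)); b2 = np.zeros(4)

lr = 0.5
for epoch in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    # Squared-error gradient, backpropagated through both sigmoid layers.
    dP = (P - Y) * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dP / len(X); b2 -= lr * dP.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

# Estimate stage weights for a new job and normalise them to sum to 1,
# since stage weights are used as fractions of total task time.
x_new = rng.random((1, 3))
w = sigmoid(sigmoid(x_new @ W1 + b1) @ W2 + b2)
print("estimated stage weights:", (w / w.sum()).ravel())
```

In ESAMR the estimated stage weights feed the remaining-time calculation used to flag slow tasks for speculative re-execution; the sketch only illustrates the estimation step being swapped from clustering to a trained network.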

Citation (APA)

Upadhyay, U., & Sikka, G. (2018). Reducing Stage Weight Estimation Error of Slow Task Detection in MapReduce Scheduling. In Advances in Intelligent Systems and Computing (Vol. 723, pp. 284–291). Springer Verlag. https://doi.org/10.1007/978-3-319-74690-6_28
