Time Estimation and Resource Minimization Scheme for Apache Spark and Hadoop Big Data Systems with Failures


Abstract

Apache Spark and Hadoop are open-source frameworks for big data processing that have been adopted by many companies. Implementing a reliable big data system that can meet target processing completion times requires accurate resource provisioning and job execution time estimation. In this paper, time estimation and resource minimization schemes for Spark and Hadoop systems are presented. The proposed models incorporate the probability of failure into the estimations to more accurately capture the characteristics of real big data operations. Experimental results show that the proposed Spark adaptive failure-compensation and Hadoop adaptive failure-compensation schemes improve the accuracy of resource provisioning by accounting for failure events, which in turn improves the scheduling success rate of big data processing tasks.
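
As a rough illustration of how a failure probability can enter a job execution time estimate, the sketch below inflates a nominal per-task time by the expected number of retry attempts and assumes tasks run in waves across a fixed number of executor slots. This is a generic, simplified model written for illustration only; it is not the paper's formulation, and all function names and parameter values are hypothetical.

```python
# Illustrative sketch only (not the authors' model): estimate job completion time
# when each task fails independently with probability p_fail and a failed attempt
# is re-executed until it succeeds.

def expected_task_time(t_nominal: float, p_fail: float) -> float:
    """Expected time for one task including retries.
    With independent failures, the expected number of attempts is 1 / (1 - p_fail)."""
    if not 0.0 <= p_fail < 1.0:
        raise ValueError("p_fail must be in [0, 1)")
    return t_nominal / (1.0 - p_fail)

def estimate_job_time(num_tasks: int, t_task: float, slots: int, p_fail: float) -> float:
    """Rough job completion time: tasks run in waves across the available executor
    slots, with each task's time inflated by the expected retry overhead."""
    waves = -(-num_tasks // slots)  # ceiling division: number of scheduling waves
    return waves * expected_task_time(t_task, p_fail)

if __name__ == "__main__":
    # Hypothetical numbers: 400 tasks of ~30 s each, 100 executor slots, 5% failure rate.
    print(estimate_job_time(num_tasks=400, t_task=30.0, slots=100, p_fail=0.05))
```

Under these assumptions, ignoring failures (p_fail = 0) would underestimate the job time and lead to under-provisioned resources, which is the kind of gap the paper's failure-compensation schemes are designed to close.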

Citation (APA)

Lee, J., Kim, B., & Chung, J. M. (2019). Time Estimation and Resource Minimization Scheme for Apache Spark and Hadoop Big Data Systems with Failures. IEEE Access, 7, 9658–9666. https://doi.org/10.1109/ACCESS.2019.2891001
