Effective Job Execution in Hadoop Over Authorized Deduplicated Data

Abstract

Existing Hadoop treats every job as independent and discards the metadata of preceding jobs. Because each job is independent, data must be read again and again from all DataNodes, and relationships between related jobs are never examined. HDFS also lacks mechanisms for creating specific user identities, forming user groups, and managing user credentials. Together, these weaknesses make overall Hadoop performance very poor. There is therefore a need to improve Hadoop performance through metadata reuse, better space management, better task execution via deduplication checks, and data security through access-rights specification. The proposed system applies a task-deduplication technique: it checks the similarity between jobs by comparing block IDs. Job metadata and data-locality details are stored on the NameNode, which results in better job execution. Because the metadata of executed jobs is preserved, recomputation time is saved. Experimental results show improved job execution time and reduced storage space, and thus improved Hadoop performance.
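The abstract gives no implementation details, but the core idea, fingerprinting a job by the set of HDFS block IDs it reads and consulting preserved job metadata before re-executing, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code; all class, field, and method names (JobDeduplicationSketch, JobMetadata, fingerprint, findDuplicate, preserve) are hypothetical, and the real system would keep the registry on the NameNode rather than in a local map.

```java
import java.util.*;

// Minimal sketch of block-ID-based job deduplication (hypothetical names,
// not the paper's implementation). A job is fingerprinted by the HDFS block
// IDs it reads; preserved metadata of earlier jobs is checked before rerunning.
public class JobDeduplicationSketch {

    // Metadata preserved for an executed job (held on the NameNode in the paper).
    static final class JobMetadata {
        final String jobId;
        final Set<Long> inputBlockIds;   // HDFS block IDs the job read
        final String resultLocation;     // where the job's output lives

        JobMetadata(String jobId, Set<Long> inputBlockIds, String resultLocation) {
            this.jobId = jobId;
            this.inputBlockIds = inputBlockIds;
            this.resultLocation = resultLocation;
        }
    }

    // Registry of preserved job metadata, keyed by a fingerprint of the block-ID set.
    private final Map<String, JobMetadata> preserved = new HashMap<>();

    // Fingerprint a job by its sorted input block IDs.
    static String fingerprint(Set<Long> blockIds) {
        List<Long> sorted = new ArrayList<>(blockIds);
        Collections.sort(sorted);
        return sorted.toString();
    }

    // Returns cached metadata if an equivalent job already ran, else null.
    JobMetadata findDuplicate(Set<Long> blockIds) {
        return preserved.get(fingerprint(blockIds));
    }

    // Record a finished job so later identical jobs can reuse its output.
    void preserve(JobMetadata meta) {
        preserved.put(fingerprint(meta.inputBlockIds), meta);
    }

    public static void main(String[] args) {
        JobDeduplicationSketch registry = new JobDeduplicationSketch();
        Set<Long> blocks = new HashSet<>(Arrays.asList(1073741825L, 1073741826L));

        registry.preserve(new JobMetadata("job_001", blocks, "/results/job_001"));

        // A new job reading the same block IDs is detected as a duplicate,
        // so its preserved result can be reused instead of recomputed.
        JobMetadata hit = registry.findDuplicate(blocks);
        System.out.println(hit != null ? "reuse " + hit.resultLocation : "execute job");
    }
}
```

Under this reading, the savings reported in the abstract come from the cache hit path: a duplicate job skips reading its blocks from the DataNodes entirely and returns the preserved result location.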

Citation (APA)

Sachin Arun, T., Subrahmanyam, K., & Bagwan, A. B. (2020). Effective Job Execution in Hadoop Over Authorized Deduplicated Data. Webology, 17(2), 430–444. https://doi.org/10.14704/WEB/V17I2/WEB17043
