A modified key partitioning for BigData using MapReduce in Hadoop

Abstract

In the era of BigData, massive amounts of structured and unstructured data are generated every day by a multitude of ubiquitous sources. BigData is difficult to work with and requires massively parallel software running on a large number of computers. MapReduce is a recent programming model that simplifies writing distributed applications that manipulate BigData. For MapReduce to work, it must divide the workload among the computers in the network. As a result, the performance of MapReduce depends strongly on how evenly it distributes this workload. This can be a challenge, particularly in the presence of data skew. In MapReduce, workload distribution depends on the algorithm that partitions the data. How evenly the partitioner distributes the data depends on how large and representative the sample is and on how well the samples are analyzed by the partitioning mechanism. This study proposes an enhanced partitioning algorithm using modified key partitioning that improves load balancing and memory utilization. This is achieved through an enhanced sampling algorithm and partitioner. To evaluate the proposed algorithm, its performance was compared against the state-of-the-art partitioning mechanism employed by TeraSort. Experiments demonstrate that the proposed algorithm is faster, more memory efficient and more accurate than the existing implementation.
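To illustrate the general idea behind sampling-based partitioning that the abstract describes (and that TeraSort-style partitioners use), the sketch below derives reducer cut points from a sorted key sample and assigns each key to a partition by binary search. This is a minimal, hypothetical illustration of the technique, not the authors' actual algorithm; the function names and the quantile-based choice of cut points are assumptions for demonstration.

```python
import bisect


def compute_split_points(sample, num_reducers):
    """Derive num_reducers - 1 cut points from a sample of keys.

    The sample is sorted and cut at evenly spaced quantiles, so that
    (for a representative sample) each reducer receives roughly the
    same share of keys even when the key distribution is skewed.
    """
    ordered = sorted(sample)
    return [ordered[(i * len(ordered)) // num_reducers]
            for i in range(1, num_reducers)]


def partition(key, split_points):
    """Assign a key to a reducer index via binary search over cut points.

    Keys below the first cut point go to reducer 0, keys at or above
    the last cut point go to the final reducer; the output order of the
    reducers therefore yields a totally ordered result, as in TeraSort.
    """
    return bisect.bisect_right(split_points, key)


# Example: 4 reducers over uniformly sampled integer keys.
splits = compute_split_points(list(range(100)), 4)  # -> [25, 50, 75]
```

How evenly the resulting load is balanced hinges on the sample quality, which is exactly the dependency the abstract highlights: a small or unrepresentative sample places the cut points poorly and some reducers end up overloaded.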


Citation (APA)
Ekambaram, G., & Palanisamy, B. (2015). A modified key partitioning for BigData using MapReduce in Hadoop. Journal of Computer Science, 11(3), 490–497. https://doi.org/10.3844/jcssp.2015.490.497
