An efficient data integration framework in cloud using MapReduce

Abstract

In big data applications, securing massive data is an important challenge because working with such data requires large-scale resources that must be provided by a cloud service provider. This paper demonstrates a cloud implementation built on big data technologies and discusses how such data can be protected using hashing and how users can be authenticated. In particular, it discusses big data technologies such as the Apache Hadoop project, which provides parallelized, distributed analysis and processing of petabytes of data, along with a summarized view of the monitoring and usage of a Hadoop cluster. The paper introduces the FNV hashing algorithm to provide integrity for data that a user has outsourced to the cloud; data within the Hadoop cluster can be accessed and verified using this hash. The approach addresses several new security challenges in the cloud environment built on the Hadoop Distributed File System, and cluster performance is monitored with the Ganglia monitoring tool. The paper also designs an evaluation cloud model that provides quantitative results for regularly checking accuracy and cost. The experimental results show that the model is more accurate, cheaper, and able to respond in real time.
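The abstract does not state which FNV variant or bit width the framework uses, so the sketch below assumes 64-bit FNV-1a and illustrates, under that assumption, how a user could record a hash before outsourcing a file and recompute it later to verify integrity. The function names (fnv1a_64, verify_file) and the chunked-read verification are illustrative, not taken from the chapter.

```python
# Minimal sketch of FNV-1a (64-bit assumed) for integrity checking of outsourced data.
# The chapter does not specify the variant; constants below are the standard FNV-1a/64 values.

FNV64_OFFSET_BASIS = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3
MASK64 = 0xFFFFFFFFFFFFFFFF  # keep the hash within 64 bits


def fnv1a_64(data: bytes) -> int:
    """Compute the 64-bit FNV-1a hash of a byte string."""
    h = FNV64_OFFSET_BASIS
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & MASK64
    return h


def verify_file(path: str, expected_hash: int, chunk_size: int = 1 << 20) -> bool:
    """Recompute the hash of a retrieved copy and compare it with the value
    recorded before the data was outsourced (hypothetical verification step)."""
    h = FNV64_OFFSET_BASIS
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            for byte in chunk:
                h ^= byte
                h = (h * FNV64_PRIME) & MASK64
    return h == expected_hash


if __name__ == "__main__":
    original = b"record outsourced to the Hadoop cluster"
    digest = fnv1a_64(original)
    # A mismatch on recomputation would indicate the stored data was altered.
    print(hex(digest), fnv1a_64(original) == digest)
```

Because FNV-1a processes input one byte at a time, the same hash can be computed incrementally over HDFS blocks or file chunks, which is why the verification step above can stream the file rather than load it into memory.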

Citation (APA)

Srinivasa Rao, P., Krishna Prasad, M. H. M., & Thammi Reddy, K. (2015). An efficient data integration framework in cloud using mapreduce. In SpringerBriefs in Applied Sciences and Technology (pp. 129–137). Springer Verlag. https://doi.org/10.1007/978-981-287-338-5_11
