Data storage optimization in cloud environment

Abstract

Data de-duplication is a process that stores a single copy of data by eliminating redundant copies and replacing them with references to the existing unique data. Meanwhile, cloud storage grows day by day because of the large volumes of data generated daily, and users rely on the cloud to store the large amounts of data they produce. Many Internet services, such as blogs and social networks, generate huge amounts of data that contain considerable redundancy; de-duplication exists to store and manage such data efficiently. This paper applies a data de-duplication framework in the cloud environment and assesses the performance of the compressed storage area under two de-duplication strategies: file level and chunk level. Combining de-duplication with compression also improves the compression rate of the storage device. The research achieves substantial storage efficiency, and the experiments show that chunk-level de-duplication outperforms file-level de-duplication.
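To make the chunk-level strategy concrete, here is a minimal sketch of fixed-size chunk de-duplication using content hashing. The chunk size, hash choice, and function names are illustrative assumptions, not the paper's actual implementation; real systems typically use kilobyte-sized (often variable-sized) chunks.

```python
import hashlib

CHUNK_SIZE = 8  # illustrative; production systems use far larger chunks


def dedup_store(data: bytes, store: dict) -> list:
    """Split data into fixed-size chunks and keep one copy of each unique chunk.

    Returns a list of chunk digests that acts as the file's 'recipe':
    duplicate chunks are represented by references, not stored again.
    """
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store the chunk only if unseen
        refs.append(digest)
    return refs


def restore(refs: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk references."""
    return b"".join(store[d] for d in refs)


store = {}
refs = dedup_store(b"AAAAAAAABBBBBBBBAAAAAAAA", store)
# Three 8-byte chunks of input, but only two unique chunks are stored.
```

File-level de-duplication works the same way with the whole file as the unit, so a single changed byte forces a full new copy; chunking localizes the change to one chunk, which is why the paper finds chunk level more effective.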


Citation (APA)
Deivamani, M., Vikraman, R., Abirami, S., & Baskaran, R. (2015). Data storage optimization in cloud environment. In Advances in Intelligent Systems and Computing (Vol. 325, pp. 419–429). Springer Verlag. https://doi.org/10.1007/978-81-322-2135-7_45
