Experimental study on chunking algorithms of data deduplication system on large scale data

Abstract

Data deduplication, also known as data redundancy elimination, is a technique for saving storage space. Deduplication is highly effective in backup storage environments, where a large number of redundancies may exist. These redundancies can be eliminated by computing and comparing fingerprints. The comparison may be performed at the file level, or the files may be split into chunks and the comparison performed at the chunk level. File-level deduplication yields poorer results than chunk-level deduplication, since it computes a single hash over the entire file and can therefore eliminate only whole duplicate files. This paper presents an experimental study of various chunking algorithms, since chunking plays a very important role in a data redundancy elimination system.
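To make the file-level versus chunk-level distinction concrete, here is a minimal Python sketch. It is not one of the algorithms evaluated in the paper; the fixed 4 KB chunk size and SHA-1 fingerprints are illustrative assumptions, and content-defined chunking schemes would instead place boundaries where a rolling hash matches a pattern.

```python
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size, for illustration only

def file_fingerprint(data: bytes) -> str:
    """File-level dedup: one fingerprint per file, so only
    whole-file duplicates can be detected."""
    return hashlib.sha1(data).hexdigest()

def deduplicate(files: dict[str, bytes]):
    """Chunk-level dedup: split each file into fixed-size chunks,
    fingerprint each chunk, and store every unique chunk only once."""
    store: dict[str, bytes] = {}        # fingerprint -> chunk payload
    recipes: dict[str, list[str]] = {}  # filename -> ordered fingerprints
    for name, data in files.items():
        recipe = []
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            fp = hashlib.sha1(chunk).hexdigest()
            store.setdefault(fp, chunk)  # duplicate chunks are not stored again
            recipe.append(fp)
        recipes[name] = recipe
    return store, recipes
```

In this sketch, two backup files that share most of their content collide on many chunk fingerprints and so share storage, whereas `file_fingerprint` would treat them as entirely distinct; this is the advantage of chunk-level deduplication that the abstract describes.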

Citation (APA)

Nisha, T. R., Abirami, S., & Manohar, E. (2016). Experimental study on chunking algorithms of data deduplication system on large scale data. In Advances in Intelligent Systems and Computing (Vol. 398, pp. 91–98). Springer Verlag. https://doi.org/10.1007/978-81-322-2674-1_9
