Inexpensive storage and more powerful processors have resulted in a proliferation of data that needs to be reliably backed up. Network resource limitations make it increasingly difficult to back up a distributed file system on a nightly or even weekly basis. By using delta compression algorithms, which minimally encode a version of a file using only the bytes that have changed, a backup system can compress the data sent to a server. With the delta backup technique, we can achieve significant savings in network transmission time over previous techniques. Our measurements indicate that file system data may, on average, be compressed to within 10% of its original size with this method, and that approximately 45% of all changed files were also backed up within the previous week. Based on our measurements, we conclude that a small file store on the client, containing copies of previously backed-up files, can be used to retain the versions needed to generate delta files. To reduce the load on the backup server, we implement a modified version storage architecture, version jumping, that allows us to restore delta-encoded file versions with at most two accesses to tertiary storage. This minimizes server workload and network transmission time on file restore.
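The two ideas the abstract relies on, delta encoding against an earlier version and a version-jumping layout that bounds restores at two reads, can be illustrated with a small sketch. The Python below is not the algorithm or storage format from the paper: `make_delta`, `apply_delta`, and `VersionStore` are hypothetical names, the toy codec finds only a single changed region rather than the many matching regions a real delta compressor locates, and the store assumes versions arrive in order starting at 0.

```python
# Minimal sketch only -- not the paper's implementation. make_delta,
# apply_delta, and VersionStore are hypothetical names for illustration.

def make_delta(reference: bytes, version: bytes) -> list:
    """Encode `version` as copy/insert commands against `reference`.

    Toy scheme: one copy for the shared prefix, one insert for the
    changed middle, one copy for the shared suffix.
    """
    limit = min(len(reference), len(version))
    p = 0
    while p < limit and reference[p] == version[p]:
        p += 1
    s = 0
    while (s < limit - p
           and reference[len(reference) - 1 - s] == version[len(version) - 1 - s]):
        s += 1
    delta = []
    if p:
        delta.append(("copy", 0, p))                   # unchanged bytes at the front
    middle = version[p:len(version) - s]
    if middle:
        delta.append(("insert", middle))               # bytes that actually changed
    if s:
        delta.append(("copy", len(reference) - s, s))  # unchanged bytes at the end
    return delta


def apply_delta(reference: bytes, delta: list) -> bytes:
    """Rebuild a version from its reference plus the delta commands."""
    out = bytearray()
    for cmd in delta:
        if cmd[0] == "copy":
            _, offset, length = cmd
            out += reference[offset:offset + length]
        else:                                          # ("insert", data)
            out += cmd[1]
    return bytes(out)


class VersionStore:
    """Toy version-jumping layout: every `jump`-th version is stored whole
    (a reference); the others are deltas against the most recent reference,
    so restoring any version reads at most two stored objects."""

    def __init__(self, jump: int = 4):
        self.jump = jump
        self.objects = {}  # version number -> ("full", data) or ("delta", ref_no, delta)

    def backup(self, n: int, data: bytes) -> None:
        if n % self.jump == 0:
            self.objects[n] = ("full", data)
        else:
            ref_no = (n // self.jump) * self.jump      # most recent whole version
            reference = self.objects[ref_no][1]
            self.objects[n] = ("delta", ref_no, make_delta(reference, data))

    def restore(self, n: int) -> bytes:
        kind, *rest = self.objects[n]
        if kind == "full":
            return rest[0]                             # one access
        ref_no, delta = rest
        reference = self.objects[ref_no][1]            # second (and last) access
        return apply_delta(reference, delta)


if __name__ == "__main__":
    store = VersionStore(jump=4)
    store.backup(0, b"hello world")
    store.backup(1, b"hello brave world")
    assert store.restore(1) == b"hello brave world"
```

Under a layout like this, a restore never applies a chain of deltas: the worst case is one read of a reference version plus one read of a delta, which corresponds to the at-most-two-accesses bound the abstract describes for tertiary storage.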
Citation
Burns, R. C., & Long, D. D. E. (1997). Efficient distributed backup with delta compression. In Proceedings of the Annual Workshop on I/O in Parallel and Distributed Systems, IOPADS (pp. 26–36). ACM. https://doi.org/10.1145/266220.266223