An optimized method of HDFS for massive small files storage

Abstract

The development of the Internet of Things (IoT) and Cyber-Physical Systems (CPS) has greatly facilitated many technological applications, but it also drives rapid data growth, much of it in the form of small files. Analyzing and processing large numbers of small files has therefore become a crucial part of IoT and CPS development. The Hadoop Distributed File System (HDFS) is a powerful platform for storing big data, yet it handles small files poorly, suffering from substantial memory consumption on the NameNode and poor access performance. In this paper, a Dynamic Queue of Small Files (DQSF) algorithm is proposed to address these problems. DQSF classifies small files into categories using an analytic hierarchy process that evaluates files in different size ranges against four performance indexes, and it sets the size of the dynamic queue to the value that yields the best system performance. In addition, period classification is applied to preprocess small files before storage, and a prefetching mechanism based on a secondary index is used to process the index tables. Experimental results show that the method effectively reduces memory use and improves the storage efficiency of massive small files, thereby optimizing system performance.
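To make the merge step concrete, below is a minimal Java sketch of the general idea: small files are buffered in a queue and, once the queue fills, merged into a single HDFS SequenceFile while a simple index records where each file went. The class name SmallFileQueue, the fixed capacity standing in for the AHP-derived dynamic queue size, the /dqsf output path, and the in-memory index map are illustrative assumptions, not the paper's implementation; DQSF additionally applies period classification before storage and a prefetching mechanism over the secondary index, which are not shown here.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

import java.io.IOException;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class SmallFileQueue {
    private final Configuration conf = new Configuration();   // picks up fs.defaultFS from the cluster config
    private final int capacity;                                // fixed stand-in for the AHP-tuned dynamic queue size
    private final Queue<Map.Entry<String, byte[]>> queue = new ArrayDeque<>();
    // Simplified secondary index: small-file name -> "mergedFilePath@offset"
    private final Map<String, String> index = new HashMap<>();
    private int batch = 0;

    public SmallFileQueue(int capacity) {
        this.capacity = capacity;
    }

    /** Buffer one small file; merge and flush once the queue is full. */
    public void put(String name, byte[] content) throws IOException {
        queue.add(Map.entry(name, content));
        if (queue.size() >= capacity) {
            flush();
        }
    }

    /** Merge the queued small files into one SequenceFile on HDFS. */
    public void flush() throws IOException {
        if (queue.isEmpty()) {
            return;
        }
        Path merged = new Path("/dqsf/merged-" + (batch++) + ".seq");   // illustrative output path
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(merged),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            while (!queue.isEmpty()) {
                Map.Entry<String, byte[]> e = queue.poll();
                long offset = writer.getLength();                       // position of this record in the merged file
                writer.append(new Text(e.getKey()), new BytesWritable(e.getValue()));
                index.put(e.getKey(), merged + "@" + offset);
            }
        }
    }

    /** Look up where a small file was stored after merging. */
    public String locate(String name) {
        return index.get(name);
    }
}

Merging many small files into one large file is where the memory savings come from: the NameNode keeps an in-memory metadata object for every file and block, so storing one merged file instead of thousands of small ones sharply reduces that footprint.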

Cite (APA)

Jing, W., Tong, D., Chen, G., Zhao, C., & Zhu, L. (2018). An optimized method of HDFS for massive small files storage. Computer Science and Information Systems, 15(3), 533–548. https://doi.org/10.2298/CSIS171015021J
