Storage Service Reliability and Availability Predictions of Hadoop Distributed File System

Abstract

Hadoop is a de facto standard for Big Data storage and provides a complete suite of components for Big Data processing. The Hadoop Distributed File System (HDFS), the fundamental module of Hadoop, has evolved to deliver fault-tolerant data storage services in the cloud. This work proposes a precise mathematical model of HDFS and estimates its data storage service availability and reliability. To this end, a stochastic Petri net (SPN)-based dependability modelling strategy is adopted, and a structural decomposition technique is applied to address the state-space complexity of the resulting model. The proposed model is useful for measuring crucial quality-of-service parameters, namely storage service reliability and availability, for emerging distributed data storage systems in the cloud.
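
For context, the dependability measures named in the abstract can be illustrated with a minimal sketch: an SPN of a repairable storage component is analysed through its underlying continuous-time Markov chain, whose steady-state distribution yields availability and whose failure rate yields reliability over a mission time. The two-state UP/DOWN structure and the failure/repair rates below are illustrative assumptions only and are not taken from the paper's HDFS model.

import numpy as np

# Minimal two-state availability sketch (UP <-> DOWN) for a single storage
# service component, e.g. an HDFS NameNode. The failure rate (lam) and
# repair rate (mu) are arbitrary placeholders, not figures from the paper.
lam = 1 / 1000.0   # assumed failure rate: one failure per 1000 hours
mu = 1 / 4.0       # assumed repair rate: four-hour mean time to repair

# Generator matrix of the continuous-time Markov chain underlying the SPN.
# State 0 = UP, state 1 = DOWN.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Steady-state distribution pi solves pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]                  # long-run fraction of time in UP
reliability_24h = np.exp(-lam * 24)   # P(no failure within 24 hours)

print(f"Steady-state availability: {availability:.6f}")
print(f"24-hour reliability:       {reliability_24h:.6f}")

With the assumed rates this gives the textbook result availability = mu / (lam + mu) ≈ 0.996; the paper's contribution is a far more detailed SPN model of HDFS, whose state space is then tamed by structural decomposition.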

Citation (APA)
Chattaraj, D., Bhagat, S., & Sarma, M. (2020). Storage Service Reliability and Availability Predictions of Hadoop Distributed File System. In Lecture Notes in Mechanical Engineering (pp. 617–626). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-13-9008-1_52
