Characterization of a big data storage workload in the cloud

Abstract

The proliferation of big data processing platforms has led to radically different system designs, such as MapReduce and the newer Spark. Understanding the workloads of such systems facilitates tuning and could foster new designs. However, whereas MapReduce workloads have been characterized extensively, relatively little public knowledge exists about the characteristics of Spark workloads in representative environments. To address this problem, in this work we collect and analyze a 6-month Spark workload from a major provider of big data processing services, Databricks. Our analysis focuses on a number of key features, such as the long-term trends of reads and modifications, the statistical properties of reads, and the popularity of clusters and of file formats. Overall, we present numerous findings that could form the basis of new systems studies and designs. Our quantitative evidence and its analysis suggest the existence of daily and weekly load imbalances, of heavy-tailed and bursty behaviour, of the relative rarity of modifications, and of the proliferation of big-data-specific formats.
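
As a minimal sketch of the kind of analysis the abstract alludes to, the snippet below shows one common way to check a trace of read sizes for heavy-tailed behaviour: an empirical complementary CDF plus a Hill estimate of the tail index. This is an illustration only, not the authors' pipeline; the input file name and the bytes_read column are hypothetical.

```python
import numpy as np
import pandas as pd

def ccdf(values):
    """Return (x, P[X > x]) for an empirical complementary CDF."""
    x = np.sort(np.asarray(values, dtype=float))
    p = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return x, p

def hill_tail_index(values, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    x = np.sort(np.asarray(values, dtype=float))
    threshold = x[-(k + 1)]  # the (k+1)-th largest value, used as the tail cutoff
    return k / np.sum(np.log(x[-k:] / threshold))

# Hypothetical trace: one row per read operation, with a 'bytes_read' column.
reads = pd.read_csv("spark_read_trace.csv")
sizes = reads["bytes_read"].to_numpy()

x, p = ccdf(sizes)
alpha = hill_tail_index(sizes, k=max(10, len(sizes) // 100))
print(f"Estimated tail index alpha = {alpha:.2f}")
# A roughly straight CCDF on log-log axes with a small tail index (often
# below 3) is consistent with the heavy-tailed behaviour reported here.
```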

Citation (APA)

Talluri, S., Abad, C. L., Łuszczak, A., & Iosup, A. (2019). Characterization of a big data storage workload in the cloud. In ICPE 2019 - Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering (pp. 33–44). Association for Computing Machinery, Inc. https://doi.org/10.1145/3297663.3310302
