D-Simplexed: Adaptive Delaunay Triangulation for Performance Modeling and Prediction on Big Data Analytics


Abstract

Big Data processing systems (e.g., Spark) have a number of resource configuration parameters, such as memory size, CPU allocation, and the number of running nodes. Regular users and even expert administrators struggle to understand the relationship between different parameter configurations and the overall performance of the system. In this paper, we address this challenge by proposing a performance prediction framework, called d-Simplexed, to build performance models over configurable parameters on Spark. We take inspiration from the field of Computational Geometry and construct a d-dimensional mesh using Delaunay Triangulation over a selected set of features. From this mesh, we predict execution time for various feature configurations. To minimize the time and resources spent building a bootstrap model over a large number of configuration values, we propose an adaptive sampling technique that collects only as many training points as required. Our evaluation on a cluster of computers using the WordCount, PageRank, Kmeans, and Join workloads from the HiBench benchmark suite shows that we can achieve an estimation error rate below 5 percent while sampling less than 1 percent of the data.
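To illustrate the prediction step described above: once the Delaunay mesh is built, the runtime at a new configuration is interpolated linearly within the simplex that contains it. The following is a minimal 2-D sketch of that idea, not the authors' implementation; the configuration axes (memory in GB, executor count) and the runtime values are hypothetical.

```python
# Sketch of Delaunay-style piecewise-linear runtime prediction.
# In 2-D, a simplex of the mesh is a triangle of measured configurations;
# the runtime at a query configuration inside it is the barycentric
# (linear) interpolation of the runtimes at its three vertices.

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def predict(p, vertices, runtimes):
    """Interpolate the runtime at configuration p inside one triangle."""
    weights = barycentric(p, *vertices)
    return sum(w * t for w, t in zip(weights, runtimes))

# Hypothetical measured configurations: (memory GB, executors) -> seconds.
verts = [(2.0, 2.0), (8.0, 2.0), (2.0, 8.0)]
times = [120.0, 60.0, 90.0]
print(predict((4.0, 4.0), verts, times))  # linear estimate at (4 GB, 4 executors)
```

In d dimensions the same interpolation runs over a (d+1)-vertex simplex; in practice a library such as SciPy's `Delaunay` / `LinearNDInterpolator` handles mesh construction and simplex lookup.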

Citation (APA)
Chen, Y., Goetsch, P., Hoque, M. A., Lu, J., & Tarkoma, S. (2022). D-Simplexed: Adaptive Delaunay Triangulation for Performance Modeling and Prediction on Big Data Analytics. IEEE Transactions on Big Data, 8(2), 458–469. https://doi.org/10.1109/TBDATA.2019.2948338
