Abstract
Big Data processing systems (e.g., Spark) expose many resource configuration parameters, such as memory size, CPU allocation, and the number of running nodes. Regular users, and even expert administrators, struggle to understand how different parameter configurations relate to the overall performance of the system. In this paper, we address this challenge with a performance prediction framework, called D-Simplexed, that builds performance models over configurable parameters on Spark. Taking inspiration from Computational Geometry, we construct a $d$-dimensional mesh using Delaunay Triangulation over a selected set of features, and from this mesh we predict execution time for arbitrary feature configurations. To minimize the time and resources spent bootstrapping a model over a large space of configuration values, we propose an adaptive sampling technique that collects as few training points as possible. Our evaluation on a cluster of computers using the WordCount, PageRank, Kmeans, and Join workloads from the HiBench benchmarking suite shows that we achieve a prediction error below 5 percent while sampling less than 1 percent of the data.
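As a rough illustration of the core idea (not the authors' implementation, and the adaptive sampling step is omitted), the sketch below uses SciPy to triangulate a toy two-dimensional configuration space and predict execution time for an unseen configuration by linear (barycentric) interpolation within the enclosing simplex. The configuration values and runtimes are hypothetical:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Hypothetical training points: (memory in GB, number of cores)
# configurations and their measured job execution times in seconds.
configs = np.array([
    [2, 2], [2, 8], [8, 2], [8, 8], [4, 4],
], dtype=float)
runtimes = np.array([120.0, 80.0, 95.0, 50.0, 70.0])

# Build a Delaunay triangulation over the configuration space and
# interpolate linearly inside each simplex using barycentric weights.
tri = Delaunay(configs)
predict = LinearNDInterpolator(tri, runtimes)

# Predicted execution time for an unseen configuration (4 GB, 6 cores).
print(float(predict(4.0, 6.0)))
```

Queries outside the convex hull of the sampled configurations return NaN with this interpolator, which is one reason the paper's sampling strategy matters: the mesh must cover the configuration region of interest.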
Chen, Y., Goetsch, P., Hoque, M. A., Lu, J., & Tarkoma, S. (2022). D-Simplexed: Adaptive Delaunay Triangulation for Performance Modeling and Prediction on Big Data Analytics. IEEE Transactions on Big Data, 8(2), 458–469. https://doi.org/10.1109/TBDATA.2019.2948338