Computational sprinting speeds up query execution by increasing power usage for short bursts. A sprinting policy decides when and for how long to sprint; poor policies can significantly inflate response time. We propose a model-driven approach that chooses among sprinting policies based on their expected response time. However, sprinting alters query executions at runtime, creating a complex dependency between queuing and processing time. Our performance modeling approach employs offline profiling, machine learning, and first-principles simulation; collectively, these techniques capture the effects of sprinting on response time. We validated our modeling approach with 3 sprinting mechanisms across 9 workloads. It predicted response time with median error below 4% in most tests and median error of 11% in the worst case. We demonstrated model-driven sprinting for cloud providers seeking to colocate multiple workloads on AWS Burstable Instances while meeting service level objectives. Model-driven sprinting uncovered policies that achieved response time goals, allowing more workloads to colocate on a node. Compared to AWS Burstable policies, our approach increased revenue per node by 1.6X.
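The abstract describes comparing sprinting policies by their expected response time using first-principles simulation. As a rough illustration only (not the authors' actual simulator or policies), the sketch below simulates a toy single-server FIFO queue in which sprinting divides a job's service time by a speedup factor until a sprint budget is exhausted, then picks the candidate policy with the lowest simulated mean response time. All names (`simulate`, the policies, the speedup and budget values) are hypothetical.

```python
def simulate(policy, jobs, speedup=2.0, budget=5.0):
    """Toy single-server FIFO queue (illustrative only).

    `jobs` is a list of (arrival_time, service_time) pairs in arrival order.
    `policy(queue_delay, budget_left)` returns True to sprint the next job;
    sprinting divides its service time by `speedup` and charges the
    (shortened) service time against the sprint budget.
    Returns the mean response time (finish time minus arrival time).
    """
    clock = 0.0
    budget_left = budget
    total_response = 0.0
    for arrival, service in jobs:
        start = max(clock, arrival)           # wait for the server to free up
        queue_delay = start - arrival
        if budget_left > 0 and policy(queue_delay, budget_left):
            service = service / speedup       # sprint: faster processing
            budget_left -= service            # spend the sprint budget
        clock = start + service
        total_response += clock - arrival
    return total_response / len(jobs)

# Hypothetical overloaded trace: arrivals every 0.5s, 1.0s base service.
jobs = [(i * 0.5, 1.0) for i in range(20)]

policies = {
    "never":     lambda qd, b: False,
    "always":    lambda qd, b: True,
    "if-queued": lambda qd, b: qd > 0,
}

# Model-driven choice: pick the policy with the lowest predicted response time.
best = min(policies, key=lambda name: simulate(policies[name], jobs))
```

In this toy trace the always-sprint policy drains the queue early and wins; the point is only that a simulator lets policies be ranked by expected response time before deployment, which is the role the paper's models play.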
Citation
Morris, N., Stewart, C., Chen, L., Birke, R., & Kelley, J. (2018). Model-Driven Computational Sprinting. In Proceedings of the 13th EuroSys Conference, EuroSys 2018 (Vol. 2018-January). Association for Computing Machinery, Inc. https://doi.org/10.1145/3190508.3190543