Ribbon: Cost-effective and QoS-aware deep learning model inference using a diverse pool of cloud computing instances


Abstract

Deep learning model inference is a key service in many businesses and scientific discovery processes. This paper introduces Ribbon, a novel deep learning inference serving system that meets two competing objectives: quality-of-service (QoS) targets and cost-effectiveness. The key idea behind Ribbon is to intelligently employ a diverse set of cloud computing instances (heterogeneous instances) to meet the QoS target and maximize cost savings. Ribbon devises a Bayesian Optimization-driven strategy that helps users build the optimal set of heterogeneous instances for their model inference service needs on cloud computing platforms, and it demonstrates its superiority over existing inference serving systems that use homogeneous instance pools. Ribbon saves up to 16% of the inference service cost for different learning models, including emerging deep learning recommender system models and drug-discovery-enabling models.
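To make the pool-selection problem concrete, the following minimal sketch searches for the cheapest mix of instance types that still meets a throughput-based QoS target. All instance names, prices, and throughput numbers are invented for illustration, and the brute-force loop stands in for the Bayesian Optimization search the paper actually proposes, which samples far fewer configurations.

```python
import itertools

# Hypothetical per-hour cost and sustained throughput (queries/s) for
# three illustrative instance types; the numbers are made up.
INSTANCES = {
    "cpu.large": {"cost": 0.10, "qps": 40},
    "gpu.small": {"cost": 0.50, "qps": 250},
    "gpu.large": {"cost": 1.20, "qps": 700},
}

def pool_cost(pool):
    """Total hourly cost of a pool, given as {instance_type: count}."""
    return sum(INSTANCES[t]["cost"] * n for t, n in pool.items())

def meets_qos(pool, target_qps):
    """Simplified QoS check: aggregate throughput covers the load."""
    return sum(INSTANCES[t]["qps"] * n for t, n in pool.items()) >= target_qps

def cheapest_pool(target_qps, max_per_type=4):
    """Exhaustively search heterogeneous pools for the cheapest
    configuration that satisfies the QoS target. Ribbon replaces a
    search like this with Bayesian Optimization over configurations."""
    best, best_cost = None, float("inf")
    counts = range(max_per_type + 1)
    for combo in itertools.product(counts, repeat=len(INSTANCES)):
        pool = dict(zip(INSTANCES, combo))
        if meets_qos(pool, target_qps) and pool_cost(pool) < best_cost:
            best, best_cost = pool, pool_cost(pool)
    return best, best_cost

best, cost = cheapest_pool(target_qps=900)
```

Under these made-up numbers, a mixed pool (one large GPU plus one small GPU) beats any homogeneous pool, which is the intuition behind using heterogeneous instances.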

Citation (APA)

Li, B., Roy, R. B., Patel, T., Gadepally, V., Gettings, K., & Tiwari, D. (2021). Ribbon: Cost-effective and QoS-aware deep learning model inference using a diverse pool of cloud computing instances. In International Conference for High Performance Computing, Networking, Storage and Analysis, SC. IEEE Computer Society. https://doi.org/10.1145/3458817.3476168
