Abstract
Inference is the production stage of a machine learning workflow, in which a trained model is used to predict on real-world data. A recommendation system improves customer experience by displaying the most relevant items based on a customer's historical behavior. Machine learning models built for recommendation systems are deployed either on-premise or migrated to a cloud for real-time or batch inference. A recommendation system should be cost effective while honoring service level agreements (SLAs). In this work we discuss the on-premise implementation of our recommendation system, iPrescribe. We present a methodology for migrating the on-premise implementation to a cloud using an ML workflow, along with a study of the recommendation model's performance when deployed on different types of virtual instances.
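A minimal sketch of the kind of measurement the abstract alludes to: timing single-request (real-time) and batch inference for a trained model and comparing it against an SLA budget. This is not the authors' iPrescribe code; the logistic-regression model, feature dimensions, and the 50 ms SLA threshold are assumptions for illustration. In practice such a script would be run on each candidate virtual instance type to weigh cost against latency.

    import time
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Train a stand-in model on synthetic data (placeholder for the real recommender).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(10_000, 50))
    y_train = rng.integers(0, 2, size=10_000)
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)

    SLA_MS = 50.0  # hypothetical per-request latency budget in milliseconds

    # Real-time inference: one record per request.
    x = rng.normal(size=(1, 50))
    start = time.perf_counter()
    model.predict_proba(x)
    single_ms = (time.perf_counter() - start) * 1000

    # Batch inference: score many records in a single call.
    X_batch = rng.normal(size=(100_000, 50))
    start = time.perf_counter()
    model.predict_proba(X_batch)
    batch_ms = (time.perf_counter() - start) * 1000

    print(f"single-request latency: {single_ms:.2f} ms (SLA {SLA_MS} ms)")
    print(f"batch latency: {batch_ms:.2f} ms for {len(X_batch)} records "
          f"({batch_ms / len(X_batch):.4f} ms/record)")
    print("SLA met" if single_ms <= SLA_MS else "SLA violated")

Running this on several instance types gives a simple latency-per-dollar comparison, which is the kind of trade-off the paper's study of virtual instance types addresses.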
Citation
Chahal, D., Ojha, R., Choudhury, S. R., & Nambiar, M. (2020). Migrating a recommendation system to cloud using ML workflow. In ICPE 2020 - Companion of the ACM/SPEC International Conference on Performance Engineering (pp. 1–4). Association for Computing Machinery, Inc. https://doi.org/10.1145/3375555.3384423