The research community has made significant advances toward realizing self-tuning cloud caches; nevertheless, existing products still require manual expert tuning to maximize performance. Cloud (software) caches are built to serve requests swiftly; it is therefore critical to avoid adding costly functionality that is not directly on the request-serving path. We show that serverless computing cloud services can be leveraged to solve the complex optimization problems that arise during self-tuning loops, and can thus be used to optimize cloud caches for free. To illustrate that our approach is feasible and useful, we implement SPREDS (Self-Partitioning REDiS), a modified version of Redis that optimizes memory management in the multi-instance Redis scenario. A cost analysis shows that the serverless computing approach can lead to significant cost savings: running the controller as a serverless microservice costs 0.85% of the always-on alternative. Through this case study, we make a strong case for implementing the controllers of autonomic systems using a serverless computing approach.
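To make the architecture concrete, below is a minimal sketch of how a self-partitioning control loop could hand its optimization step to a serverless function. It is not the paper's SPREDS implementation: the Lambda function name ("cache-partitioner"), the instance endpoints, the statistics passed to the solver, and the use of CONFIG SET maxmemory to apply the result are all assumptions made only for illustration, using the standard boto3 and redis-py libraries.

```python
"""Sketch of a serverless self-partitioning control loop (illustrative only).

Assumptions (not from the paper): the partitioning optimization runs in a
hypothetical AWS Lambda function named "cache-partitioner", the caches are
plain Redis instances reachable from this script, and the new partition
sizes are applied with CONFIG SET maxmemory.
"""
import json
import time

import boto3   # AWS SDK, used to invoke the Lambda-based solver
import redis   # redis-py client

INSTANCES = [("10.0.0.1", 6379), ("10.0.0.2", 6379)]  # hypothetical endpoints
TOTAL_MEMORY = 8 * 1024 ** 3                           # bytes to split among instances
PERIOD_S = 300                                         # re-optimize every 5 minutes

lambda_client = boto3.client("lambda")


def collect_stats(client: redis.Redis) -> dict:
    """Gather the per-instance statistics the optimizer needs."""
    mem = client.info("memory")
    stats = client.info("stats")
    return {
        "used_memory": mem["used_memory"],
        "keyspace_hits": stats["keyspace_hits"],
        "keyspace_misses": stats["keyspace_misses"],
    }


def solve_partitioning(per_instance_stats: list) -> list:
    """Invoke the serverless optimizer; it returns one memory limit per instance."""
    response = lambda_client.invoke(
        FunctionName="cache-partitioner",  # hypothetical Lambda name
        Payload=json.dumps({"total_memory": TOTAL_MEMORY,
                            "instances": per_instance_stats}),
    )
    return json.loads(response["Payload"].read())["limits"]


def control_loop() -> None:
    clients = [redis.Redis(host=h, port=p) for h, p in INSTANCES]
    while True:
        stats = [collect_stats(c) for c in clients]
        limits = solve_partitioning(stats)
        for client, limit in zip(clients, limits):
            client.config_set("maxmemory", limit)  # apply the new partition size
        time.sleep(PERIOD_S)


if __name__ == "__main__":
    control_loop()
```

The design point this sketch tries to convey is the one the abstract argues for: the optimization runs (and is billed) only when invoked, so the controller incurs per-invocation cost instead of the cost of an always-on machine.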
Citation:
Boza, E. F., Andrade, X., Cedeno, J., Murillo, J., Aragon, H., Abad, C. L., & Abad, A. G. (2020). On implementing autonomic systems with a serverless computing approach: The case of self-partitioning cloud caches. Computers, 9(1). https://doi.org/10.3390/computers9010014