Machine learning for load balancing in the Linux kernel


Abstract

The OS load balancing algorithm governs the performance gains provided by a multiprocessor computer system. Linux's Completely Fair Scheduler (CFS) tracks per-process load as average CPU utilization to balance work across processor cores. That approach maximizes the utilization of processing time but overlooks contention for lower-level hardware resources. In servers running compute-intensive workloads, an imbalanced demand for these limited resources hinders execution performance. This paper addresses that problem with a machine learning (ML)-based, resource-aware load balancer. We describe (1) low-overhead methods for collecting training data; (2) an ML model based on a multi-layer perceptron that imitates the CFS load balancer using the collected training data; and (3) an in-kernel implementation of inference on the model. Our experiments demonstrate that the proposed model reaches 99% accuracy in making migration decisions while increasing latency by only 1.9 μs.
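
To make the idea concrete, the sketch below shows the kind of computation the abstract describes: a small multi-layer perceptron that outputs a binary migration decision, evaluated with integer fixed-point arithmetic as one would typically do for in-kernel inference where floating point is avoided. This is an illustrative assumption, not the authors' implementation; the layer sizes, feature names, and weights are hypothetical.

    /*
     * Illustrative sketch only, not code from the paper: a one-hidden-layer
     * multi-layer perceptron evaluated with integer fixed-point arithmetic.
     * Layer sizes, feature names, and weights are hypothetical.
     */
    #include <stdio.h>

    #define N_IN     4            /* hypothetical scheduling features       */
    #define N_HIDDEN 8
    #define FX_ONE   (1 << 10)    /* fixed-point scale: 1.0 == 1024         */

    /* Hypothetical trained parameters, pre-quantized to the FX_ONE scale.
     * Left zero-initialized here; a real model would embed learned values. */
    static int w1[N_HIDDEN][N_IN];
    static int b1[N_HIDDEN];
    static int w2[N_HIDDEN];
    static int b2;

    static int relu(int x) { return x > 0 ? x : 0; }

    /* Forward pass: returns 1 to migrate the task, 0 to keep it in place. */
    static int mlp_should_migrate(const int x[N_IN])
    {
        long long acc;
        int h[N_HIDDEN];
        int i, j;

        for (i = 0; i < N_HIDDEN; i++) {
            acc = b1[i];
            for (j = 0; j < N_IN; j++)
                acc += (long long)w1[i][j] * x[j] / FX_ONE;
            h[i] = relu((int)acc);
        }

        acc = b2;
        for (i = 0; i < N_HIDDEN; i++)
            acc += (long long)w2[i] * h[i] / FX_ONE;

        return acc > 0;           /* sign of the output acts as the decision */
    }

    int main(void)
    {
        /* Hypothetical feature vector, e.g. source/destination core load,
         * task utilization, and a cache-hotness proxy, scaled by FX_ONE.  */
        int features[N_IN] = { FX_ONE / 2, FX_ONE / 4, FX_ONE, 0 };

        printf("migrate = %d\n", mlp_should_migrate(features));
        return 0;
    }

The fixed-point formulation reflects a general constraint of in-kernel code rather than any detail reported in the abstract; the actual feature set and model architecture used by the authors are described in the paper itself.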

Citation (APA)

Chen, J., Banerjee, S. S., Kalbarczyk, Z. T., & Iyer, R. K. (2020). Machine learning for load balancing in the Linux kernel. In APSys 2020 - Proceedings of the 2020 ACM SIGOPS Asia-Pacific Workshop on Systems (pp. 67–74). Association for Computing Machinery. https://doi.org/10.1145/3409963.3410492
