Delay-Optimal Scheduling of VMs in a Queueing Cloud Computing System with Heterogeneous Workloads

23 citations · 29 Mendeley readers

This article is free to access.

Abstract

This paper studies virtual machine (VM) scheduling in a queueing cloud computing system with stochastic arrivals of heterogeneous jobs, taking the jobs' delay requirements into account. Delay-optimal VM scheduling in such a system is formulated as a multi-resource multi-class problem that minimizes the average job completion time, which is NP-hard in general. To solve this problem, we first propose a queueing model that buffers jobs of the same VM type in one virtual queue. The queueing model then divides VM scheduling into two parallel low-complexity algorithms, i.e., intra-queue buffering and inter-queue scheduling. A min-min best fit (MM-BF) policy schedules the jobs across the different queues so as to minimize the remaining system resources, while a shortest-job-first (SJF) policy buffers the job requests within each queue in ascending order of job length. To avoid starvation of long-duration jobs under SJF-MMBF, we further propose a queue-length-based MaxWeight (QMW) policy derived from Lyapunov drift analysis that minimizes the queue lengths of VM jobs; the resulting scheme is called SJF-QMW. Simulation results show that both SJF-MMBF and SJF-QMW achieve low delay in terms of average job completion time and high throughput in terms of job hosting ratio.
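The sketch below is a minimal illustration, not the authors' implementation, of the two-level structure the abstract describes: jobs of the same VM type are buffered in one virtual queue kept in shortest-job-first order, and an inter-queue policy then picks which head-of-line job to admit. The names Job, VirtualQueue, fits, and schedule_maxweight are hypothetical, and the inter-queue rule shown is a generic queue-length-based MaxWeight heuristic; the paper's exact MM-BF and QMW weight definitions are not reproduced here.

```python
# Sketch of per-VM-type virtual queues with SJF intra-queue ordering and a
# queue-length-based inter-queue pick, assuming simple dict-valued resource
# demands. Illustrative only; not the paper's MM-BF/QMW algorithms.

import heapq
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    vm_type: int
    length: float   # estimated service time, used by SJF
    demand: dict    # resource demand, e.g. {"cpu": 2, "mem": 4}

@dataclass
class VirtualQueue:
    vm_type: int
    heap: list = field(default_factory=list)  # min-heap keyed by job length -> SJF order

    def push(self, job: Job) -> None:
        heapq.heappush(self.heap, (job.length, job.job_id, job))

    def head(self):
        return self.heap[0][2] if self.heap else None

    def pop(self) -> Job:
        return heapq.heappop(self.heap)[2]

    def __len__(self) -> int:
        return len(self.heap)

def fits(job: Job, capacity: dict) -> bool:
    # A job fits if every requested resource is still available.
    return all(capacity.get(r, 0) >= d for r, d in job.demand.items())

def schedule_maxweight(queues, capacity):
    """Repeatedly serve the head-of-line job of the longest queue whose head
    still fits the remaining capacity (queue length used as the weight)."""
    scheduled = []
    while True:
        candidates = [q for q in queues if len(q) and fits(q.head(), capacity)]
        if not candidates:
            break
        q = max(candidates, key=len)
        job = q.pop()
        for r, d in job.demand.items():
            capacity[r] -= d
        scheduled.append(job)
    return scheduled
```

Serving the most backlogged queue first in this sketch reflects the motivation the abstract gives for QMW: a pure SJF ordering can starve long-duration jobs, whereas weighting queues by their length keeps every job type's backlog bounded.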

Citation (APA)
Guo, M., Guan, Q., Chen, W., Ji, F., & Peng, Z. (2022). Delay-Optimal Scheduling of VMs in a Queueing Cloud Computing System with Heterogeneous Workloads. IEEE Transactions on Services Computing, 15(1), 110–123. https://doi.org/10.1109/TSC.2019.2920954
