Virtualization technology enables server consolidation and has given an impetus to low-cost green data centers. However, current hypervisors do not provide adequate support for real-time applications, and this has limited the adoption of virtualization in some domains. Soft real-time applications, such as media-based ones, are impeded by components of virtualization, including low-performance virtualized I/O, increased scheduling latency, and shared-cache contention. The virtual machine scheduler is central to all of these issues. The goal of this paper is to make the virtual machine scheduler more soft-real-time friendly. We improve two aspects of the VMM scheduler: managing scheduling latency as a first-class resource and managing shared caches. We use enterprise IP telephony as an illustrative soft real-time workload and design a scheduler S that incorporates knowledge of soft real-time applications in all aspects of the scheduler to support responsiveness. To this end, we first define a laxity value that can be interpreted as the target scheduling latency that the workload desires. The load balancer is also designed to minimize the latency for real-time tasks. For cache management, we take cache affinity into account for real-time tasks and load-balance accordingly to prevent cache thrashing. We measured cache misses and demonstrated that cache management is essential for soft real-time tasks. Although our scheduler S employs a different design philosophy, interestingly it can be implemented with simple modifications to the Xen hypervisor's credit scheduler. Our experiments demonstrate that the Xen scheduler with our modifications supports soft real-time guests well, without penalizing non-real-time domains. © 2010 ACM.
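To make the laxity idea concrete, the following is a minimal, hypothetical sketch of how a scheduler might prefer soft real-time vCPUs by their remaining laxity (target scheduling latency minus time already spent waiting), falling back to credit order for non-real-time vCPUs. The `struct vcpu` fields and `pick_next` function are illustrative assumptions, not the paper's actual Xen implementation.

```c
/* Hypothetical sketch of laxity-aware vCPU selection, loosely inspired by
 * the abstract's description; not the paper's actual Xen modification. */
#include <assert.h>
#include <limits.h>

struct vcpu {
    int credits;   /* Xen-credit-style proportional share (illustrative) */
    int laxity_us; /* target scheduling latency in microseconds; <0 = non-RT */
    int wait_us;   /* time this vCPU has been runnable but not scheduled */
};

/* Pick the index of the runnable vCPU to dispatch next: any soft real-time
 * vCPU beats any non-RT vCPU, and among RT vCPUs the one with the smallest
 * remaining laxity (laxity - wait) wins; among non-RT vCPUs, the one with
 * the most credits wins, as in the plain credit scheduler. */
int pick_next(const struct vcpu *v, int n)
{
    int best = -1, best_slack = INT_MAX, best_credit = INT_MIN;
    for (int i = 0; i < n; i++) {
        if (v[i].laxity_us >= 0) { /* soft real-time vCPU */
            int slack = v[i].laxity_us - v[i].wait_us;
            if (best < 0 || v[best].laxity_us < 0 || slack < best_slack) {
                best = i;
                best_slack = slack;
            }
        } else if (best < 0 ||
                   (v[best].laxity_us < 0 && v[i].credits > best_credit)) {
            best = i;
            best_credit = v[i].credits;
        }
    }
    return best;
}
```

The key design point the sketch illustrates is that laxity is consumed by waiting: a real-time vCPU that has almost exhausted its target latency is dispatched ahead of one that still has slack, regardless of credits.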
Lee, M., Krishnakumar, A. S., Krishnan, P., Singh, N., & Yajnik, S. (2010). Supporting soft real-time tasks in the Xen hypervisor. In ACM SIGPLAN Notices (Vol. 45, pp. 97–108). https://doi.org/10.1145/1837854.1736012