DxPU: Large-scale Disaggregated GPU Pools in the Datacenter


Abstract

The rapid adoption of AI and the convenience of cloud services have driven growing demand for GPUs in the cloud. GPUs are typically attached to host servers as PCIe devices, but this fixed pairing of host servers and GPUs is inefficient for resource utilization, upgrades, and maintenance. To address these issues, GPU disaggregation decouples GPUs from host servers: GPUs are aggregated into a pool, and GPU nodes are allocated according to user demand. However, existing GPU disaggregation systems have shortcomings in software-hardware compatibility, disaggregation scope, and capacity. In this article, we present DxPU, a new implementation of datacenter-scale GPU disaggregation. DxPU solves the above problems and can flexibly allocate as many GPU nodes as users demand. To understand the performance overhead introduced by DxPU, we build a performance model for AI-specific workloads. Guided by the modeling results, we develop a prototype system, which has been deployed in the datacenter of a leading cloud provider for a test run. We also conduct detailed experiments to evaluate the performance overhead. The results show that, in most user scenarios, the overhead of DxPU is less than 10% compared with native GPU servers.
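The pooling model the abstract describes (GPUs aggregated into a shared pool, with nodes handed out on user demand and returned later) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation or API; the class and method names (`GpuPool`, `allocate`, `release`) are assumptions for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class GpuPool:
    """Hypothetical sketch of a disaggregated GPU pool.

    The pool tracks free GPU node IDs, grants as many nodes as a
    user requests, and reclaims them when the user releases them.
    """
    free_nodes: set = field(default_factory=set)
    allocations: dict = field(default_factory=dict)  # user -> set of node IDs

    def add_nodes(self, node_ids):
        # Register GPU nodes (e.g. newly attached hardware) with the pool.
        self.free_nodes.update(node_ids)

    def allocate(self, user, count):
        # Grant `count` free GPU nodes to `user`; fail if the pool is short.
        if count > len(self.free_nodes):
            raise RuntimeError("not enough free GPU nodes in the pool")
        granted = {self.free_nodes.pop() for _ in range(count)}
        self.allocations.setdefault(user, set()).update(granted)
        return granted

    def release(self, user):
        # Return all of a user's GPU nodes to the free pool.
        self.free_nodes.update(self.allocations.pop(user, set()))
```

In contrast to a fixed server-GPU pairing, nodes here move freely between users, which is the utilization benefit the abstract attributes to disaggregation.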


Citation (APA)
He, B., Zheng, X., Chen, Y., Li, W., Zhou, Y., Long, X., … Zhang, X. (2023). DxPU: Large-scale Disaggregated GPU Pools in the Datacenter. ACM Transactions on Architecture and Code Optimization, 20(4). https://doi.org/10.1145/3617995
