GPGPU Virtualization Techniques a Comparative Survey

  • Alyas M
  • Hassan H

Abstract

Graphics Processing Units (GPUs) are being adopted in many High Performance Computing (HPC) facilities because their massively parallel architecture and extraordinary computing power make it possible to accelerate general-purpose applications from many domains. A general-purpose GPU (GPGPU) is a GPU that, in addition to traditional graphics rendering, performs computations historically handled by the central processing unit (CPU) in order to accelerate applications. However, GPUs have some limitations: higher acquisition costs, larger space requirements, more powerful energy supplies, and typically low utilization for most workloads. This creates the need for GPU virtualization, which shares an acquired GPU among virtual machines to maximize its use, reduce power consumption, and minimize costs. This study comparatively reviews recent GPU virtualization techniques targeted at general-purpose acceleration, including API remoting and para-, full, and hardware-based virtualization.

1. Introduction & Background

Since the start of the 21st century, HPC programmers and researchers have embraced a computing model that combines two architectures: (i) multi-core processors with powerful, general-purpose cores, and (ii) many-core application accelerators. The dominant example of an accelerator is the GPU, whose large number of processing elements (cores) can boost the performance of HPC applications through a highly parallel processing paradigm [3]. Because of the high computational cost of current compute-intensive applications, GPUs are considered an efficient means of accelerating their execution through parallel programming. Present-day GPUs excel at rendering graphics, and their highly parallel architecture also makes them more efficient than traditional CPUs for a variety of compute-intensive algorithms [4].
High-end computing units come with GPUs that contain a very large number of small computing cores backed by high-bandwidth private embedded memory [1]. HPC has become a must-have technology for the most demanding applications in scientific fields (high-energy physics, computer science, weather and climate, computational chemistry, medicine, bioinformatics and genomics), engineering (computational fluid dynamics, energy and aerospace), cryptography and security, economics (market simulations, basket analysis and predictive analytics), creative arts and design (compute-intensive image processing, very large 3D rendering and motion graphics), and graphics acceleration [2]. Traditionally, general-purpose computations such as additions, subtractions, multiplications, divisions, shifts, and matrix operations are performed by the CPU, but the growth of GPU programming frameworks such as the Compute Unified Device Architecture (CUDA), OpenACC, OpenGL, and OpenCL [5], together with the high computational power of the GPU, has made it the preferred choice of HPC programmers. In GPGPU-accelerated applications, performance is usually boosted by dividing the application into a compute-intensive portion and the rest, and off-loading the compute-intensive portion to the GPU for parallel execution [1]. To carry this out, programmers must define which parts of the application are executed by the CPU and which functions (kernels) are executed by the GPGPU [1].
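The CPU/GPU division of labor described above can be sketched in CUDA. This is a minimal illustrative example, not taken from the paper: the kernel name, array sizes, and launch parameters are assumptions chosen for clarity. The host (CPU) code prepares data and chooses what to off-load, while the `__global__` kernel is the compute-intensive portion executed in parallel on the GPU.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative kernel: the compute-intensive portion off-loaded to the GPU.
// Each GPU thread handles one array element in parallel.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;              // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Host (CPU) side: allocate and initialize input data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) side: allocate memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel: this is the part the programmer marks for GPU execution.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host, then clean up.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The explicit `cudaMemcpy` transfers make the CPU/GPU boundary visible; it is exactly this boundary that the API-remoting virtualization techniques surveyed in this paper intercept and forward between virtual machines and a physical GPU.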

Alyas, M., & Hassan, H. (2018). GPGPU Virtualization Techniques a Comparative Survey. VAWKUM Transactions on Computer Sciences, 15(3), 99. https://doi.org/10.21015/vtcs.v15i3.521
