The development of Mellanox/NVIDIA GPUDirect over InfiniBand - A new model for GPU to GPU communications

Abstract

The usage and adoption of general-purpose GPUs (GPGPU) in HPC systems is increasing due to the unparalleled performance advantage of the GPUs and their ability to fulfill the ever-increasing demand for floating-point operations. While the GPU can offload many of an application's parallel computations, the system architecture of a GPU-CPU-InfiniBand server requires the CPU to initiate and manage memory transfers between remote GPUs over the high-speed InfiniBand network. In this paper we introduce GPUDirect, a new technology that enables Tesla GPUs to transfer data over InfiniBand without CPU involvement or buffer copies, dramatically reducing GPU communication time and increasing overall system performance and efficiency. We also present the first exploration of the performance benefits of GPUDirect using the Amber and LAMMPS applications. © Springer-Verlag 2011.
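
The abstract's central claim is architectural: without GPUDirect, the CPU must initiate and manage every transfer between remote GPUs, staging data through host buffers before the InfiniBand HCA can send it. As a hedged illustration of the programming model this line of work enables (not an example from the paper, and assuming a CUDA-aware MPI library built with GPUDirect support; buffer size and variable names are arbitrary), an application can hand a device pointer directly to MPI and leave the GPU-to-InfiniBand data movement to the stack:

    /* Hedged sketch: assumes a CUDA-aware MPI (e.g., Open MPI or MVAPICH2
       built with GPUDirect support) and two ranks on InfiniBand-connected
       nodes, each with a Tesla-class GPU. Not taken from the paper. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;        /* 1M floats per message (arbitrary) */
        float *d_buf = NULL;
        cudaMalloc((void **)&d_buf, (size_t)n * sizeof(float));  /* GPU-resident buffer */

        if (rank == 0) {
            cudaMemset(d_buf, 0, (size_t)n * sizeof(float));
            /* Device pointer passed straight to MPI: the library, not the
               application, handles moving the data to the InfiniBand HCA. */
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

Without such support, the application (or the library on its behalf) would have to copy d_buf into a pinned host buffer with cudaMemcpy before the send and back to device memory after the receive; eliminating that CPU-managed staging is the source of the communication-time reduction the paper evaluates with Amber and LAMMPS.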

Citation (APA)

Shainer, G., Ayoub, A., Lui, P., Liu, T., Kagan, M., Trott, C. R., … Crozier, P. S. (2011). The development of Mellanox/NVIDIA GPUDirect over InfiniBand - A new model for GPU to GPU communications. In Computer Science - Research and Development (Vol. 26, pp. 267–273). https://doi.org/10.1007/s00450-011-0157-1
