Supporting data-driven I/O on GPUs using GPUfs

Abstract

Using discrete GPUs to process very large datasets is challenging, in particular when an algorithm exhibits unpredictable, data-driven access patterns. In this paper we investigate the utility of GPUfs, a library that provides direct access to files from GPU programs, for implementing such algorithms. We analyze the system's bottlenecks and suggest several modifications to the GPUfs design, including a new concurrent hash table for the buffer cache and a highly parallel memory allocator. We also show that implementing the workload in a warp-centric manner improves performance even further. We evaluate our changes by implementing a real image processing application that creates collages from a dataset of 10 million images. The enhanced GPUfs design improves the application performance by 5.6× on average over the original GPUfs, and outperforms both a 12-core parallel CPU implementation that uses the AVX instruction set and a standard CUDA-based GPU implementation by up to 2.5× and 3× respectively, while significantly enhancing system programmability and simplifying the application design and implementation.
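The warp-centric style mentioned in the abstract assigns each unit of work to a whole 32-lane warp rather than to a single thread, so that data-driven, divergent workloads still issue coalesced memory accesses. The sketch below is purely illustrative and is not the paper's code; the kernel, tile size, and buffer names are hypothetical, assuming the common pattern of a per-warp strided loop followed by a warp-shuffle reduction.

```cuda
// Illustrative warp-centric processing sketch (NOT from GPUfs):
// each warp cooperatively reduces one data tile.
#define TILE_SIZE 1024
#define WARP_SIZE 32

__global__ void warp_centric_sum(const float *tiles, float *sums, int n_tiles)
{
    int warp_id = (blockIdx.x * blockDim.x + threadIdx.x) / WARP_SIZE;
    int lane    = threadIdx.x % WARP_SIZE;
    if (warp_id >= n_tiles) return;

    // All 32 lanes of the warp stride over one tile together,
    // turning per-task scattered reads into coalesced loads.
    float acc = 0.0f;
    for (int i = lane; i < TILE_SIZE; i += WARP_SIZE)
        acc += tiles[warp_id * TILE_SIZE + i];

    // Warp-level tree reduction via shuffles; no shared memory
    // or __syncthreads() is needed within a single warp.
    for (int offset = WARP_SIZE / 2; offset > 0; offset /= 2)
        acc += __shfl_down_sync(0xffffffffu, acc, offset);

    if (lane == 0)
        sums[warp_id] = acc;
}
```

The same structure applies to I/O through a buffer cache: because all lanes of a warp act on the same task, a single lane (or the warp as a unit) can perform the lookup and allocation steps, reducing contention on shared structures such as a buffer-cache hash table.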

Citation (APA)

Shahar, S., & Silberstein, M. (2016). Supporting data-driven I/O on GPUs using GPUfs. In SYSTOR 2016 - Proceedings of the 9th ACM International Systems and Storage Conference. Association for Computing Machinery, Inc. https://doi.org/10.1145/2928275.2928276
