On the Acceleration of FaaS Using Remote GPU Virtualization


Abstract

Serverless computing and, in particular, Function as a Service (FaaS) has introduced novel computational approaches through its highly elastic, per-millisecond billing and scale-to-zero capabilities, which make it of interest for the computing continuum. Services such as AWS Lambda allow efficient execution of event-driven, short-lived, bursty applications, although they are limited by the amount of memory available and by the lack of GPU support for accelerated execution. To address this limitation, this paper analyses the suitability of adding GPU support to AWS Lambda through the rCUDA middleware, which provides CUDA applications with remote GPU execution capabilities. A reference architecture for data-driven accelerated processing is introduced, based on elastic queues and an event-driven object storage system to manage resource contention and GPU scheduling. The benefits and limitations are assessed through a sequence-alignment use case. The results indicate that, for certain scenarios, the use of remote GPUs in AWS Lambda is a viable approach to reduce execution time.
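
The data-driven flow summarised above can be pictured as an object upload that triggers a Lambda function, which then forwards its CUDA calls to a remote GPU server through the rCUDA client library. The sketch below is only an illustration of that idea, not the authors' implementation: the bucket layout, the alignment binary path, the GPU server address and the specific rCUDA environment variable values are assumptions made for the example.

```python
# Hypothetical sketch: an S3 upload event invokes this Lambda handler, which
# points the rCUDA client library at a remote GPU server and runs a
# CUDA-accelerated alignment binary packaged with the function.
# Names (bucket, binary, server address, env values) are illustrative only.
import os
import subprocess

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # Object that triggered the invocation (standard S3 event notification).
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    local_in = f"/tmp/{os.path.basename(key)}"
    local_out = local_in + ".aln"
    s3.download_file(bucket, key, local_in)

    # Point the rCUDA client library at the remote GPU so that the CUDA calls
    # issued by the binary are shipped over the network (values are assumed).
    env = dict(
        os.environ,
        RCUDA_DEVICE_COUNT="1",
        RCUDA_DEVICE_0="gpu-server.example.com:0",  # assumed rCUDA server
        LD_LIBRARY_PATH="/opt/rcuda/lib",           # rCUDA's replacement libcudart
    )

    # Run the CUDA-accelerated sequence-alignment binary bundled with the function.
    subprocess.run(
        ["/opt/bin/cuda-align", local_in, "-o", local_out],
        check=True,
        env=env,
    )

    # Publish the result back to object storage, which may trigger further steps.
    s3.upload_file(local_out, bucket, key + ".aln")
    return {"status": "done", "output": key + ".aln"}
```

In this view, the elastic queue and the object-storage events mentioned in the abstract sit in front of the handler and decide when a remote GPU is assigned to an invocation; the function itself only downloads the input, offloads the computation and writes the output back.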


Citation (APA)

Naranjo Delgado, D. M., Contreras, M., Moltó, G., Risco, S., Blanquer, I., Prades, J., & Silla, F. (2023). On the Acceleration of FaaS Using Remote GPU Virtualization. In ICPE 2023 - Companion of the 2023 ACM/SPEC International Conference on Performance Engineering (pp. 157–164). Association for Computing Machinery, Inc. https://doi.org/10.1145/3578245.3584933
