Quantifying the latency benefits of near-edge and in-network FPGA acceleration

Abstract

Transmitting data to cloud datacenters in distributed IoT applications introduces significant communication latency, but is often the only feasible solution when source nodes are computationally limited. To address latency concerns, cloudlets, in-network computing, and more capable edge nodes are all being explored as ways of moving processing capability towards the edge of the network. Hardware acceleration using field-programmable gate arrays (FPGAs) is also seeing increased interest due to reduced computation time and improved efficiency. This paper evaluates the implications of these offloading approaches using a neural network based image classification application as a case study, quantifying both the computation and communication latency resulting from different platform choices. We demonstrate that emerging in-network accelerator approaches offer much improved and more predictable performance, as well as better scaling to support multiple data sources.
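
The abstract frames offloading as a trade-off between communication latency and on-platform computation time. As a rough illustration of that decomposition (not taken from the paper; the platform names and all figures below are hypothetical placeholders), a minimal end-to-end latency sketch might look like this:

# Minimal sketch (illustrative only): end-to-end offload latency modelled as
# network round trip plus serialization overhead plus on-platform compute time.
# None of the numbers below are measurements from the paper.

from dataclasses import dataclass


@dataclass
class OffloadTarget:
    name: str
    rtt_ms: float      # network round-trip time to the target
    ser_ms: float      # (de)serialization / framing overhead
    compute_ms: float  # inference time on the target platform

    def end_to_end_ms(self) -> float:
        # Total latency seen by the source node for one request.
        return self.rtt_ms + self.ser_ms + self.compute_ms


# Hypothetical figures for a single image classification request.
targets = [
    OffloadTarget("cloud datacenter", rtt_ms=40.0, ser_ms=2.0, compute_ms=5.0),
    OffloadTarget("near-edge FPGA", rtt_ms=5.0, ser_ms=1.0, compute_ms=8.0),
    OffloadTarget("in-network FPGA", rtt_ms=1.0, ser_ms=0.5, compute_ms=8.0),
]

for t in targets:
    print(f"{t.name:>16}: {t.end_to_end_ms():.1f} ms")

Under this kind of model, a platform with slower compute can still win on end-to-end latency if it sits closer to the data source, which is the intuition the paper quantifies experimentally.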

Citation (APA)

Cooke, R. A., & Fahmy, S. A. (2020). Quantifying the latency benefits of near-edge and in-network FPGA acceleration. In EdgeSys 2020 - Proceedings of the 3rd ACM International Workshop on Edge Systems, Analytics and Networking, Part of EuroSys 2020 (pp. 7–12). Association for Computing Machinery. https://doi.org/10.1145/3378679.3394534
