A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators


Abstract

The rapidly changing deep learning landscape presents a unique opportunity for building inference accelerators optimized for specific datacenter-scale workloads. We propose Full-stack Accelerator Search Technique (FAST), a hardware accelerator search framework that defines a broad optimization environment covering key design decisions within the hardware-software stack, including hardware datapath, software scheduling, and compiler passes such as operation fusion and tensor padding. In this paper, we analyze bottlenecks in state-of-the-art vision and natural language processing (NLP) models, including EfficientNet and BERT, and use FAST to design accelerators capable of addressing these bottlenecks. FAST-generated accelerators optimized for single workloads improve Perf/TDP by 3.7× on average across all benchmarks compared to TPU-v3. A FAST-generated accelerator optimized for serving a suite of workloads improves Perf/TDP by 2.4× on average compared to TPU-v3. Our return on investment analysis shows that FAST-generated accelerators can potentially be practical for moderate-sized datacenter deployments.
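To make the abstract's idea of a joint hardware/software/compiler search space concrete, below is a minimal Python sketch of the general approach it describes: sample configurations that couple datapath, scheduling, and compiler-pass choices, and keep the one with the best Perf/TDP. All parameter names, value ranges, and the analytical cost model are illustrative assumptions, not FAST's actual search space, search algorithm, or simulator.

import random

# Hypothetical joint search space spanning the hardware-software stack.
# Names and values are illustrative only.
SEARCH_SPACE = {
    # Hardware datapath choices
    "pe_rows": [64, 128, 256],
    "pe_cols": [64, 128, 256],
    "l2_buffer_mib": [8, 16, 32],
    # Software scheduling choice
    "loop_order": ["output_stationary", "weight_stationary"],
    # Compiler passes
    "fuse_ops": [True, False],
    "pad_multiple": [1, 8, 128],
}

def perf_per_tdp(cfg):
    """Toy analytical cost model standing in for a real performance/power
    simulator. Returns a made-up Perf/TDP score for one configuration."""
    macs = cfg["pe_rows"] * cfg["pe_cols"]
    utilization = 0.9 if cfg["fuse_ops"] else 0.7
    if cfg["pad_multiple"] > 1:  # padding tensors to tile boundaries
        utilization *= 1.05
    if cfg["loop_order"] == "weight_stationary":
        utilization *= 1.02
    perf = macs * utilization
    tdp_watts = 50 + 0.002 * macs + 0.5 * cfg["l2_buffer_mib"]
    return perf / tdp_watts

def random_search(n_samples=1000, seed=0):
    """Sample joint configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_samples):
        cfg = {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}
        score = perf_per_tdp(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = random_search()
    print(f"best Perf/TDP = {score:.2f} with {cfg}")

A real framework of this kind would replace the toy cost model with accurate performance and power estimation and the random sampler with a stronger search strategy, but the structure above captures the key point: datapath, scheduling, and compiler decisions are optimized jointly rather than one layer at a time.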

Citation (APA)

Zhang, D., Huda, S., Songhori, E., Prabhu, K., Le, Q., Goldie, A., & Mirhoseini, A. (2022). A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators. In International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (pp. 27–42). Association for Computing Machinery. https://doi.org/10.1145/3503222.3507767
