SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices

Abstract

Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic family with extremely high energy efficiency. By using the two polarities of current to denote logic '0' and '1', AQFP devices serve as excellent carriers for binary neural network (BNN) computations. Although recent research has made initial strides toward an AQFP-based BNN accelerator, several critical challenges remain that prevent the design from being a comprehensive solution. In this paper, we propose SupeRBNN, an AQFP-based randomized BNN acceleration framework that leverages software-hardware co-optimization to make AQFP devices a feasible platform for BNN acceleration. Specifically, we investigate the randomized behavior of AQFP devices, analyze the impact of crossbar size on current attenuation, and then map the current amplitude to values suitable for BNN computation. To tackle the accumulation problem and improve overall hardware performance, we propose a stochastic computing-based accumulation module and a circuit optimization method based on clocking scheme adjustment. To effectively train BNN models compatible with the distinctive characteristics of AQFP devices, we further propose a novel randomized BNN training solution that uses algorithm-hardware co-optimization, enabling simultaneous optimization of the hardware configuration. In addition, we apply batch normalization matching and a weight rectified clamp method to further improve overall performance. We validate the SupeRBNN framework across various datasets and network architectures, comparing it with implementations based on different technologies, including CMOS, ReRAM, and superconducting RSFQ/ERSFQ. Experimental results demonstrate that our design achieves an energy efficiency approximately 7.8 × 10⁴ times higher than that of a ReRAM-based BNN framework while maintaining a similar level of model accuracy. Compared with superconductor-based counterparts, our framework achieves at least two orders of magnitude higher energy efficiency.
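
To make two of the ideas above more concrete (the randomized switching behavior of AQFP devices and the stochastic computing-based accumulation), here is a minimal Python sketch. It is an illustrative toy, not the paper's actual circuit model or accumulator design: the sigmoid switching probability, the beta sharpness parameter, and the Monte Carlo-style sampled accumulation are all assumed stand-ins introduced for illustration.

import numpy as np

rng = np.random.default_rng(0)

def randomized_binarize(x, beta=4.0):
    # Toy model of AQFP randomness: each element becomes +1 with a
    # probability that grows (sigmoid-like) with the input amplitude,
    # and -1 otherwise. 'beta' is an assumed sharpness parameter, not
    # a quantity taken from the paper.
    p_plus = 1.0 / (1.0 + np.exp(-beta * x))
    return np.where(rng.random(np.shape(x)) < p_plus, 1.0, -1.0)

def stochastic_accumulate(products, n_samples=256):
    # Monte Carlo stand-in for a stochastic-computing accumulator:
    # estimate the sum of +/-1 products by sampling terms uniformly
    # at random and rescaling, instead of using an exact adder tree.
    idx = rng.integers(0, len(products), size=n_samples)
    return products[idx].mean() * len(products)

# One BNN "neuron" under this model: binary weights and activations,
# with elementwise XNOR realized as a product of +/-1 values.
w = randomized_binarize(rng.standard_normal(1024))
a = randomized_binarize(rng.standard_normal(1024))
products = w * a
exact = products.sum()
approx = stochastic_accumulate(products)
print(f"exact popcount-style sum: {exact:+.0f}")
print(f"stochastic estimate:      {approx:+.1f}")

The takeaway is only that a noisy, sampled accumulation can approximate the exact XNOR/popcount result of a binary dot product; it is precisely this kind of randomness that the paper's algorithm-hardware co-optimized training is designed to tolerate.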

Citation (APA)

Li, Z., Yuan, G., Yamauchi, T., Masoud, Z., Xie, Y., Dong, P., … Chen, O. (2023). SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2023 (pp. 584–598). Association for Computing Machinery, Inc. https://doi.org/10.1145/3613424.3623771
