It has long been known that photonic communication can alleviate the data-movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capability to implement low-precision linear operations, such as matrix multiplications, quickly and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate (MAC) operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 μm), large vector sizes (N > 500), and low precision (≤4 bits). We discuss several proposed tunable photonic MAC systems and provide a concrete comparison between deep learning and photonic hardware using several empirically validated device and system models. We show significant potential improvements over digital electronics in energy (>10²), speed (>10³), and compute density (>10²).
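As a rough illustration of the MAC-based accounting behind such comparisons, the following Python sketch counts the multiply-accumulates in a dense matrix-vector product and converts them into energy given an assumed per-MAC cost. The function names and the per-MAC energy values are placeholders chosen for illustration only, not figures taken from the paper.

```python
# Hypothetical illustration of MAC counting and energy-per-MAC bookkeeping.
# The per-MAC energies below are placeholder values, not results from the paper.

def macs_per_mvm(n: int) -> int:
    """A dense N x N matrix-vector multiply performs N^2 multiply-accumulates."""
    return n * n

def total_energy_joules(num_macs: int, energy_per_mac_j: float) -> float:
    """Total energy is the MAC count times the energy spent per MAC."""
    return num_macs * energy_per_mac_j

if __name__ == "__main__":
    N = 500                      # vector size in the regime discussed above
    macs = macs_per_mvm(N)       # 250,000 MACs per matrix-vector product

    # Placeholder per-MAC energies (order-of-magnitude illustration only):
    digital_pj_per_mac = 1.0     # assumed digital electronic cost, in pJ
    photonic_pj_per_mac = 0.01   # assumed photonic cost, ~100x lower, in pJ

    e_digital = total_energy_joules(macs, digital_pj_per_mac * 1e-12)
    e_photonic = total_energy_joules(macs, photonic_pj_per_mac * 1e-12)
    print(f"MACs per MVM: {macs}")
    print(f"Digital:  {e_digital:.3e} J per MVM")
    print(f"Photonic: {e_photonic:.3e} J per MVM (ratio {e_digital / e_photonic:.0f}x)")
```

Under these assumed costs, the per-operation energy gap compounds directly with the N² MAC count of each matrix-vector product, which is why the comparison is framed per MAC rather than per inference.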
Nahmias, M. A., De Lima, T. F., Tait, A. N., Peng, H. T., Shastri, B. J., & Prucnal, P. R. (2020). Photonic Multiply-Accumulate Operations for Neural Networks. IEEE Journal of Selected Topics in Quantum Electronics, 26(1). https://doi.org/10.1109/JSTQE.2019.2941485