Evaluating the impact of optical interconnects on a multi-chip machine-learning architecture

3 Citations · 15 Mendeley Readers

Abstract

Following trends that emphasize neural networks for machine learning, many studies of computing systems have focused on accelerating deep neural networks. These studies often propose accelerators specialized for neural networks and cluster architectures composed of interconnected accelerator chips. We observed that inter-accelerator communication within a cluster has a significant impact on neural-network training time. In this paper, we show the advantages of optical interconnects for multi-chip machine-learning architectures by demonstrating the performance improvements gained from replacing electrical interconnects with optical ones in an existing multi-chip system. We propose a highly practical optical interconnect implementation and devise an arithmetic performance model to fairly assess the impact of optical interconnects on a machine-learning accelerator platform. In our evaluation of nine convolutional neural networks with various input sizes, 100 Gbps and 400 Gbps optical interconnects reduce training time by an average of 20.6% and 35.6%, respectively, compared to a baseline system with 25.6 Gbps electrical interconnects.
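The abstract does not spell out the paper's arithmetic performance model, but the intuition behind the reported speedups can be sketched with a simple assumption: per-iteration time splits into a fixed compute part and a communication part that scales inversely with link bandwidth. The sketch below is illustrative only, with hypothetical workload numbers, not the authors' actual model or results.

```python
# Illustrative sketch (NOT the paper's model): estimate training-time
# reduction when inter-accelerator communication time scales inversely
# with link bandwidth while compute time stays fixed.

def training_time(compute_s: float, comm_bits: float, bandwidth_gbps: float) -> float:
    """Per-iteration time: fixed compute plus bandwidth-bound communication."""
    comm_s = comm_bits / (bandwidth_gbps * 1e9)
    return compute_s + comm_s

def reduction(baseline_gbps: float, upgraded_gbps: float,
              compute_s: float, comm_bits: float) -> float:
    """Fractional training-time reduction from upgrading the interconnect."""
    base = training_time(compute_s, comm_bits, baseline_gbps)
    new = training_time(compute_s, comm_bits, upgraded_gbps)
    return 1.0 - new / base

# Hypothetical workload: 1 s of compute and 40 Gbit of gradient traffic
# per iteration (values chosen purely for illustration).
for gbps in (100.0, 400.0):
    r = reduction(25.6, gbps, compute_s=1.0, comm_bits=40e9)
    print(f"{gbps:.0f} Gbps: {r:.1%} reduction vs. 25.6 Gbps")
```

Under this toy model, the benefit of a faster link saturates once communication is no longer the bottleneck, which is why quadrupling bandwidth from 100 to 400 Gbps yields less than a 4x end-to-end gain — consistent in spirit with the diminishing returns between the paper's 20.6% and 35.6% averages.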

Citation (APA)

Ro, Y., Lee, E., & Ahn, J. H. (2018). Evaluating the impact of optical interconnects on a multi-chip machine-learning architecture. Electronics (Switzerland), 7(8). https://doi.org/10.3390/electronics7080130
