Training large-scale optoelectronic neural networks with dual-neuron optical-artificial learning

16 citations · 15 Mendeley readers

This article is free to access.

Abstract

Optoelectronic neural networks (ONNs) are a promising avenue in AI computing due to their potential for parallelization, power efficiency, and speed. Diffractive neural networks, which process information by propagating encoded light through trained optical elements, have garnered particular interest. However, training large-scale diffractive networks is hindered by the computational and memory cost of modeling optical diffraction. Here, we present DANTE, a dual-neuron optical-artificial learning architecture: optical neurons model optical diffraction, while artificial neurons approximate the intensive optical-diffraction computations with lightweight functions. DANTE also improves convergence by alternating iterative global artificial-learning steps with local optical-learning steps. In simulation experiments, DANTE successfully trains large-scale ONNs with 150 million neurons on ImageNet, which was previously unattainable, and substantially accelerates training on the CIFAR-10 benchmark compared with single-neuron learning. In physical experiments, we develop a two-layer ONN system based on DANTE that effectively extracts features to improve the classification of natural images.
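To make the dual-neuron idea concrete, the sketch below contrasts the two kinds of units the abstract describes: a costly "optical neuron" that physically models diffraction (here via a standard angular-spectrum propagation, a common choice for simulating diffractive layers), and a lightweight "artificial neuron" surrogate acting on the resulting intensity. This is an illustrative sketch only, not the paper's implementation; the function names, the linear surrogate, and all parameter values (wavelength, pixel pitch, propagation distance) are assumptions.

```python
import numpy as np

def optical_neuron(field, phase_mask, wavelength=532e-9, pixel=8e-6, z=0.05):
    """Costly 'optical neuron' (illustrative): modulate a complex field with a
    trained phase mask, then propagate it a distance z using the
    angular-spectrum method. This is the expensive step DANTE approximates."""
    n = field.shape[0]
    u = field * np.exp(1j * phase_mask)            # phase modulation by the optical element
    fx = np.fft.fftfreq(n, d=pixel)                # spatial-frequency grid
    FX, FY = np.meshgrid(fx, fx)
    k2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(k2, 0.0))  # clamp evanescent components
    H = np.exp(1j * kz * z)                        # unit-modulus transfer function
    return np.fft.ifft2(np.fft.fft2(u) * H)

def artificial_neuron(intensity, W):
    """Lightweight 'artificial neuron' surrogate (assumption: a single linear
    map on the flattened intensity), standing in for the full diffraction
    computation during global artificial-learning steps."""
    return W @ intensity.ravel()
```

Because the transfer function has unit modulus, the optical forward pass conserves energy, which is a quick sanity check on the propagation model; the surrogate, by contrast, can be fit cheaply (e.g. by least squares) to mimic the optical neuron's input-output behavior on sampled data.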

Citation (APA):

Yuan, X., Wang, Y., Xu, Z., Zhou, T., & Fang, L. (2023). Training large-scale optoelectronic neural networks with dual-neuron optical-artificial learning. Nature Communications, 14(1). https://doi.org/10.1038/s41467-023-42984-y
