Reducing Training Time of Deep Learning Based Digital Backpropagation by Stacking


Abstract

A method for reducing the training time of deep learning based digital backpropagation (DL-DBP) is presented. The method divides a link into smaller sections: a single section is compensated by the DL-DBP algorithm, and the same trained model is then reapplied to the subsequent sections. We show in a 32 GBd 16QAM 2400 km 5-channel wavelength division multiplexing transmission link experiment that the proposed stacked DL-DBP provides a 0.41 dB gain with respect to the linear compensation scheme. This compares with a 0.56 dB gain achieved by a non-stacked DL-DBP scheme, which comes at the price of a 203% increase in total training time. Furthermore, it is shown that by additionally training only the last section of the stacked DL-DBP, the compensation performance can be increased to 0.48 dB.
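The stacking idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `section_model` is a hypothetical stand-in for the trained neural-network section compensator, and the scalar `weights` is a placeholder for its learned parameters. The point is only the reuse pattern, i.e. applying one trained section model repeatedly along the link instead of training a separate model per section.

```python
import numpy as np

# Hypothetical stand-in for a trained DL-DBP section model.
# In the paper this is a neural network trained to compensate
# one fiber section; here it is a trivial linear operation.
def section_model(signal, weights):
    # Apply the learned compensation of a single link section.
    return weights * signal

def stacked_dl_dbp(signal, weights, n_sections):
    # Stacking: reuse the SAME trained section model for every
    # section of the link, so only one section is ever trained.
    for _ in range(n_sections):
        signal = section_model(signal, weights)
    return signal

rx = np.array([1.0, 2.0, 3.0])        # toy received samples
out = stacked_dl_dbp(rx, weights=0.5, n_sections=3)
print(out)  # each sample scaled by 0.5**3 = 0.125
```

The paper's refinement of training only the last section would correspond here to fine-tuning the parameters used in the final loop iteration while keeping the shared model fixed for the earlier sections.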

Citation (APA)
Bitachon, B. I., Eppenberger, M., Baeuerle, B., & Leuthold, J. (2022). Reducing Training Time of Deep Learning Based Digital Backpropagation by Stacking. IEEE Photonics Technology Letters, 34(7), 387–390. https://doi.org/10.1109/LPT.2022.3162157
