LSTM-Autoencoder Deep Learning Technique for PAPR Reduction in Visible Light Communication


Abstract

Visible light communication (VLC) is a relatively new wireless communication technology that supports high data rates. Orthogonal frequency division multiplexing (OFDM) is widely employed in VLC because it enables high-speed transmission and mitigates inter-symbol interference. The peak-to-average power ratio (PAPR) degrades the performance of OFDM systems, particularly in VLC, where the nonlinearity of light-emitting diodes (LEDs) distorts high-peak signals. The proposed Long Short-Term Memory Autoencoder (LSTM-AE) method combines an autoencoder with an LSTM to learn a compact representation of the input, allowing the model to handle variable-length input sequences and to predict or generate variable-length output sequences. This study compares the proposed model with several existing PAPR reduction strategies and shows that it achieves a greater reduction in the PAPR of the transmitted signal while maintaining the bit error rate (BER). The model also offers a flexible trade-off between PAPR and BER.
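To make the PAPR problem concrete, the sketch below (an illustrative NumPy example, not the authors' LSTM-AE model) builds one OFDM symbol from random QPSK subcarriers and measures its PAPR. Because VLC drives an LED with a real-valued intensity signal, OFDM variants used in VLC (e.g. DCO-OFDM) impose Hermitian symmetry on the subcarrier vector before the IFFT; the subcarrier count N = 64 and the QPSK mapping are assumptions for illustration only.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (illustrative choice)

# Random QPSK symbols on the N/2 - 1 usable subcarriers.
bits = rng.integers(0, 4, N // 2 - 1)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Hermitian-symmetric subcarrier vector: X[N-k] = conj(X[k]),
# with X[0] = X[N/2] = 0, so the IFFT output is real-valued
# and can drive an LED after biasing (as in DCO-OFDM).
X = np.zeros(N, dtype=complex)
X[1:N // 2] = qpsk
X[N // 2 + 1:] = np.conj(qpsk[::-1])

x = np.fft.ifft(X)
assert np.allclose(x.imag, 0)  # real time-domain OFDM signal

print(f"PAPR of one OFDM symbol: {papr_db(x.real):.2f} dB")
```

Summing many independent subcarriers produces occasional large peaks, so the measured PAPR is typically several dB above 0; it is these peaks that the LED nonlinearity clips, which is what PAPR reduction schemes such as the proposed LSTM-AE aim to avoid.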

Citation (APA):

Mohamed, A., Tag Eldien, A. S., Fouda, M. M., & Saad, R. S. (2022). LSTM-Autoencoder Deep Learning Technique for PAPR Reduction in Visible Light Communication. IEEE Access, 10, 113028–113034. https://doi.org/10.1109/ACCESS.2022.3216574
