Reinforcement learning-based link adaptation in long delayed underwater acoustic channel

  • Wang J
  • Yuen C
  • Guan Y
  • et al.
Citations: N/A
Readers: 5 (Mendeley users who have this article in their library)

Abstract

In this paper, we apply reinforcement learning, an important branch of machine learning, to formulate an optimal self-learning strategy for interacting with an unknown and dynamically varying underwater channel. The dynamic and volatile nature of the underwater channel makes it impractical to rely on prior knowledge of the environment. Using reinforcement learning, the problem of selecting the optimal parameters for transmitting data packets can be solved, and better throughput can be achieved without any prior environmental information. However, the slow speed of sound underwater means that the delay in returning a packet acknowledgement from the receiver to the sender is substantial, which degrades the convergence speed of the reinforcement learning algorithm, since reinforcement learning requires timely acknowledgement feedback from the receiver. We therefore combine a juggling-like ARQ (Automatic Repeat Request) mechanism with reinforcement learning to mitigate the long-delayed reward feedback problem. The simulation is implemented in OPNET.
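The paper itself does not include code here; the following is only a minimal illustrative sketch of the general idea described in the abstract: an epsilon-greedy action-value learner that selects transmission parameter sets, keeps sending new packets while earlier acknowledgements are still in flight (in the spirit of the juggling-like ARQ), and updates its estimates only when each delayed ACK arrives. The candidate parameter sets, the toy channel model, and the round-trip delay are invented placeholders, not the authors' OPNET simulation.

```python
import random
from collections import deque

# Hypothetical candidate parameter sets: (modulation, coding rate). Illustrative only.
ACTIONS = [("BPSK", 0.5), ("QPSK", 0.5), ("QPSK", 0.75), ("8PSK", 0.75)]

def toy_channel_reward(action_idx):
    """Toy stand-in for the acoustic channel: higher-order schemes carry more
    bits per symbol but fail more often. Not the paper's channel model."""
    bits_per_symbol = [1, 2, 2, 3][action_idx]
    success_prob = [0.95, 0.80, 0.65, 0.40][action_idx]
    rate = bits_per_symbol * ACTIONS[action_idx][1]
    return rate if random.random() < success_prob else 0.0

class DelayedFeedbackLinkAdapter:
    """Epsilon-greedy action-value learner whose reward feedback (the ACK)
    arrives one long acoustic round trip after transmission. The sender does
    not stall: it keeps transmitting and applies each reward when it arrives."""

    def __init__(self, n_actions, epsilon=0.1, step_size=0.1):
        self.epsilon = epsilon
        self.step_size = step_size
        self.values = [0.0] * n_actions      # estimated throughput per parameter set
        self.pending = deque()               # (action, time the ACK becomes available)

    def select_action(self):
        # Explore occasionally, otherwise exploit the current best estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def send(self, action, now, rtt):
        # The ACK for this packet is only usable after one round-trip delay.
        self.pending.append((action, now + rtt))

    def process_acks(self, now):
        # Consume every ACK whose round-trip delay has elapsed and update estimates.
        while self.pending and self.pending[0][1] <= now:
            action, _ = self.pending.popleft()
            reward = toy_channel_reward(action)
            self.values[action] += self.step_size * (reward - self.values[action])

# Run: one packet per slot, ACKs return roughly 20 slots later (long acoustic RTT).
adapter = DelayedFeedbackLinkAdapter(len(ACTIONS))
for t in range(2000):
    adapter.process_acks(now=t)
    adapter.send(adapter.select_action(), now=t, rtt=20)

print({ACTIONS[a]: round(v, 3) for a, v in enumerate(adapter.values)})
```

In this sketch the learner converges toward the parameter set with the highest expected delivered rate despite never receiving feedback in the same slot as the transmission, which is the delayed-reward situation the abstract describes.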

Citation (APA)

Wang, J., Yuen, C., Guan, Y. L., & Ge, F. (2019). Reinforcement learning-based link adaptation in long delayed underwater acoustic channel. MATEC Web of Conferences, 283, 07001. https://doi.org/10.1051/matecconf/201928307001
