Joint relay and channel selection against mobile and smart jammer: A deep reinforcement learning approach

13 citations · 10 Mendeley readers
Abstract

This paper investigates the joint relay and channel selection problem for cooperative communications in a dynamic jamming environment using a deep reinforcement learning (DRL) algorithm. Recent jammers are mobile and smart, switching among multiple jamming patterns. Such jammers pose serious challenges to reliable communication: a huge environment state space, tightly coupled joint action selections, and real-time decision requirements. To cope with these challenges, a DRL-based relay-assisted cooperative communication scheme is proposed. In this scheme, the joint selection problem is formulated as a Markov decision process (MDP), and a double deep Q-network (DDQN) based anti-jamming scheme is proposed to address the unknown and dynamic jamming behaviors. Concretely, a joint decision-making network composed of three sub-networks is designed, and an independent learning method for each sub-network is proposed. Simulation results show that the user agent is able to anticipate the jammer's behavior and elude jamming in advance. Furthermore, compared with a sensing-based algorithm, a Q-learning-based algorithm, and existing DRL-based anti-jamming approaches, the proposed algorithm maintains a higher average normalized throughput.
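The DDQN scheme the abstract refers to rests on the double-Q target: the online network selects the next action, while a separate target network evaluates it, which reduces the overestimation bias of plain DQN. Below is a minimal, hypothetical sketch of that target computation (function and variable names are illustrative, not from the paper; the paper's actual scheme splits decisions across three sub-networks):

```python
import numpy as np

def ddqn_target(q_online_next, q_target_next, reward, gamma=0.99, done=False):
    """Double-DQN target: the online network picks the next action,
    the target network evaluates it (decoupling selection from evaluation)."""
    if done:
        return float(reward)
    a_star = int(np.argmax(q_online_next))          # action selection: online net
    return float(reward + gamma * q_target_next[a_star])  # evaluation: target net

# Toy example: Q-values over 3 joint (relay, channel) actions in the next state
q_online_next = np.array([1.0, 3.0, 2.0])
q_target_next = np.array([0.5, 2.5, 4.0])
y = ddqn_target(q_online_next, q_target_next, reward=1.0, gamma=0.9)
# a_star = 1 (online argmax), so y = 1.0 + 0.9 * 2.5 = 3.25
```

Note that plain DQN would instead use max(q_target_next) = 4.0 here, giving a larger (potentially overestimated) target of 4.6.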

Citation (APA)

Yuan, H., Song, F., Chu, X., Li, W., Wang, X., Han, H., & Gong, Y. (2021). Joint relay and channel selection against mobile and smart jammer: A deep reinforcement learning approach. IET Communications, 15(17), 2237–2251. https://doi.org/10.1049/cmu2.12257
