Deep Q-learning based resource allocation in industrial wireless networks for URLLC


Abstract

Ultra-reliable low-latency communication (URLLC) is one of the promising services offered by fifth-generation (5G) technology for industrial wireless networks. Reinforcement learning is gaining attention due to its potential to learn from observed as well as unobserved results. Industrial wireless nodes (IWNs) may vary dynamically due to internal or external variables, and the network should therefore avoid dispensable redesigns of its resource allocation. Traditional methods are explicitly programmed, making it difficult for networks to react dynamically. To overcome this, a deep Q-learning (DQL)-based resource allocation strategy is proposed that learns from the experienced trade-offs and interdependencies in the IWN. The findings indicate that the algorithm can identify the best-performing measures to improve resource allocation, and that DQL further helps achieve the control needed for an ultra-reliable, low-latency IWN. Extensive simulations show that the suggested technique distributes URLLC resources fairly. In addition, the authors assess the impact of DQL's inherent learning parameters on resource allocation.
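As a rough illustration of the Q-learning update that underlies a DQL allocator (this is a minimal sketch on a toy problem, not the paper's algorithm; the environment, reward, and all names below are assumptions, and a Q-table stands in for the deep network):

```python
import random

# Hypothetical toy setup: N_NODES industrial wireless nodes compete for one
# resource block per step; the agent learns which node to serve each step.
N_NODES = 3
ACTIONS = list(range(N_NODES))        # action i = grant the block to node i
EPISODES, STEPS = 300, 20
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1     # learning rate, discount, exploration

def step(backlog, action):
    """Serve `action`; a random node receives new traffic.

    Reward penalises total queued backlog, so fair draining scores well."""
    backlog = list(backlog)
    backlog[action] = max(0, backlog[action] - 1)   # served node drains
    arrival = random.randrange(N_NODES)             # random packet arrival
    backlog[arrival] = min(2, backlog[arrival] + 1)
    reward = -sum(backlog)
    return tuple(backlog), reward

Q = {}  # tabular stand-in for the Q-network in this sketch

def q(state, a):
    return Q.get((state, a), 0.0)

random.seed(0)
for _ in range(EPISODES):
    state = (0,) * N_NODES
    for _ in range(STEPS):
        if random.random() < EPS:
            a = random.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda x: q(state, x))     # exploit
        nxt, r = step(state, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))
        target = r + GAMMA * max(q(nxt, x) for x in ACTIONS)
        Q[(state, a)] = q(state, a) + ALPHA * (target - q(state, a))
        state = nxt

# Greedy policy for a state where node 0 is most backlogged.
policy = max(ACTIONS, key=lambda x: q((2, 0, 0), x))
print(policy)
```

In a full DQL implementation, the Q-table would be replaced by a neural network trained on sampled transitions (experience replay), which is what allows the allocator to generalise across dynamically varying IWN states.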

Citation (APA)

Bhardwaj, S., Ginanjar, R. R., & Kim, D. S. (2020). Deep Q-learning based resource allocation in industrial wireless networks for URLLC. IET Communications, 14(6), 1022–1027. https://doi.org/10.1049/iet-com.2019.1211
