Dynamic handoff policy for RAN slicing by exploiting deep reinforcement learning


Abstract

It has been widely acknowledged that network slicing is a key architectural technology for accommodating diversified services in the next-generation network (5G). By partitioning the underlying physical network into multiple dedicated logical networks, 5G can support a variety of services with stringent and diverse requirements. When network slicing is implemented in the radio access network (RAN), user handoff becomes considerably more complicated than in traditional mobile networks: both the physical resource constraints of base stations and the logical connection constraints of network slices must be considered in each handoff decision, so an intelligent handoff policy becomes imperative. In this paper, we model handoff in RAN slicing as a Markov decision process and resort to deep reinforcement learning to pursue long-term performance improvement in terms of user quality of service and network throughput. The effectiveness of the proposed handoff policy is validated via simulation experiments.
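
The abstract describes formulating the handoff decision as a Markov decision process and solving it with deep reinforcement learning. The sketch below shows one plausible way such a policy could be trained with a DQN-style agent in PyTorch; it is not taken from the paper. The state encoding (per-base-station residual capacity plus a one-hot slice indicator), the action set (candidate base stations), the placeholder reward, and the toy environment are all illustrative assumptions.

```python
# Minimal, illustrative DQN-style handoff agent for RAN slicing (assumptions,
# not the authors' implementation): state = residual capacity of each base
# station + the requesting user's slice (one-hot); action = target base station.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

N_BS = 4                      # number of candidate base stations (assumed)
N_SLICES = 3                  # number of network slices (assumed)
STATE_DIM = N_BS + N_SLICES   # residual capacities + one-hot slice of the user

class QNet(nn.Module):
    """Maps a handoff state to one Q-value per candidate base station."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_BS),
        )

    def forward(self, x):
        return self.net(x)

def select_bs(qnet, state, eps):
    """Epsilon-greedy choice of the target base station for a handoff request."""
    if random.random() < eps:
        return random.randrange(N_BS)
    with torch.no_grad():
        return int(qnet(state.unsqueeze(0)).argmax(dim=1).item())

def train_step(qnet, target_net, optimizer, batch, gamma=0.99):
    """One DQN update: minimise the TD error on a replay-buffer minibatch."""
    states, actions, rewards, next_states = batch
    q = qnet(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
    loss = F.mse_loss(q, rewards + gamma * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def random_state():
    """Stand-in environment state: random residual capacities and a random slice."""
    caps = torch.rand(N_BS)
    slice_onehot = F.one_hot(torch.randint(N_SLICES, (1,)), N_SLICES).float().squeeze(0)
    return torch.cat([caps, slice_onehot])

qnet, target_net = QNet(), QNet()
target_net.load_state_dict(qnet.state_dict())
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)

state = random_state()
for step in range(500):
    action = select_bs(qnet, state, eps=max(0.05, 1.0 - step / 400))
    # Placeholder reward: prefer the least-loaded base station (assumption).
    reward = float(state[action])
    next_state = random_state()
    buffer.append((state, action, reward, next_state))
    state = next_state
    if len(buffer) >= 64:
        s, a, r, s2 = zip(*random.sample(list(buffer), 64))
        batch = (torch.stack(s), torch.tensor(a), torch.tensor(r), torch.stack(s2))
        train_step(qnet, target_net, optimizer, batch)
    if step % 100 == 0:
        target_net.load_state_dict(qnet.state_dict())
```

In the paper's setting the reward would combine user quality of service with network throughput, and the environment would reflect actual slice admission constraints; the stand-in reward and random transitions above only illustrate the training loop.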

Cite

APA: Wu, Y., Zhao, G., Ni, D., & Du, J. (2021). Dynamic handoff policy for RAN slicing by exploiting deep reinforcement learning. EURASIP Journal on Wireless Communications and Networking, 2021(1). https://doi.org/10.1186/s13638-021-01939-x
