Improving the performance of tasks offloading for internet of vehicles via deep reinforcement learning methods

Abstract

With the rapid development of communication technologies, the quality of daily life has improved through applications of smart communications and networking, such as intelligent transportation and mobile service computing. However, high user demands for quality of service (QoS) are forcing intelligent transportation systems to continuously improve responsiveness and reduce the task-offloading delay in the internet of vehicles (IoV). To meet the low-latency requirements of vehicle task offloading, this article proposes an offloading scheme that combines mobile edge computing (MEC) and deep reinforcement learning (DRL). First, a realistic map is simulated, the task queue is initialized, and a task-offloading environment with multiple service nodes is built. Then, an algorithm that combines deep learning with reinforcement learning, namely the deep Q-learning network (DQN) algorithm, is developed to optimize the offloading scheme by reducing the offloading latency. Finally, because complete state information cannot be observed effectively in the environment, a long short-term memory (LSTM) model is applied within the DQN to train its neural network and improve offloading efficiency. The simulation results show that MEC-based vehicle task offloading can effectively reduce offloading latency.
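The core idea the abstract describes — an agent learning which service node to offload a task to so that expected delay is minimized — can be illustrated with a heavily simplified toy. The sketch below replaces the paper's DQN-with-LSTM with single-state tabular Q-learning (a lookup table standing in for the neural network), and the latency figures, action set, and reward shaping are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical, simplified offloading setting: a vehicle chooses where to
# process each task -- locally (action 0) or on one of two MEC nodes
# (actions 1 and 2). Mean delays below are made-up illustrative numbers.
LATENCY = {0: 9.0, 1: 3.0, 2: 5.0}
ACTIONS = list(LATENCY)

def step(action, rng):
    """Return a noisy offloading delay for the chosen target."""
    return LATENCY[action] + rng.uniform(-0.5, 0.5)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over the offloading choice.

    The paper trains a DQN (a neural Q-function, augmented with an LSTM
    for partial observability); a plain Q-table stands in for it here.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        reward = -step(a, rng)  # lower delay => higher reward
        # single-state Q-update: the "next state" is the same state
        q[a] += alpha * (reward + gamma * max(q.values()) - q[a])
    return q

q_table = train()
best_target = max(q_table, key=q_table.get)  # node with lowest learned delay
```

With these assumed latencies the agent converges on the fastest MEC node; the paper's contribution is doing this with a learned neural Q-function over a realistic vehicular environment rather than a fixed table.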

Wang, T., Luo, X., & Zhao, W. (2022). Improving the performance of tasks offloading for internet of vehicles via deep reinforcement learning methods. IET Communications, 16(10), 1230–1240. https://doi.org/10.1049/cmu2.12334
