Deep Reinforcement Learning-Based Workload Scheduling for Edge Computing


Abstract

Edge computing is a new paradigm that provides cloud computing capabilities at the edge of the network, near mobile users. It offers an effective way to support mobile devices running computation-intensive and delay-sensitive tasks. However, the network edge is a dynamic environment with a large number of devices, high user mobility, heterogeneous applications, and intermittent traffic. In such an environment, edge computing often suffers from unbalanced resource allocation, which leads to task failures and degrades system performance. To tackle this problem, we propose a deep reinforcement learning (DRL)-based workload scheduling approach that aims to balance the workload and reduce both the service time and the failed task rate. We adopt the Deep Q-Network (DQN) algorithm to cope with the complexity and high dimensionality of the workload scheduling problem. Simulation results show that, compared with other approaches, the proposed approach achieves the best performance in terms of service time, virtual machine (VM) utilization, and failed task rate. Our DRL-based approach provides an efficient solution to the workload scheduling problem in edge computing.
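The abstract does not give implementation details, so the following is only a minimal sketch of how a DQN-based workload scheduler of this kind is commonly structured (here in PyTorch). The state layout (per-VM utilizations plus a couple of task features), the action space (index of the edge VM chosen for the incoming task), and all constants are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal DQN scheduler sketch (assumed state/action/reward design, not the paper's code).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_VMS = 8              # assumed number of candidate edge VMs (one action per VM)
STATE_DIM = NUM_VMS + 2  # assumed state: VM utilizations + task size + delay budget


class QNetwork(nn.Module):
    """Maps a scheduling state to one Q-value per candidate VM."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNScheduler:
    def __init__(self, state_dim=STATE_DIM, num_actions=NUM_VMS,
                 gamma=0.99, lr=1e-3, epsilon=0.1, buffer_size=10_000):
        self.q = QNetwork(state_dim, num_actions)
        self.target_q = QNetwork(state_dim, num_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma = gamma
        self.epsilon = epsilon
        self.num_actions = num_actions
        self.replay = deque(maxlen=buffer_size)

    def select_vm(self, state):
        """Epsilon-greedy choice of the VM that should run the task."""
        if random.random() < self.epsilon:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            q_values = self.q(torch.tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())

    def store(self, state, action, reward, next_state, done):
        # Reward could combine service time, load balance, and task success
        # (an assumption; the paper defines its own objective).
        self.replay.append((state, action, reward, next_state, done))

    def train_step(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        states = torch.tensor(states, dtype=torch.float32)
        actions = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
        rewards = torch.tensor(rewards, dtype=torch.float32)
        next_states = torch.tensor(next_states, dtype=torch.float32)
        dones = torch.tensor(dones, dtype=torch.float32)

        # Standard DQN target: r + gamma * max_a' Q_target(s', a') for non-terminal s'.
        q_sa = self.q(states).gather(1, actions).squeeze(1)
        with torch.no_grad():
            target = rewards + self.gamma * (1 - dones) * self.target_q(next_states).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)

        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        """Periodically copy online weights into the target network."""
        self.target_q.load_state_dict(self.q.state_dict())
```

In use, the scheduler would observe the current VM utilizations and task features, call select_vm to dispatch the task, record the resulting transition with store, and interleave train_step and occasional sync_target calls during simulation.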

Cite

APA

Zheng, T., Wan, J., Zhang, J., & Jiang, C. (2022). Deep Reinforcement Learning-Based Workload Scheduling for Edge Computing. Journal of Cloud Computing, 11(1). https://doi.org/10.1186/s13677-021-00276-0
