Adaptive real-time offloading decision-making for mobile edges: Deep reinforcement learning framework and simulation results


Abstract

This paper proposes a novel dynamic offloading decision method inspired by deep reinforcement learning (DRL). Real-time communication in mobile edge computing systems requires an efficient task offloading algorithm. Each time the proposed DRL-based dynamic algorithm selects an action (offloading enabled, i.e., computing in the cloud, or offloading disabled, i.e., computing at the local edge), it must account for real-time, seamless data transmission and the energy efficiency of mobile edge devices. The proposed dynamic offloading decision algorithm is therefore designed to jointly optimize delay and energy efficiency within a DRL framework. Performance evaluation through data-intensive simulations verifies that the proposed dynamic algorithm achieves the desired performance.
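To make the decision structure concrete, the following is a minimal sketch of a reinforcement-learning agent choosing between the two actions the abstract names (offload to the cloud vs. compute at the local edge) with a reward that jointly penalizes delay and energy. The cost model, state discretization, and weights below are illustrative assumptions, not the paper's actual DRL formulation:

```python
import random

ACTIONS = (0, 1)  # 0: compute at the local edge, 1: offload to the cloud

def reward(action, channel_quality, w_delay=0.5, w_energy=0.5):
    """Illustrative joint delay/energy cost (assumed, not from the paper):
    offloading is fast on a good channel but transmission energy grows as
    the channel degrades; local computing has fixed delay and energy."""
    if action == 1:  # offload to cloud
        delay = 1.0 / max(channel_quality, 0.1)  # better channel -> lower delay
        energy = 2.0 - channel_quality           # worse channel -> more tx energy
    else:            # compute locally
        delay, energy = 2.0, 1.0                 # assumed constant local costs
    return -(w_delay * delay + w_energy * energy)

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning over a coarsely discretized channel state;
    one offloading decision is made per unit time (decision epoch)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(10)]  # Q[state][action], 10 channel bins
    for _ in range(episodes):
        ch = rng.random()                         # channel quality in (0, 1)
        s = min(int(ch * 10), 9)
        if rng.random() < epsilon:                # explore
            a = rng.choice(ACTIONS)
        else:                                     # exploit
            a = max(ACTIONS, key=lambda x: Q[s][x])
        r = reward(a, ch)
        Q[s][a] += alpha * (r - Q[s][a])          # one-step update per epoch
    return Q

Q = train()
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(10)]
print(policy)
```

Under these assumed costs, the learned policy offloads when the channel is good and falls back to local edge computing when it is poor, illustrating how a DRL agent can trade delay against energy per decision epoch. The paper itself uses a deep network rather than this tabular sketch.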

Citation (APA)

Park, S., Kwon, D., Kim, J., Lee, Y. K., & Cho, S. (2020). Adaptive real-time offloading decision-making for mobile edges: Deep reinforcement learning framework and simulation results. Applied Sciences (Switzerland), 10(5). https://doi.org/10.3390/app10051663
