In recent years, eXtended Reality (XR) applications have been widely employed in various scenarios, e.g., health care, education, manufacturing, etc. Such applications are now easily accessible via mobile phones, tablets, or wearable devices. However, these devices normally suffer from constraints in terms of battery capacity and processing power, limiting the range of applications supported or lowering the Quality of Experience. One effective way to address these issues is to offload the computation tasks to edge servers deployed at the network edge, e.g., at base stations or WiFi access points. This paradigm, known as Multi-access Edge Computing (MEC), overcomes the long latency caused by the long propagation distances of the traditional cloud computing approach. XR devices, which are limited in computation resources and energy, can then benefit from offloading computation-intensive tasks to MEC servers. However, as XR applications comprise multiple tasks with a variety of latency and energy consumption requirements, it is important to decide whether or not each task should be offloaded to a MEC server. This paper proposes a Deep Reinforcement Learning-based offloading scheme for XR devices (DRLXR). The proposed scheme is trained to derive close-to-optimal offloading decisions by optimizing a utility function that considers both energy consumption and execution delay at the XR devices. The simulation results show that the proposed scheme outperforms its counterparts in terms of total execution latency and energy consumption.
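To make the delay-energy trade-off behind such an offloading decision concrete, the following is a minimal, illustrative Python sketch of how a reinforcement-learning agent could choose between local execution and offloading by maximizing a utility that weights execution delay against device energy consumption. All numerical parameters, the delay/energy models, and the tabular Q-learning stand-in are assumptions for illustration only; they are not taken from the paper, which trains a deep reinforcement learning agent (DRLXR) over a richer state space.

```python
# Illustrative sketch only: toy one-state Q-learning offloading decision.
# All constants and models below are assumptions, not values from the paper.
import random

# Hypothetical per-task parameters: CPU cycles required and input data size (bits).
TASK_CYCLES = 2e9
TASK_BITS = 8e6

# Hypothetical device, edge, and network characteristics.
LOCAL_CPU_HZ = 1e9          # XR device CPU frequency
EDGE_CPU_HZ = 10e9          # MEC server CPU frequency
UPLINK_BPS = 20e6           # wireless uplink rate to the MEC server
LOCAL_POWER_W = 2.0         # device power draw while computing locally
TX_POWER_W = 0.5            # device power draw while transmitting

W_DELAY, W_ENERGY = 0.5, 0.5  # assumed utility weights


def cost(action: int) -> float:
    """Weighted delay + energy cost of one task; action 0 = local, 1 = offload."""
    if action == 0:
        delay = TASK_CYCLES / LOCAL_CPU_HZ
        energy = LOCAL_POWER_W * delay
    else:
        tx_delay = TASK_BITS / UPLINK_BPS
        delay = tx_delay + TASK_CYCLES / EDGE_CPU_HZ
        energy = TX_POWER_W * tx_delay  # device spends energy only on transmission
    return W_DELAY * delay + W_ENERGY * energy


# Tabular Q-learning stand-in for the DRL agent (single-state toy problem).
q = [0.0, 0.0]
alpha, epsilon = 0.1, 0.2
for _ in range(1000):
    a = random.randrange(2) if random.random() < epsilon else max(range(2), key=lambda i: q[i])
    reward = -cost(a)                # higher utility = lower weighted cost
    q[a] += alpha * (reward - q[a])  # one-shot decision per task, so no next state

print("Learned Q-values (local, offload):", q)
print("Chosen action:", "offload" if q[1] > q[0] else "local")
```

Under these assumed parameters the offloading action dominates, because the transmission delay and energy are small compared with the cost of local computation; reversing the uplink rate or CPU speeds flips the decision, which is exactly the per-task trade-off the learned policy must capture.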
Trinh, B., & Muntean, G. M. (2023). A Deep Reinforcement Learning-Based Offloading Scheme for Multi-Access Edge Computing-Supported eXtended Reality Systems. IEEE Transactions on Vehicular Technology, 72(1), 1254–1264. https://doi.org/10.1109/TVT.2022.3207692