Deep Reinforcement Learning-Based Deterministic Routing and Scheduling for Mixed-Criticality Flows


Abstract

Deterministic networking has recently drawn much attention for its ability to schedule flows with guaranteed timing. Combined with artificial intelligence (AI) technologies, it is a promising network technology for automated network configuration in the Industrial Internet of Things (IIoT). However, the IIoT imposes strict requirements, namely deterministic, bounded latency for time-critical applications, which pose significant challenges. This article incorporates deep reinforcement learning (DRL) into cycle-specified queuing and forwarding and proposes a DRL-based deterministic flow scheduler (Deep-DFS) to solve the joint deterministic flow routing and scheduling problem. Novel delay-aware network representations, action masking, and a criticality-aware reward function are proposed to make Deep-DFS more scalable and efficient. Simulation experiments evaluate the performance of Deep-DFS, and the results show that it schedules more flows than heuristic- and AI-based benchmark methods.
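The abstract mentions action masking as one of the techniques that make the scheduler scalable. The paper's exact formulation is not given here, but the general mechanism in DRL is to mark infeasible actions (e.g., candidate cycle/path assignments that would violate a flow's latency bound) and exclude them before action selection. A minimal sketch, assuming NumPy and hypothetical names; this is not the authors' implementation:

```python
import numpy as np

def masked_action_selection(q_values, mask):
    """Pick the highest-valued action among those the mask marks feasible.

    q_values: estimated value for each candidate (cycle, path) action.
    mask: boolean array; False marks actions that would, e.g., violate a
    flow's latency bound or overflow a queue in the chosen cycle.
    """
    # Infeasible actions get -inf so argmax can never select them.
    masked = np.where(mask, q_values, -np.inf)
    return int(np.argmax(masked))

# Toy scenario: 4 candidate scheduling actions for one flow;
# action 0 has the highest value but would break the deadline.
q = np.array([0.9, 0.2, 0.7, 0.4])
feasible = np.array([False, True, True, False])
print(masked_action_selection(q, feasible))  # → 2
```

Masking at selection time (rather than penalizing infeasible actions via the reward) shrinks the effective action space per step, which is typically what makes this style of scheduler tractable as the network grows.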

Citation (APA)

Yu, H., Taleb, T., & Zhang, J. (2023). Deep Reinforcement Learning-Based Deterministic Routing and Scheduling for Mixed-Criticality Flows. IEEE Transactions on Industrial Informatics, 19(8), 8806–8816. https://doi.org/10.1109/TII.2022.3222314
