Distributed reinforcement learning-based memory allocation for edge-PLCs in industrial IoT

12 Citations · 18 Mendeley Readers

This article is free to access.
Abstract

The exponential growth of devices in the industrial Internet of Things (IIoT) has a noticeable impact on the volume of data generated. Edge-cloud cooperation has been introduced into the IIoT to lessen the computational load on cloud servers and shorten data-processing time. General programmable logic controllers (PLCs), which have long played important roles in industrial control systems, are starting to gain the ability to process large amounts of industrial data and share the workload of cloud servers, transforming them into edge-PLCs. However, the continuous influx of multiple types of concurrent production data streams, set against the limited capacity of the built-in memory in PLCs, poses a huge challenge. The ability to reasonably allocate memory resources in edge-PLCs so as to ensure data utilization and real-time processing has therefore become one of the core means of improving the efficiency of industrial processes. In this paper, to cope with arrival data rates that change dynamically over time at each edge-PLC, we propose to optimize memory allocation in a distributed manner using Q-learning. Simulation experiments verify that the method effectively reduces the data loss probability while improving system performance.
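The abstract's idea can be illustrated with a minimal sketch: a tabular Q-learning loop in which a single edge-PLC repeatedly repartitions a fixed pool of memory blocks between two concurrent data streams whose arrival rates drift over time, with the reward penalizing dropped data. Everything here (the block count `MEM_BLOCKS`, the two-state workload indicator, the Bernoulli arrival model, and all hyperparameters) is a hypothetical toy model, not the paper's actual formulation.

```python
import random

# Toy model (assumption, not the paper's setup): an edge-PLC with
# MEM_BLOCKS memory blocks split between two concurrent data streams.
# State: which stream was busier in the last interval; action: blocks
# given to stream 0; reward: negative count of dropped data items.
MEM_BLOCKS = 8
ACTIONS = range(1, MEM_BLOCKS)          # 1..7 blocks for stream 0, rest for stream 1
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def step(action, rates):
    """Simulate one interval; return (next_state, reward)."""
    drops = 0
    for stream, cap in enumerate((action, MEM_BLOCKS - action)):
        # Bernoulli arrivals per slot; anything beyond capacity is lost.
        arrivals = sum(random.random() < rates[stream] for _ in range(MEM_BLOCKS))
        drops += max(0, arrivals - cap)
    next_state = 0 if rates[0] >= rates[1] else 1
    return next_state, -drops

random.seed(0)
q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
state = 0
for t in range(20000):
    # Arrival rates flip periodically (dynamic workload, as in the abstract).
    rates = (0.9, 0.1) if (t // 2000) % 2 == 0 else (0.1, 0.9)
    if random.random() < EPS:
        action = random.choice(list(ACTIONS))       # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
    nxt, reward = step(action, rates)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = nxt

# The learned policy should give the busier stream the larger share.
best_when_0_busy = max(ACTIONS, key=lambda a: q[(0, a)])
best_when_1_busy = max(ACTIONS, key=lambda a: q[(1, a)])
print(best_when_0_busy, best_when_1_busy)
```

In the paper's distributed setting, each edge-PLC would run its own copy of such a loop over its local streams; this sketch shows only the per-controller learning step.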

Cite

CITATION STYLE

APA

Fu, T., Peng, Y., Liu, P., Lao, H., & Wan, S. (2022). Distributed reinforcement learning-based memory allocation for edge-PLCs in industrial IoT. Journal of Cloud Computing, 11(1). https://doi.org/10.1186/s13677-022-00348-9
