The application of reinforcement learning (RL) in process control has garnered increasing research attention. However, much of the current literature focuses on training and deploying a single RL agent, and the application of multi-agent reinforcement learning (MARL) to process control has not been fully explored. This work aims to: (i) develop a unique RL agent configuration suitable for use in a MARL control system for multiloop control, (ii) demonstrate the efficacy of MARL systems in controlling multiloop processes that exhibit strong interactions, and (iii) conduct a comparative study of the performance of MARL systems trained with different game-theoretic strategies. First, we propose an RL agent configuration that combines the functionalities of a feedback controller and a decoupler in a control loop. Thereafter, we deploy two such agents to form a MARL system that learns to control a two-input, two-output system exhibiting strong interactions. After training, the MARL system shows effective control performance on the process. With further simulations, we examine how the MARL control system performs with increasing levels of process interaction and when trained with reward function configurations based on different game-theoretic strategies (i.e., pure cooperation and mixed strategies). The results show that the performance of the MARL system is weakly dependent on the reward function configuration for systems with weak to moderate loop interactions. The MARL system with mixed strategies appears to perform marginally better than MARL under pure cooperation in systems with very strong loop interactions.
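To make the game-theoretic reward configurations concrete, the sketch below simulates a hypothetical two-input, two-output (TITO) process with first-order discrete dynamics, where the off-diagonal gains set the loop-interaction strength. The reward functions are illustrative of the two strategy families named above (a pure-cooperation shared reward vs. a mixed-strategy reward in which each agent weights its own loop's error more heavily); the process model, the weighting parameter `alpha`, and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def step(y, u, K, a=0.9):
    """One discrete-time step of a hypothetical TITO process:
    y <- a*y + (1-a)*K@u, where K's off-diagonal terms couple the loops."""
    return a * y + (1 - a) * K @ u

def rewards(e, alpha=1.0):
    """Reward pair for the two agents given loop tracking errors e.
    alpha=1.0  -> pure cooperation: both agents receive the same shared reward.
    0.5<alpha<1 -> mixed strategy: each agent weights its own loop's error by
    alpha and the other loop's error by (1-alpha). (Illustrative only.)"""
    if alpha == 1.0:
        shared = -(e[0] ** 2 + e[1] ** 2)
        return shared, shared
    r1 = -(alpha * e[0] ** 2 + (1 - alpha) * e[1] ** 2)
    r2 = -(alpha * e[1] ** 2 + (1 - alpha) * e[0] ** 2)
    return r1, r2

# Strongly interacting gain matrix (off-diagonal 0.6) and naive inputs that
# ignore the coupling, so a steady-state tracking error remains.
K = np.array([[1.0, 0.6],
              [0.6, 1.0]])
setpoint = np.array([1.0, 0.5])
u = setpoint.copy()
y = np.zeros(2)
for _ in range(50):
    y = step(y, u, K)

e = setpoint - y
r_coop = rewards(e, alpha=1.0)   # identical rewards for both agents
r_mixed = rewards(e, alpha=0.8)  # each agent penalized mostly for its own loop
```

Under pure cooperation the two agents see identical rewards, so neither can distinguish which loop caused the error; the mixed configuration restores that attribution while still penalizing cross-loop disturbance, which is one intuition for its slight edge under very strong interactions.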
Yifei, Y., & Lakshminarayanan, S. (2023). Multi-agent reinforcement learning for process control: Exploring the intersection between fields of reinforcement learning, control theory, and game theory. Canadian Journal of Chemical Engineering, 101(11), 6227–6239. https://doi.org/10.1002/cjce.24878