Reinforcement Learning Testbed for Power-Consumption Optimization


Abstract

Common approaches to controlling a data-center cooling system rely on approximate system/environment models built upon knowledge of mechanical cooling and electrical and thermal management. These models are difficult to design and often lead to suboptimal or unstable performance. In this paper, we show how deep reinforcement learning techniques can be used to control the cooling system of a simulated data center. In contrast to common control algorithms, those based on reinforcement learning can optimize a system's performance automatically without the need for explicit model knowledge; instead, only a reward signal needs to be designed. We evaluated the proposed algorithm on the open-source simulation platform EnergyPlus. The experimental results indicate that we can achieve a 22% improvement compared to a model-based control algorithm built into EnergyPlus. To encourage the reproduction of our work as well as future research, we have also publicly released an open-source EnergyPlus wrapper interface (https://github.com/IBM/rl-testbed-for-energyplus) directly compatible with existing reinforcement learning frameworks.
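
The abstract notes that the released wrapper is directly compatible with existing reinforcement learning frameworks. Below is a minimal sketch of how such a Gym-style interface is typically driven; the environment id "EnergyPlus-v0", the registering package name gym_energyplus, and the random-action loop are illustrative assumptions rather than the repository's documented API, so consult the project README for the actual setup (EnergyPlus and weather-file paths are usually supplied via environment variables or configuration).

    # Sketch: drive an EnergyPlus-backed Gym environment for one episode.
    # A trained RL agent would replace the random-action placeholder.
    import gym
    import gym_energyplus  # assumed package that registers the environment

    env = gym.make("EnergyPlus-v0")  # assumed environment id
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()  # placeholder for a learned policy
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print("episode return:", total_reward)

Because the wrapper follows the standard observation/action/reward loop, any off-the-shelf deep RL implementation that accepts Gym environments can be plugged in with little additional glue code.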

Citation (APA)

Moriyama, T., De Magistris, G., Tatsubori, M., Pham, T. H., Munawar, A., & Tachibana, R. (2018). Reinforcement Learning Testbed for Power-Consumption Optimization. In Communications in Computer and Information Science (Vol. 946, pp. 45–59). Springer Verlag. https://doi.org/10.1007/978-981-13-2853-4_4
