Deep Reinforcement Learning in Buildings: Implicit Assumptions and their Impact


Abstract

As deep reinforcement learning (DRL) continues to gain interest in the smart building research community, there is a transition from simulation-based evaluations to deploying DRL control strategies in actual buildings. While the efficacy of a solution could depend on a particular implementation, there are common obstacles that developers must overcome to deliver an effective controller. Additionally, deployment in a physical building can invalidate some of the assumptions made during controller development: assumptions about sensor placement or equipment behavior can quickly come undone. This paper presents some of the significant assumptions made during the development of DRL-based controllers that could affect their operation in a physical building. Furthermore, a preliminary evaluation revealed that controllers developed under some of these assumptions can incur twice the expected costs when deployed in a building.

Citation (APA)

Prakash, A. K., Touzani, S., Kiran, M., Agarwal, S., Pritoni, M., & Granderson, J. (2020). Deep Reinforcement Learning in Buildings: Implicit Assumptions and their Impact. In RLEM 2020 - Proceedings of the 1st International Workshop on Reinforcement Learning for Energy Management in Buildings and Cities (pp. 48–51). Association for Computing Machinery, Inc. https://doi.org/10.1145/3427773.3427868
