A Future Direction of Machine Learning for Building Energy Management: Interpretable Models

Abstract

Machine learning (ML) algorithms are now part of everyday life, embedded in many technological devices. Their spectrum of uses is wide, and ML represents a revolution that may change almost every human activity. However, as with all innovations, it comes with challenges. One of the most critical is giving users an understanding of how a model's output relates to its input data. This property is called "interpretability", and it concerns explaining which features influence a model's output. Some algorithms have a simple, easy-to-understand relationship between input and output, while others are "black boxes" that return an output without telling the user what influenced it. This lack of knowledge creates a trust issue when the output is inspected by a human, especially when the operator is not a data scientist. The building and construction sector is beginning to adopt this innovation, and its scientific community is working to define best practices and models. This work develops an in-depth analysis of how interpretable ML models could be among the most promising future technologies for energy management in built environments.
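To make the interpretability distinction concrete, below is a minimal Python sketch (not from the paper): the feature names, synthetic data, and the use of scikit-learn's permutation importance are illustrative assumptions. It contrasts an intrinsically interpretable linear model, whose coefficients directly state each feature's influence, with a black-box ensemble, whose feature influence must be recovered post hoc.

```python
# Minimal, hypothetical sketch (not from the paper): interpretable model
# vs. post-hoc explanation of a black box. Feature names and data are
# illustrative assumptions for a building-energy setting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["outdoor_temp", "occupancy", "solar_irradiance"]
X = rng.normal(size=(500, 3))
# Hypothetical target: energy demand driven mostly by outdoor temperature.
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Interpretable model: coefficients directly state each feature's influence.
linear = LinearRegression().fit(X, y)
for name, coef in zip(features, linear.coef_):
    print(f"{name}: coefficient = {coef:.2f}")

# Black box: influence must be estimated post hoc, e.g. by permutation
# importance (how much shuffling a feature degrades predictions).
forest = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name}: permutation importance = {imp:.2f}")
```

In this sketch, both routes should rank outdoor_temp highest, but only the linear model exposes that relationship by construction; the forest requires an external explanation step, which is the gap the paper's survey of interpretable models addresses.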

Citation (APA)

Gugliermetti, L., Cumo, F., & Agostinelli, S. (2024, February 1). A Future Direction of Machine Learning for Building Energy Management: Interpretable Models. Energies. Multidisciplinary Digital Publishing Institute (MDPI). https://doi.org/10.3390/en17030700
