Poster: Boosting Interpretability of Non-Readable Deep Learning Forecasts: The Case of Buildings' Energy Consumptions Prediction

Abstract

In building energy management, it is important that energy consumption forecasts produced by neural networks (often referred to as black boxes) are backed by consistent explanations from the model itself. Although existing interpretability methods provide helpful information, they are not practical enough for energy managers. Specifically, managers are not given an explanation for a particular period in the forecasted energy consumption time series. We address this gap by proposing a novel interpretability use case: explaining the shapelet of a period's forecast based on similar patterns in the past energy consumption profile, which our forecasting model can verify. A second interpretability use case is presented to further explain the electricity consumption forecast: determining the importance of each exogenous variable in the prediction problem. The Temporal Fusion Transformer (TFT), a state-of-the-art, interpretable, and accurate forecasting model, is employed to address both use cases by analyzing the distribution of attention weights. The results of applying the use cases to our dataset are demonstrated.
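The second use case, deriving exogenous-variable importance from the model's internal weights, can be sketched in a few lines. This is an illustrative example only, not the authors' code: it assumes a TFT-style variable-selection weight tensor of shape (samples, timesteps, variables) whose last axis is normalized, and the variable names are hypothetical placeholders.

```python
import numpy as np

# Hypothetical variable-selection weights: (n_samples, n_timesteps, n_variables).
# In a real TFT these come from the variable selection networks; here we
# simulate them with random values normalized so each row sums to 1.
rng = np.random.default_rng(0)
raw = rng.random((32, 24, 3))
weights = raw / raw.sum(axis=-1, keepdims=True)

# Hypothetical exogenous variables for a building-energy forecasting task.
variable_names = ["outdoor_temperature", "occupancy", "hour_of_day"]

# Average the selection weights over samples and timesteps to obtain a
# global importance score per exogenous variable, then renormalize.
importance = weights.mean(axis=(0, 1))
importance = importance / importance.sum()

for name, score in sorted(zip(variable_names, importance), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

In practice, libraries implementing the TFT expose comparable aggregated weights directly (for example, an interpretation output per prediction), but the averaging step above captures the basic idea of turning attention-style weights into a variable-importance ranking.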

Citation (APA)

Samimi, R., Alyousef, A., Baranzini, D., & De Meer, H. (2022). Poster: Boosting Interpretability of Non-Readable Deep Learning Forecasts: The Case of Buildings’ Energy Consumptions Prediction. In e-Energy 2022 - Proceedings of the 2022 13th ACM International Conference on Future Energy Systems (pp. 434–435). Association for Computing Machinery, Inc. https://doi.org/10.1145/3538637.3538754
