Local Interpretable Explanations of Energy System Designs


Abstract

Optimization-based design tools for energy systems often require a large set of parameter assumptions, e.g., about technology efficiencies and costs or the temporal availability of variable renewable energies. Understanding the influence of all these parameters on the computed energy system design via direct sensitivity analysis is not easy for human decision-makers, since they may become overloaded by the multitude of possible results. We thus propose transferring an approach from explaining complex neural networks, so-called local interpretable model-agnostic explanations (LIME), to this related problem. Specifically, we use variations of a small number of interpretable, high-level parameter features and sparse linear regression to obtain the most important local explanations for a selected design quantity. For a small bottom-up optimization model of a grid-connected building with photovoltaics, we derive intuitive explanations for the optimal battery capacity in terms of different cloud characteristics. For a larger application, namely a national model of the German energy transition until 2050, we relate path dependencies of the electrification of the heating and transport sectors to correlation measures between renewables and thermal loads. Compared to direct sensitivity analysis, the derived explanations are more compact and robust and thus more interpretable for human decision-makers.
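The LIME-style procedure summarized in the abstract can be sketched in a few steps: perturb a small set of interpretable, high-level features around a base scenario, evaluate the design model for each perturbation, and fit a sparse linear (Lasso) surrogate whose coefficients serve as the local explanation. The sketch below is a minimal illustration under assumed names: `energy_model` is a hypothetical stand-in for the paper's bottom-up optimization model (returning an optimal battery capacity from cloud-related features), and the feature values, noise scale, and regularization strength are illustrative choices, not values from the paper.

```python
import random

def energy_model(cloud_duration, cloud_depth, wind_speed):
    # Hypothetical stand-in for the bottom-up optimization model:
    # "optimal battery capacity" as a mildly nonlinear function of
    # high-level weather features. wind_speed is deliberately irrelevant,
    # so a good sparse explanation should drop it.
    return 4.0 + 2.0 * cloud_duration + 0.3 * cloud_duration * cloud_depth

def soft_threshold(rho, lam):
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, iters=300):
    # Plain coordinate-descent Lasso (no intercept; inputs pre-centered).
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Residual with feature j's contribution removed.
            pred_wo_j = [sum(w[k] * X[i][k] for k in range(d) if k != j)
                         for i in range(n)]
            rho = sum(X[i][j] * (y[i] - pred_wo_j[i]) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

def lime_explain(base, spread=0.5, n_samples=200, lam=2.0, seed=0):
    rng = random.Random(seed)
    # 1. Perturb the interpretable features around the base scenario.
    samples = [[b + rng.gauss(0.0, spread) for b in base]
               for _ in range(n_samples)]
    ys = [energy_model(*s) for s in samples]
    # 2. Center the data, then fit a sparse linear surrogate locally.
    y_mean = sum(ys) / len(ys)
    Xc = [[s[j] - base[j] for j in range(len(base))] for s in samples]
    yc = [y - y_mean for y in ys]
    # 3. The Lasso weights are the local explanation.
    return lasso_cd(Xc, yc, lam)

# Explain the design quantity at a base scenario (values are illustrative).
weights = lime_explain(base=[3.0, 1.0, 5.0])
```

Here `weights[0]` (cloud duration) dominates, `weights[1]` (cloud depth) contributes via the interaction term, and the Lasso penalty shrinks the irrelevant wind feature toward zero, which is what makes the explanation compact.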

Citation (APA)

Hülsmann, J., Barbosa, J., & Steinke, F. (2023). Local Interpretable Explanations of Energy System Designs. Energies, 16(5). https://doi.org/10.3390/en16052161
