Why we need a testbed for black-box optimization algorithms in building simulation

ISSN: 2522-2708

Abstract

When applying black-box optimization (BBO) algorithms, there seems to be a lack of guidelines on which algorithm to select and how to properly tune its algorithmic parameters. Many benchmarks are conducted either on large sets of mathematical test functions or on a few building simulation problems. This inhibits us from drawing generalizable conclusions valid over the entire domain of building energy optimization (BEO). As a consequence, we argue that BEO urgently needs a unified testbed for consistently benchmarking and researching BBO algorithms. We illustrate our point by conducting a Fitness Landscape Analysis (FLA) of several building simulation problems, using EnergyPlus, a solar potential simulator and Fast Fluid Dynamics, and comparing them to common mathematical test functions. For a number of FLA metrics we demonstrate that the building simulation problems differ significantly from the test functions. Furthermore, by benchmarking a number of BBO algorithms separately on a BEO problem set and on a test function set, we show that algorithm performance depends on the problem set, leading to the conclusion that the domain of building simulation requires a dedicated testbed to facilitate the application of black-box optimization.
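The abstract does not list the specific FLA metrics used in the paper. As an illustration only, below is a minimal Python sketch of one widely used FLA metric, fitness distance correlation (FDC), evaluated on the sphere test function; all names, the sample size, and the choice of test function are assumptions for the example and are not taken from the paper.

import numpy as np

def fitness_distance_correlation(samples, fitnesses, x_opt):
    # Fitness distance correlation: correlation between objective values
    # and distances to a reference (best known) optimum.
    # samples   : (n, d) array of evaluated design points
    # fitnesses : (n,) array of objective values (minimization assumed)
    # x_opt     : (d,) reference optimum
    d = np.linalg.norm(samples - x_opt, axis=1)        # distance of each sample to the optimum
    f = np.asarray(fitnesses, dtype=float)
    cov_fd = np.mean((f - f.mean()) * (d - d.mean()))  # covariance of fitness and distance
    return cov_fd / (f.std() * d.std())                # near +1: fitness decreases toward the optimum

# Toy usage on the sphere function, a common mathematical benchmark (optimum at the origin)
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(200, 4))
f = np.sum(X**2, axis=1)
print(fitness_distance_correlation(X, f, np.zeros(4)))

For a real BEO problem the objective values would come from simulation runs (e.g. EnergyPlus) rather than an analytic function, and the reference optimum would typically be the best solution found so far.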

Citation (APA)

Waibel, C., Wortmann, T., Mavromatidis, G., Evins, R., & Carmeliet, J. (2019). Why we need a testbed for black-box optimization algorithms in building simulation. In Building Simulation Conference Proceedings (Vol. 4, pp. 2909–2917). International Building Performance Simulation Association.
