Many real-world optimisation problems, such as hyperparameter tuning in machine learning or simulation-based optimisation, can be formulated as the optimisation of expensive-to-evaluate black-box functions. A popular approach to such problems is Bayesian optimisation, which builds a response surface model from the data collected so far and uses the mean and uncertainty predicted by the model to decide what information to collect next. In this article, we propose a generalisation of the well-known Knowledge Gradient acquisition function that allows it to handle constraints. We empirically compare the new algorithm with four other state-of-the-art constrained Bayesian optimisation algorithms and demonstrate its superior performance. We also prove theoretical convergence in the infinite-budget limit.
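The paper's constrained Knowledge Gradient is more involved than can be shown here, but the loop the abstract describes (fit a surrogate, use its mean and uncertainty to pick the next evaluation) can be illustrated with a simpler, classic constrained acquisition: expected improvement weighted by the probability of feasibility. The sketch below is pure Python with a toy RBF-kernel Gaussian process on a 1-D problem; the function names, the toy objective, and the constraint are illustrative assumptions, not taken from the paper.

```python
import math

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel for 1-D inputs (illustrative lengthscale).
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(K, y):
    # Solve K v = y by Gauss-Jordan elimination with partial pivoting
    # (fine for the small dense systems a sketch like this produces).
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(K)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def gp_posterior(X, y, xs, noise=1e-4):
    # Zero-mean GP posterior mean and variance at each point in xs.
    n = len(X)
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)
    post = []
    for x in xs:
        k = [rbf(x, xi) for xi in X]
        mu = sum(ki * ai for ki, ai in zip(k, alpha))
        v = solve(K, k)
        var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, v))
        post.append((mu, max(var, 1e-12)))
    return post

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def constrained_bo(f, c, X0, n_iter=8):
    # Maximise f(x) subject to c(x) >= 0 on [0, 1]. X0 must contain at
    # least one feasible point. Separate GPs model objective and
    # constraint; acquisition = expected improvement * P(feasible).
    grid = [i / 200 for i in range(201)]
    X = list(X0)
    yf = [f(x) for x in X]
    yc = [c(x) for x in X]
    for _ in range(n_iter):
        post_f = gp_posterior(X, yf, grid)
        post_c = gp_posterior(X, yc, grid)
        best = max(y for y, cc in zip(yf, yc) if cc >= 0.0)

        def acq(i):
            mu, var = post_f[i]
            s = math.sqrt(var)
            mu_c, var_c = post_c[i]
            z = (mu - best) / s
            ei = (mu - best) * norm_cdf(z) + s * norm_pdf(z)
            return ei * norm_cdf(mu_c / math.sqrt(var_c))

        x_next = grid[max(range(len(grid)), key=acq)]
        if x_next in X:  # nothing new to learn on this grid
            break
        X.append(x_next)
        yf.append(f(x_next))
        yc.append(c(x_next))
    # Recommend the best feasible point observed so far.
    return max((y, x) for x, y, cc in zip(X, yf, yc) if cc >= 0.0)[1]

if __name__ == "__main__":
    # Toy problem: unconstrained optimum at x = 0.2, but the constraint
    # x >= 0.5 pushes the constrained optimum to x = 0.5.
    best = constrained_bo(lambda x: -(x - 0.2) ** 2,
                          lambda x: x - 0.5,
                          [0.0, 0.5, 1.0])
    print(best)
```

The Knowledge Gradient approach the paper generalises differs from this acquisition in that it values a sample by how much it is expected to improve the final recommendation, rather than by immediate improvement at the sampled point.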
Ungredda, J., & Branke, J. (2024). Bayesian Optimisation for Constrained Problems. ACM Transactions on Modeling and Computer Simulation, 34(2). https://doi.org/10.1145/3641544